Data quality: The key ingredient in a data-driven strategy
The last fifteen years have been a wild ride in sales and marketing. Fifteen years ago, if you were lucky, you had a CRM platform and maybe a little marketing automation. Today, countless apps serve sales and marketing, managing every imaginable task—if there’s a process, there’s an app for it. But there’s one little problem: every one of those apps needs clean data to work correctly. Achieving that data quality is your department’s job; it’s not the app developer’s problem.
It’s pretty easy to clean and enrich individual lead records, but the problem is more significant than that.
Want to route leads?
Your routing protocols could be based on any number of fields in the record. Do you determine who gets the lead based on the account owner, region, postal code, annual revenue, number of employees, or buying group?
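To make this concrete, here’s a minimal sketch of field-based routing. The field names (`account_owner`, `annual_revenue`, `region`) and the rules themselves are hypothetical examples, not a real routing configuration—and notice that every rule silently depends on those fields being populated and accurate.

```python
# Hypothetical sketch of rule-based lead routing.
# Field names and rule thresholds are illustrative only.

def route_lead(lead: dict) -> str:
    """Return an owner or queue for a lead based on record fields."""
    # Named accounts trump everything: keep the existing account owner.
    if lead.get("account_owner"):
        return lead["account_owner"]
    # Large companies go to a dedicated enterprise team.
    if lead.get("annual_revenue", 0) >= 100_000_000:
        return "enterprise-team"
    # Otherwise route by region; anything unmatched falls to a catch-all queue.
    region_owners = {"east": "rep-east", "west": "rep-west"}
    return region_owners.get(lead.get("region", ""), "unassigned-queue")

print(route_lead({"region": "east"}))               # rep-east
print(route_lead({"annual_revenue": 250_000_000}))  # enterprise-team
```

If `region` is blank or `annual_revenue` is wrong, the lead lands in the wrong hands—which is exactly why routing is a data quality problem before it’s a workflow problem.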
Is lead attribution your goal?
You likely have many opportunities in your pipeline that have only one contact, or worse, none at all. If even half a dozen of your opportunities look like that, you already know your attribution efforts won’t be successful. Your data-driven sales and marketing applications are only as good as the data that’s driving them. You need a strategic approach to managing your data quality to get the best results.
Data gone wild: data quality to tame the beast
You get data from many different sources, and each source probably presents the data in various ways. Maybe one source provides you with exact employee counts, while another only gives you employee count ranges. Then, over time, as business requirements change, teams make chop-and-dice decisions about which fields are necessary and which are unimportant. Is Washington D.C. in the south, or is it part of the east coast? Did one manager insist that Oklahoma was part of the southwest territory, and another manager placed it firmly in the midwest? Things like this happen all the time, in every company. And eventually, you wind up with inconsistent, unusable data.
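Taming that inconsistency starts with normalization: mapping every source’s values onto one canonical scheme. Here’s a small sketch, assuming hypothetical employee-count buckets and a hypothetical territory map—the point is that the decision lives in one place instead of in each manager’s head.

```python
# Hypothetical sketch: normalizing inconsistent source values.
# The buckets and the territory map are illustrative examples.

EMPLOYEE_BUCKETS = [(1, 50, "1-50"), (51, 200, "51-200"), (201, 1000, "201-1000")]

def bucket_employees(value) -> str:
    """Map an exact count or a pre-bucketed range onto one set of ranges."""
    if isinstance(value, str):  # this source already provides a range
        return value.strip()
    for low, high, label in EMPLOYEE_BUCKETS:
        if low <= value <= high:
            return label
    return "1000+"

# One canonical territory map settles the D.C. and Oklahoma debates in code.
STATE_TO_TERRITORY = {"DC": "east", "OK": "southwest", "FL": "east"}

print(bucket_employees(120))      # exact count from source A -> "51-200"
print(bucket_employees("1-50"))   # range from source B passes through
print(STATE_TO_TERRITORY["OK"])   # one answer, not one per manager
```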
Another symptom suggesting a lack of data strategy is poor data quality. For example, one set of data places an individual prospect in Florida, while another set places that same prospect at the global office in Tokyo. Where are they, Florida or Japan? How can Sales Ops correctly assign or even market to this lead? It’s up to you to ensure that the data you’re sending out is accurate, and your data strategy enables you to avoid mismatched data that’ll make everyone question the accuracy of all your data.
Data quality: the lifeblood of every company
Data isn’t used just in sales and marketing. It’s the critical link between departments, platforms, and apps, and it needs to work for everyone. RevOps automation helps you solve even more issues, starting with managing different data types and formats across multiple sources. It works by aggregating data from numerous sources and ensuring that it’s unified, standardized, and transformed to be functional and consistent. From there, teams can disseminate that refined data to all the systems and platforms that need it. When you orchestrate data across your organization, you can be sure that everyone’s looking at the same base information—no silos. It’s not just sales and marketing that need the information; product, finance, and executive teams also rely on it. Ensuring that the data’s both clean and consistent is critical.
Deduplication: a data strategy for when you see double
It’s inevitable. When you’re getting leads from multiple sources, some of them are going to be duplicates. Even if you’re sourcing only from your website, there’s a possibility that a contact will fill out a web form more than once—and slightly differently. Suppose someone named Bob Jones downloads a white paper and then, a week later, registers for a webcast as Robert Jones. We can easily see it’s the same person, but the database can’t. You need a strategy for what to do with those duplicates once you find them.
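One common way to teach the database what we see at a glance is a normalized match key. This is only a sketch of the idea—real deduplication platforms use much richer fuzzy matching—and the nickname table here is a tiny hypothetical example.

```python
# Hypothetical sketch: catching near-duplicate contacts with a match key.
# Real matching is far richer; this only illustrates the principle.

NICKNAMES = {"bob": "robert", "bill": "william"}  # tiny illustrative alias table

def match_key(first: str, last: str, email: str) -> tuple:
    """Build a normalized key so trivially different records collide."""
    first = first.strip().lower()
    first = NICKNAMES.get(first, first)  # "Bob" and "Robert" become the same
    return (first, last.strip().lower(), email.strip().lower())

a = match_key("Bob", "Jones", "bjones@example.com")
b = match_key("Robert", "Jones", " BJones@Example.com ")
print(a == b)  # True: now the database can see it's the same person
```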
Deduplication is an essential step in the practice of data hygiene. Orchestration gives you the power to create a complex, ongoing deduplication strategy across data objects and sources. But you have to choose which record is going to make the cut and become the master. Using the Openprise RevOps Automation Platform, you can decide how you want to approach dedupes. Is “the winner”:
- Any record from a specific owner?
- The oldest of the duplicate records?
- The most recently updated record?
And there’s more to consider: given the multiple addresses we just discussed for a single contact, you’ll also need to decide whether to retain all the possible address information or keep just a single address.
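Survivor rules like the ones above can be expressed as an ordered policy. Here’s a minimal sketch, assuming hypothetical field names (`owner`, `updated`) and an illustrative rule order: prefer owned records, then keep the most recently updated one.

```python
# Hypothetical sketch: picking the surviving "master" among duplicates.
# Field names and rule order are illustrative, not a real Openprise config.
from datetime import date

def pick_master(dupes: list) -> dict:
    # Rule 1: prefer records that already have an owner.
    owned = [r for r in dupes if r.get("owner")]
    candidates = owned or dupes
    # Rule 2: among those, keep the most recently updated record.
    return max(candidates, key=lambda r: r["updated"])

dupes = [
    {"id": 1, "owner": None,  "updated": date(2021, 3, 1)},
    {"id": 2, "owner": "Ana", "updated": date(2020, 6, 5)},
    {"id": 3, "owner": "Ana", "updated": date(2021, 1, 9)},
]
print(pick_master(dupes)["id"])  # 3: owned, and newest of the owned records
```

Note that record 1 is the newest overall, but the ownership rule outranks recency—which is exactly the kind of stakeholder decision you have to make explicit before merging anything.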
The deduplication process means thinking through all the variables: what are the custom objects, opportunities, notes, cases, campaigns, and other information specific to your company? You might lose critical information when you’re simply trying to dedupe if you don’t have complex deduplication rules across data objects. Or worse, your deduplication efforts might grind to a halt because you can’t reach a consensus to satisfy all the stakeholders. Openprise enables your team to set up customized deduplication processes for different groups and disparate rules across your entire organization. That way, the only thing you lose is truly duplicate information.
Trust, but verify
The Russian proverb “trust, but verify” may not be the best advice for interpersonal relationships, but it’s essential in technology, where outcomes are the goal. Even when you have a high level of confidence in the systems and tools you’re working with, you never want to push anything live until you’ve checked, tested, and validated it. Never is this more true than with deduplication, where simple errors can accidentally obliterate data and existing processes forever. When you’re working on a deduplication project, check a subset of your data and validate that you’re getting the results you want before committing your merges to the database of record.
And before you do anything, create an emergency backup of your data. “Just in case” is a crucial part of any data strategy. Most importantly, remember that it’s never enough to collect as much data as you can. The person with the most data does not win. The person or team with the best data quality wins—and they win for the whole organization.
If you’d like to learn more about how we can help with your data quality concerns, check out our Master Class on Data Quality: Common, but not obvious, data quality issues we solved that drove us nuts, where we cover data hygiene issues we encountered at Openprise and why getting it right is critical for effective lead routing and much more.