Marketing automation technology has been growing in popularity, with many good products such as Marketo, Eloqua, Pardot, and HubSpot. The big promise of marketing automation is lead nurturing and personalized engagement: the ability for marketers to target specific campaigns at different users, based on their profiles and activities, so that each interaction moves a lead further down the purchase funnel.
Sounds wonderful, but most marketers who pay $50,000 or more a year for these powerful technologies are simply using them as expensive SPAM machines. Does the following sound familiar?
- Pay $ to buy leads
- Run new leads through a fixed N-week campaign
- Unsubscribe, unsubscribe, unsubscribe…
- Run more campaigns on the remaining leads
- Unsubscribe, unsubscribe, unsubscribe…
- Pay $ to house those unsubscribed leads year after year
- Rinse and repeat
This is a game of volume that is both expensive and ineffective, not to mention the growing challenge of privacy and double-opt-in laws around the world. Most marketers would benefit from a more precise lead nurturing strategy focused on fewer new leads, lower attrition, and higher conversion.
That means you need to stop sending out generic SPAM. You need to stop wearing out your list and instead focus on more precise campaigns. There are two keys to achieving that: the ability to segment your leads, and the ability to deliver relevant content optimized for that segmentation. Let’s talk about the first problem today: how to segment your leads.
There are many different dimensions you can segment your leads on. Some of the most common are:
- Job seniority
- Job function
- Company size
- Purchase lifecycle
- Purchase type
- Purchase history
- Interaction history
If you have these types of segmentation data about your leads, then you can create precise and relevant campaigns that won’t tire out your leads. So where do you get that segmentation data?
1. From lead vendors – most leads you buy will come with some segmentation information
2. From your sales team – your sales team manually adds this information
3. From enrichment services – you can pay to have your leads enriched and cleaned against a vendor’s database
4. From manual research
5. From internal data – your various systems hold data that can be correlated
6. From data you already have – you can decipher and infer segmentation data from data you already own
No single approach above can provide all the segmentation data you need. There is simply no perfect lead database in the sky, and some of the data you need is your own. Approaches 1 and 2 are givens. Experiences with approach 3 vary widely. Approach 4 yields very high-quality marketing data, but it is too expensive to scale beyond the smallest databases. Very few marketers take advantage of approaches 5 and 6, but with a little bit of automation help, companies can gain a lot of rich segmentation data by processing what they already own. Let’s take a look at one very common example.
Produce job seniority and job function segmentation from job title
Job title data is useless unless you can distill the endless combinations of words into useful segmentation data. In the blog post Do You Really Know Who Your Customers Are?, we showed how to use our Word Frequency and Rank Analytics to get a quick sense of how job titles break down in your database. Now we will see how to turn that free-form data into structured segmentation data.
Enter the powerful combination of Data Rules and Reference Data.
In Openprise, Data Rules automate data manipulation tasks and are based on simple IF-THEN templates. Reference Data are look-up lists and mappings designed to work in conjunction with Data Rules to scale data manipulation tasks. You can read more about Reference Data in the blog post It Takes Data To Clean Data.
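To make the IF-THEN idea concrete, here is a minimal sketch in Python. This is not Openprise’s actual implementation; the reference data mapping and function names are hypothetical, invented only to illustrate how a look-up list drives a simple replacement rule.

```python
# Hypothetical reference data: raw job-title tokens -> normalized form.
TITLE_NORMALIZATION = {
    "vp": "Vice President",
    "sr.": "Senior",
    "mgr": "Manager",
    "dir": "Director",
}

def apply_normalization_rule(title: str) -> str:
    """IF a token appears in the reference data, THEN replace it."""
    normalized = [
        TITLE_NORMALIZATION.get(token.lower().strip(","), token)
        for token in title.split()
    ]
    return " ".join(normalized)

print(apply_normalization_rule("Sr. VP of Sales"))
# -> Senior Vice President of Sales
```

The rule template stays the same for every record; scaling to new title variants only requires adding entries to the reference data, not writing new logic.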
The ingredients we need for this automation recipe are:
- Normalization and Simple Replacement Rule Template
- Job Seniority Reference Data
- Job Function Reference Data
The problem with job titles is that they come in every shape and form you can imagine, and that is before counting trendy, creative titles like “Code Ninja” and “Chief People Officer”. The key is to ignore the full title and instead look for common keywords that accurately imply seniority and job function. For example, “Vice President” implies a seniority of “executive”, and “Architecture” implies an engineering or IT job function. There are complexities around industry variations, whole-word vs. partial-word matching, and combination words, which is why a ready-to-use reference data set that already accounts for these complexities is extra helpful.
This next screenshot shows how to configure such a rule in Openprise; it is a simple task that takes about 30 seconds. Since we are looking for keywords within a longer text string, make sure you use the matching method “Contains”.
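For readers who want to see the keyword approach spelled out, here is a hedged Python sketch, again not Openprise’s code. The keyword lists are tiny illustrative stand-ins for the real Job Seniority and Job Function Reference Data, and the matching uses the same “Contains” semantics described above.

```python
# Hypothetical reference data: keyword -> segment label.
SENIORITY_KEYWORDS = {
    "chief": "Executive",
    "vice president": "Executive",
    "vp": "Executive",
    "director": "Director",
    "manager": "Manager",
}

FUNCTION_KEYWORDS = {
    "architect": "Engineering/IT",
    "engineer": "Engineering/IT",
    "marketing": "Marketing",
    "sales": "Sales",
    "finance": "Finance",
}

def segment(title: str, keywords: dict, default: str = "Other") -> str:
    """Return the first segment whose keyword the title contains.

    Note: naive substring matching ignores the whole-word vs. partial-word
    complexities mentioned above; a production reference data set handles
    those cases.
    """
    lowered = title.lower()
    for keyword, label in keywords.items():
        if keyword in lowered:
            return label
    return default

title = "VP of Marketing Operations"
print(segment(title, SENIORITY_KEYWORDS))  # -> Executive
print(segment(title, FUNCTION_KEYWORDS))   # -> Marketing
```

Two passes over the same title field, one with each reference list, yield the two segmentation columns shown in the result below.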
Here is the end result of how two simple rules can create the much-needed segmentation on job seniority and function:
Does the machine always produce the right results? No. Our experience shows it is about 80% accurate, and it will get even better as we continue to improve the matching capabilities and the reference data. Does any method listed above give you 100% accurate segmentation data? Not by a long shot. But isn’t it worth 10 minutes of your time to generate 80% accurate segmentation data on your leads?
Here is an illustration of how the data quality improved instantly. The first pie chart below shows the breakdown of the top 10 job functions in the database before the segmentation rule ran. As you can see, there is very little usable data, as 98.3% of titles fall into “Others”.
Now compare that to the breakdown after the segmentation exercise: 79.5% of the database is now segmented into the top 10 job functions. That is useful data.
So with a little effort in segmentation, you can stop spamming, and start slicing your SPAM? 🙂