Defining Deep Learning, Part 2: How We Apply It
So…now we’re all experts in how Deep Learning / machine learning algorithms operating on a neural network handle the task of interpreting difficult data, thanks to Part 1 of this series, right?
(Don’t worry, there’s no test. Just nod and read on…)
Facial recognition was the example we used to show how a Deep Learning platform can train itself to model meaning out of masses of unlabeled data. But what value does it deliver for B2B marketing?
That’s simple: all marketing relies on targeting, and targeting relies on data.
Take a look at the doubtlessly reputable product advertised at right. Who does it target? Old folks, with old folk ailments. So it’s leveraging a targeted message – a poster full of promises – that’s based on a kind of data: the common wisdom that old people develop rheumatism.
But where and when to reach them would be other datasets that would come in handy. So Hamlin’s circuit-riding huckster – um, salesman – acquires that data by asking around at every town he hits.
The poster then goes up in the local apothecary or general store, where the senior citizenry gather around the cracker barrel to jaw about their aches and pains and how that durned Grover Cleveland will never get my vote!
Scale that model up a millionfold or so in reach and complexity, and that’s how modern Deep Learning-driven social targeting works. Right?
Absolutely. Sort of.
Putting personas on a pedestal
Marketing and advertising depend, now more than ever, on building precise buyer personas to empower accurate targeting. We’ve discussed the importance of B2B buyer personas – and the problems confronting marketers in using what are essentially two-dimensional, manually-built personas – in prior articles.
Here’s a quick set of facts to remind us about just how important they are:
- 81% of buyers will pay a premium for industry experience and industry-specific solutions – persona-based content, in other words.
- Buyers are 48% more likely to consider solution providers that personalize their marketing to address their specific business issues.
- Buyer personas are the second-most-popular criterion for segmenting content among B2B marketers, with 40% of B2B marketers segmenting content by persona.
Deep Learning helps us build those better B2B buyer personas. Here’s how.
Targeting by topic modeling
There’s a parallel between that snake oil salesman and Deep Learning (no, it’s not the snake oil). It’s that each of them was able to sift through huge amounts of data and learn how to target their audience.
A Deep Learning platform searches databases, websites and social feeds to find commonalities, exactly as he worked his sales route. And the longer it does it, the better it gets at the job.
- For the traveling salesman, that data consisted of his experience of each of these one-horse burgs he visited, and of the people therein. He learned to process it all to recognize the right opportunities to hang his posters and shill his Wizard Oil.
- For Deep Learning for B2B, that data is in the form of text. A lot of it. An entire Internet’s worth, possibly.
The challenge? Processing that much data. That’s where topic modeling comes in.
A Deep Learning system uses topic modeling to find the commonalities about an audience, or audiences, that it can then use to build actionable buyer personas.
It’s a type of data mining for identifying patterns in a corpus: a comprehensive collection of texts. And a corpus could be (conceivably) as big as the entire web. In its own way, a search engine like Google is crawling through a web-sized corpus every day.
Topic modeling finds related word clusters (the “topics”) in a corpus or, as one person put it, “a recurring pattern of co-occurring words.” The marketer supplies the system with keywords they feel belong in a cluster: for example, if they’re out to identify and target Marketing Operations personnel, those keywords might include marketing automation, Eloqua, HubSpot, Marketo, CRM, and so on.
Various statistical probability models are used to identify these clusters. One, Latent Dirichlet Allocation (LDA), is notable because it’s often explained using a highlighting analogy: since each document is usually a mixture of topics, the AI tags each word it finds in the documents it searches and attributes that word to one of the document’s topics.
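To make the idea concrete, here is a minimal, toy sketch of LDA’s core mechanism (collapsed Gibbs sampling) on a made-up four-document corpus. The documents, keywords, topic count, and priors are all illustrative assumptions, not a production implementation; the point is only to show how repeatedly re-assigning each word to a topic recovers clusters of co-occurring words.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy corpus: each "document" is a short list of tokens (hypothetical data).
docs = [
    ["marketing", "automation", "hubspot", "marketo", "crm"],
    ["crm", "eloqua", "marketing", "campaign", "automation"],
    ["python", "pandas", "data", "analysts", "sql"],
    ["data", "sql", "analysts", "dashboard", "python"],
]

K = 2                    # number of topics to look for
ALPHA, BETA = 0.1, 0.01  # Dirichlet priors
vocab = sorted({w for d in docs for w in d})

# Randomly assign each word occurrence to a topic, tracking counts.
assign = [[random.randrange(K) for _ in d] for d in docs]
doc_topic = [[0] * K for _ in docs]          # topic counts per document
topic_word = [defaultdict(int) for _ in range(K)]  # word counts per topic
topic_total = [0] * K
for i, d in enumerate(docs):
    for j, w in enumerate(d):
        k = assign[i][j]
        doc_topic[i][k] += 1
        topic_word[k][w] += 1
        topic_total[k] += 1

# Collapsed Gibbs sampling: re-sample each word's topic from its
# conditional probability given every other assignment.
for _ in range(200):
    for i, d in enumerate(docs):
        for j, w in enumerate(d):
            k = assign[i][j]
            doc_topic[i][k] -= 1; topic_word[k][w] -= 1; topic_total[k] -= 1
            weights = [
                (doc_topic[i][t] + ALPHA)
                * (topic_word[t][w] + BETA)
                / (topic_total[t] + BETA * len(vocab))
                for t in range(K)
            ]
            k = random.choices(range(K), weights=weights)[0]
            assign[i][j] = k
            doc_topic[i][k] += 1; topic_word[k][w] += 1; topic_total[k] += 1

# The top words per topic are the recovered clusters of co-occurring words.
for t in range(K):
    top = sorted(topic_word[t], key=topic_word[t].get, reverse=True)[:3]
    print(f"topic {t}: {top}")
```

On a corpus this cleanly split, the sampler tends to separate the marketing-ops words from the data-analyst words into different topics, which is the “recurring pattern of co-occurring words” the definition above describes.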
In applying topic modeling to B2B targeting, that corpus of documents usually starts with a marketer’s own customer data. Our AI, which leverages Deep Learning rather than the LDA approach above, uses topic modeling to find like-minded individuals and group them according to their interests, their accounts, even their purchase intent.
Those groupings are then used by the system to build highly-detailed personas that are applied to prospecting and marketing, as seen in the image below. Being generated by an AI, those personas are updated and refined automatically.
Very often, though, a marketer’s own data isn’t enough to create personas with the depth of detail that optimized targeting and prospecting require. Their in-house data may be out of date, or they may never have captured enough (usable) information in the first place.
So a Deep Learning platform may then go afield, searching the unstructured, unlabeled data on the web to help build its fully-dimensionalized personas, often drawing on third-party data from sources like LinkedIn or Facebook.
The amount of Big Data we’re talking about to construct these personas would be impossible to analyze for human beings. The AI, however, can sort and score it all without breaking a sweat (or whatever AIs do when they’re stressed), uncovering connections that would be invisible to human eyes, and refining its approach over time.
Lead generation becomes an automated process, freeing up marketing and sales teams to do what they’re good at.
Granularity and ABM
One hiccup in topic modeling? Off-kilter terms surfacing as the AI attempts to build a persona. Understanding why this happens also demonstrates how Deep Learning can be configured to address account-based marketing (ABM), where it proves its ultimate value in B2B.
In topic modeling, coherence is a desired outcome: the topics a system identifies should make sense to us. But sometimes they don’t.
One anecdotal example? In trying to build personas for data analysts, “swimming” surfaced as one of the topics the AI tagged. Why? The phrase “swimming in data.”
This happens because a Deep Learning system is too smart to conduct a search based on just the original keywords we fed it. It’ll spot patterns of related words that may go outside of those keywords, patterns that other systems (and humans) wouldn’t be able to uncover.
In the case of our data analyst query, “swimming in data” is a widely used cliché about what data analysts have to do (just Google it). In searching web, social or other third-party data, the AI may have found enough co-occurrences of “data,” “analysts” and “swimming” (not to mention “data lake,” to further muddy the, uh, waters) to give swimming undeserved weight in its modeling. That’s because the original topic model it was provided didn’t include parameters that would have made it obvious swimming wasn’t a relevant topic.
This illustrates how the size of the topic space you define can affect the precision of the results you receive from Deep Learning. The more definition (that is, the more defining keywords) the system is given at the outset, the smaller the topic space it will work within.
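A toy sketch makes the point. The documents, keyword sets, and match threshold below are all hypothetical; the example simply scores candidate documents by how many seed keywords they share, showing how a richer keyword set shrinks the topic space enough to exclude the spurious “swimming” match.

```python
# Hypothetical candidate documents, represented as sets of tokens.
docs = {
    "analyst_profile": {"data", "analysts", "sql", "dashboards", "etl"},
    "sports_article":  {"swimming", "freestyle", "pool", "laps"},
    "cliche_post":     {"swimming", "in", "data", "analysts"},
}

def matches(keywords, min_hits):
    """Return names of documents sharing at least `min_hits` seed keywords."""
    return {name for name, words in docs.items()
            if len(words & keywords) >= min_hits}

# A sparse seed set lets the cliché post in alongside real analyst profiles.
loose = matches({"data", "analysts"}, min_hits=2)

# Adding more defining keywords narrows the space to on-topic documents.
tight = matches({"data", "analysts", "sql", "etl", "dashboards"}, min_hits=3)

print(loose)  # includes "cliche_post" as well as "analyst_profile"
print(tight)  # only "analyst_profile"
```

Real Deep Learning systems weigh co-occurrence statistically rather than by simple set intersection, but the effect is the same: more defining keywords at the outset means a smaller, more coherent topic space.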
The publicly available example below is a good illustration that any web user can explore. Literary and library researchers use topic modeling, and Signs@40 is a site where they can analyze which topics and stories have appeared in the feminist journal Signs over the past 40 years. Here’s a visualization of its total topic space:
By using a more specific or larger set of search terms, we would be able to narrow the topic space to only deliver articles within one of those bubbles – Feminist Movement, for instance.
For the B2B marketer, it’s important to create a more manageable topic space, so leads can be more precisely identified.
The need for granularity increases as a marketer aims for greater personalization in their targeting and engagement. To employ ABM, these topic spaces must become very complex indeed, delivering personas individualized to the account, and to each target’s role and needs within that account’s buying process.
It’s possible to do ABM targeting for a few accounts when using manual systems and manually-built personas. The success of those efforts is what has sold B2B marketers on the promise of ABM. But doing it at scale using manual, human-managed means, across multiple accounts? It’s practically impossible.
So in the context of lead generation and ABM, any platform that automates this process needs to meet several challenges:
- Collect and analyze huge amounts of data, including unlabeled data;
- Work within very complex topic spaces – the marketer’s search parameters for accounts and targets;
- Intelligently identify quality leads that match those terms with a very high degree of accuracy – delivering coherent results, in other words.
Deep Learning meets these challenges because of its ability to train itself on the problems the marketer gives it: “Here’s the type of person I want to reach, inside these accounts.”
There’s another challenge Deep Learning has to overcome, and we human beings are the ones throwing that particular wrench into the machinery: titles. We love giving ourselves, or others, job titles…even if they’ve got nothing to do with the actual job they’re doing.
That can create multiple headaches as we’re trying to identify targets. How Deep Learning copes with that is what we’ll tackle in our next Deep Learning blogpost!