Good vs. Bad Automated Content? It's In the Context Layer

Posted by Joe Procopio on Aug 24, 2017

The difference between good automated content and bad automated content can be boiled down to the number of scenarios the programmer creates to turn ordinary data into beautiful prose.

Data variability, which is predicated on the number and depth of insights driven by changes in the data, is the key quality driver in Natural Language Generation (NLG). And to get data variability right, you have to create a lot of scenarios.

NLG creators must always be asking: How vast is the universe of outcomes that the engine takes into account when creating a narrative?

In other words: How many ways can you say something?

It's not a coincidence that this is the same approach used when developing NLG's reverse twin, Natural Language Processing (NLP).

Words from Data meets Data from Words

People get touchy when you confuse NLG and NLP, especially those people who do either for a living (which is not a lot of people, but they still get touchy). The truth is that there is a lot of commonality between NLG and NLP. The core concept is the same: Understand the input and translate to the output.

While NLP takes in words and translates those words to data, NLG takes in data and translates that data to words. But creating words isn't the hard part of NLG. In fact, we've reached the point where machines can create complex sentences without too much trouble. In its simplest form, creating words from data is a binary proposition:

1 = Good, 0 = Bad.

Expand that out, and you have simple grading:

A = You did a great job.

B = You did a good job.

C = You did a mediocre job.

And so on.

There are tricky points, of course. For example, if you want the "C" grade to be deemed "average" instead of "mediocre," you then have to replace two words in the sentence to accommodate the switch between "a" and "an," as in "a mediocre job" versus "an average job." Better still, you can write a function that handles the a/an rule for you. Relatively speaking, however, making words is easy.
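A minimal sketch of what that function might look like, in Python, where a simple vowel-letter check stands in for the full a/an rule and the grade labels come from the examples above:

```python
GRADE_LABELS = {"A": "great", "B": "good", "C": "mediocre"}  # swap "mediocre" for "average" freely

def with_article(word):
    """Prefix a word with "a" or "an" using a simple vowel-letter check."""
    return ("an " if word[0].lower() in "aeiou" else "a ") + word

def grade_sentence(grade):
    """Turn a letter grade into a sentence; the article adjusts automatically."""
    return f"You did {with_article(GRADE_LABELS[grade])} job."

print(grade_sentence("C"))  # You did a mediocre job.
# Map "C" to "average" instead and the same call prints "You did an average job."
```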

It's data variability that will start a slippery slope if you don't accommodate it up front.

Simple data variability can be accomplished using If This Then That (IFTTT) branching logic. This is where you have to start making assumptions about the meaning of your data. In the grading scenario, an A grade might be great and an A+ grade might be excellent.
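As a sketch, that branching is nothing more than a chain of conditionals; the meaning attached to each grade is the assumption you have to make:

```python
def describe_grade(grade):
    """If-this-then-that branching: each branch encodes an assumption about the data."""
    if grade == "A+":
        return "excellent"
    if grade == "A":
        return "great"
    if grade == "B":
        return "good"
    return "mediocre"
```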

Simple data variability can be intertwined with word variability to expand overall variability. For example, that same A+ grade could also be "terrific," and that could be interchanged with "excellent" at random intervals. The data is the same; the words are different.
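Word variability on top of that is just a random pick among equivalent words; a quick sketch, with the article handled inline since "terrific" and "excellent" take different articles:

```python
import random

def praise():
    """Same data, different words: swap equivalent adjectives at random."""
    word = random.choice(["excellent", "terrific"])
    article = "an" if word[0] in "aeiou" else "a"
    return f"You did {article} {word} job."
```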

I'm not a huge fan of word variability in NLG. I'd always rather rely on differences in data to create variability in the words.

Personalization is the final variability step, and that involves injecting my own data into the text, when possible:

You did an excellent job, Joe. You scored 97%.
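As a sketch, personalization is just one more substitution into the same template; the name and score here come straight from the example above, and the 95% cutoff is an assumption:

```python
def personalized_recap(name, score):
    """Inject the reader's own data (a name and a score) into the narrative."""
    descriptor = "excellent" if score >= 95 else "great"  # an assumed threshold
    article = "an" if descriptor[0] in "aeiou" else "a"
    return f"You did {article} {descriptor} job, {name}. You scored {score}%."

print(personalized_recap("Joe", 97))  # You did an excellent job, Joe. You scored 97%.
```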

But was I excellent? Or great? Does it matter? Is it semantic? Am I giving new information to the reader by changing a word?

Scratch That, Reverse It

Well, that same conundrum holds true for NLP, only backwards. If you tell your chatbot you want a "great" pair of golf shoes, those shoes should probably be rated in the top 10% of all golf shoes. If you want an "excellent" pair of golf shoes, are we still talking top 10%? Lower? Higher?

These are contextual decisions, and they're business decisions, not technical decisions. And they're very necessary.
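A hedged sketch of what one of those decisions might look like once it's made; the cutoffs below are illustrative, not a recommendation:

```python
# Hypothetical mapping from quality adjectives to a minimum rating percentile.
# Whether "excellent" means the top 5% or the top 10% is a business call, not a technical one.
QUALITY_CUTOFFS = {
    "good": 0.75,
    "great": 0.90,      # top 10% of all golf shoes
    "excellent": 0.95,
}

def min_rating_percentile(adjective):
    """Convert a buyer's quality adjective into data the search can use."""
    return QUALITY_CUTOFFS.get(adjective.lower(), 0.50)
```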

The quality of our golf shoes is the first axis in the context layer: the logic that lies between data and words and makes the words much more useful and the automated content much more powerful.

If you also want an "inexpensive" pair of golf shoes, your NLP should convert those words into data that lies somewhere below the median price before it starts looking at quality rating. That's a huge contextual shift. We now have an X and a Y axis in the context layer, and a few more ways to shop for (or describe) golf shoes.
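A sketch of those two axes working together, assuming each shoe is a plain record with a price and a rating percentile (the field names are invented):

```python
import statistics

def inexpensive_good_shoes(shoes):
    """Two context axes: price below the median (X), rating near the top (Y)."""
    median_price = statistics.median(s["price"] for s in shoes)
    return [
        s for s in shoes
        if s["price"] < median_price        # "inexpensive": below the median price
        and s["rating_percentile"] >= 0.90  # "good": top 10% by rating
    ]
```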

But there are a lot of ways to describe something as mundane as golf shoes. More importantly for a discussion around business cases for NLP, there are a lot of ways to decide to buy golf shoes: Price, rating, brand, size, fit, color, spike or spikeless, even gender.

These are the axes of context, and those axes are the core of similarity between NLG and NLP. That's your universe of outcomes. You use NLP to turn a buyer's words into data to find the right pair of golf shoes, and you use NLG to turn data about golf shoes into words a buyer can use to make a purchase decision.
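And in the NLG direction, the same axes turn a shoe record back into words; a minimal sketch with an invented record:

```python
shoe = {  # an invented record; the fields mirror the axes above
    "brand": "FootJoy", "color": "black", "price": 99.00,
    "rating_percentile": 0.93, "spikeless": True,
}

def describe_shoe(s):
    """Turn data about a golf shoe into words a buyer can use."""
    quality = "great" if s["rating_percentile"] >= 0.90 else "decent"
    style = "spikeless" if s["spikeless"] else "spiked"
    return (f"A {quality} {style} option: the {s['brand']} in {s['color']}, "
            f"priced at ${s['price']:.2f}.")

print(describe_shoe(shoe))  # A great spikeless option: the FootJoy in black, priced at $99.00.
```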

The permutations grow exponentially pretty quickly, but when you define a context layer in either NLG or NLP, those possibilities can be narrowed down almost as quickly.

It's rare that a purchase decision is ever made without price. Other axes, like size, fit, and gender, are likely locked in rather than chosen. The decision gets made around the story, and the story for golf shoes comes down to brand, color, and rating: all of the optional data. NLG and NLP have to figure out which of those matter most to the buyer, and translate the output accordingly.

Complex Data = Complex Narratives

In a sports recap, two bits of data can tell you the outcome of a game or match. One side has a score, the other side has a score, and one number is hopefully higher than the other. Or in golf and racing and a couple other sports, you're looking for the lower number. If the numbers are the same, you have a tie, and new logic is needed.
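Even that trivial comparison takes a branch or two in code: higher-is-better sports versus lower-is-better sports, plus the tie:

```python
def outcome(score_a, score_b, lower_wins=False):
    """Determine the winner from two scores; in golf or racing, the lower number wins."""
    if score_a == score_b:
        return "tie"  # new logic, and a new narrative, needed here
    a_wins = (score_a < score_b) if lower_wins else (score_a > score_b)
    return "A" if a_wins else "B"
```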

Anyone can look at those numbers and determine who won. That's not NLG. But sports recaps aren't about just winners and losers. They're about how the game or match was won or lost, who was responsible, when the outcome approached inevitability, not to mention championships, careers, records, streaks, slumps, and fan expectations.

There is probably a close-to-infinite number of individual outcomes as to how that result came to be. You can't plan for each one. This is where that contextual layer comes into play, by creating a universe of scenarios that both the human and the machine can work with.

Creating Scenarios Based on Context

While you can't plan for each individual scenario, those axes can all be determined algorithmically. A simple example is win probability (WP). As a game or match progresses, each side's WP goes up or down based on their performance to that point against the likelihood that the lead can be overcome. You can calculate WP throughout the timeline, pick a point at which one side's WP closes in on 100% and the direction never changes, and determine with confidence that "The game was essentially over mid-way through the 3rd quarter."
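A sketch of that calculation, assuming the win probabilities have already been computed at each point in the timeline (the 95% threshold is an illustrative choice):

```python
def decided_at(wp_timeline, threshold=0.95):
    """Return the first point where one side's win probability crosses the threshold
    and never drops back below it; that's when the game was essentially over."""
    for i, wp in enumerate(wp_timeline):
        if wp >= threshold and all(later >= threshold for later in wp_timeline[i:]):
            return i  # an index into the timeline, e.g. mid-way through the 3rd quarter
    return None  # the lead was never safe until the end
```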

Or maybe a winner isn't determined until the final second, in which case you have a buzzer-beater. Pretty solid lede, and you can probably run with it. Or can you? What if a long-standing record was broken? What if a championship was on the line? What if the winner was a huge underdog?

Context Matters

Back to NLP and our shopping bot. How many times have you walked into a retail store, found a salesperson, and said, "I'm looking for a pair of FootJoy Hyperflex Men's Size 11 Wide golf shoes in black, preferably at $99 or less"? The answer is never.

You won't need a chatbot or any NLP for that.

Most people are looking for that "inexpensive pair of good golf shoes," and they're asking for price to be the axis that takes priority in converting their words to data, with rating right behind it. If they're looking for an "inexpensive pair of women's golf shoes," gender is now the lead axis, because it's locked, then price, where there's preference, then rating where there's choice.
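A sketch of that prioritization, assuming the NLP step has already parsed the request into a dict of axes (the field names are hypothetical):

```python
def prioritize_axes(parsed_query):
    """Order the context axes: locked attributes first, then stated preferences."""
    locked = [a for a in ("gender", "size", "fit") if a in parsed_query]
    preferences = [a for a in ("price", "rating", "brand", "color") if a in parsed_query]
    return locked + preferences

print(prioritize_axes({"gender": "women", "price": "inexpensive", "rating": "good"}))
# ['gender', 'price', 'rating']
```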

Developing the Contextual Layer

These are contextual business decisions. For something like our Associated Press Quarterly Earnings reports, we spent weeks working with the AP developing that contextual layer, and we revise it when necessary. Mapping it out can be done on paper, in spreadsheets, or in code, as long as it gets done; it's the first thing anyone should do when designing either NLG or NLP.

When you develop axes and a context layer up front, you eliminate the exponential growth of the combinations and permutations of the universe of outcomes, and you create quality content that serves a purpose.