Artificial intelligence can greatly enhance programmatic advertising, but some brand marketers may wonder whether it’s really safe.
The fear is that the algorithms, working at lightning speed and massive scale, will make mistakes too quickly to catch and cause lasting reputational damage.
The truth, though, is that nearly all of the fears are unfounded.
AI algorithms behave according to how they’ve been instructed, and human beings are in complete control of that. Skilled product managers and engineers will understand the needs of the marketer and manage AI to optimize campaigns toward desired outcomes, while also applying appropriate checks to ensure every needed protection is in place.
If the rules are properly set, the machines simply cannot make an unwanted move.
Putting Breakers in Place
Let me give one example from the world of automated trading to show how we can protect against egregious errors by people, then extend the analogy to AI.
Automated trading desks give traders the capability to quickly and easily buy millions or even billions of impressions across multiple platforms, executing deals in fractions of a second.
They could theoretically waste a multimillion-dollar budget very quickly with a few errant keystrokes.
But, in reality, they can’t.
We protect against such a scenario by making it impossible to trade more than a specified amount in a specified period of time. No matter what a trader clicks, or how hard they try, they’ll be stymied in spending beyond a certain amount.
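Such a breaker can be sketched in a few lines. The cap, window, and class name below are hypothetical, but the mechanism is the one described: a hard spending limit over a rolling time window that rejects any trade pushing the total past it.

```python
import time
from collections import deque

class SpendBreaker:
    """Hypothetical circuit breaker: blocks any trade that would push
    spend in a rolling window past a hard cap, no matter what a trader
    clicks or how hard they try."""

    def __init__(self, cap_usd, window_seconds):
        self.cap_usd = cap_usd
        self.window = window_seconds
        self.trades = deque()  # (timestamp, amount) pairs

    def try_spend(self, amount, now=None):
        now = time.time() if now is None else now
        # Drop trades that have aged out of the rolling window.
        while self.trades and now - self.trades[0][0] > self.window:
            self.trades.popleft()
        spent = sum(a for _, a in self.trades)
        if spent + amount > self.cap_usd:
            return False  # trade blocked by the breaker
        self.trades.append((now, amount))
        return True
```

The same pattern generalizes: the limit lives outside the trading interface, so no sequence of keystrokes can route around it.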
So, too, for artificial intelligence. It’s easy to give examples of ways to keep it from making egregious errors:
- Brand safety can be enforced by allowing only whitelisted URLs to receive placements and requiring human approval for any URL not on the list.
- Impressions can be blocked for users whose profiles don’t match the desired target segments, so ads never reach the wrong individuals.
- Dynamic creative optimization can be made safe by appropriately managing the component assets, as we describe further below.
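The first of those rules is a simple allow-or-escalate gate. A minimal sketch, with a hypothetical whitelist and return values, might look like:

```python
# Hypothetical whitelist maintained and reviewed by humans.
APPROVED_DOMAINS = {"example-news.com", "trusted-blog.net"}

def placement_decision(domain, approved=APPROVED_DOMAINS):
    """Serve only on whitelisted domains; anything unfamiliar is
    queued for human approval instead of being served automatically."""
    if domain in approved:
        return "serve"
    return "queue_for_human_review"
```

The AI optimizes freely within the approved set; it never gets the authority to expand that set on its own.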
Protecting Against Errors in DCO
One specific fear we sometimes hear is that AI algorithms executing dynamic creative optimization (DCO) might choose clashing components and assemble jarring or inappropriate images.
But those types of errors are also avoidable. The components that are used for a brand’s messaging and assembled in real-time for maximum effect will have been reviewed and approved by human art directors before they’re ever made accessible to the machines.
Those art directors will work with AI engineers to properly categorize and tag images so that, say, someone dressed barefoot in a bikini will not appear on snow-filled ski slopes, nor will the bundled-up winter skier appear on a palm-treed beach.
There may be many thousands of possible permutations for an ad, but each will follow a strict style guide, created by people and enforced by machines.
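In practice, that style guide can be enforced with a conflict list over the tags the art directors assign. The tags and pairings below are illustrative, not a real taxonomy:

```python
# Hypothetical tag conflicts curated by the art team: any subject/
# background pairing on this list is rejected by the assembler.
CONFLICTS = {("beachwear", "ski_slope"), ("winter_gear", "beach")}

def compatible(subject_tags, background_tags, conflicts=CONFLICTS):
    """Return True only if no subject/background tag pair
    appears on the human-curated conflict list."""
    for s in subject_tags:
        for b in background_tags:
            if (s, b) in conflicts:
                return False
    return True
```

The machine can still explore thousands of permutations, but only within combinations people have declared sensible.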
A Better Option
One unrealistic standard sometimes imposed on AI is that it must be perfect. In reality, the goal is to achieve error rates lower than those of people doing the same job. When that’s achieved and applied on a scale orders of magnitude larger than the people-driven process, brands start to see major benefits.
Today’s programmatic market operates on a massive scale that’s best engaged with the help of artificial intelligence. Skilled application of supervised AI is more efficient and considerably safer than purely manual operation. Far from damaging brand reputation, it serves to enhance it by reducing errors, optimizing placements, and increasing brand safety.