Automated optimization of ad campaigns is a universal feature of adtech platforms. Adtech firms position AI optimization as a key differentiator, giving advertisers a frictionless way to plan and spend their advertising budgets.
Instagram, for instance, believes: “AI recommendations is a key element of our future, and in maximizing user engagement.” META’s sleek explanation of its AI oozes superior optimization capabilities: “We’ve built an end-to-end AI platform called Looper, with easy-to-use APIs for optimization, personalization, and feedback collection. Looper supports the full machine learning lifecycle from model training, deployment, and inference all the way to evaluation and tuning of products. The Looper platform currently hosts 700 AI models and generates 4 million AI outputs per second.”
Questionable name choice aside, the Looper description suggests a “set ‘n’ forget” approach to campaign optimization, as in, “Give us your ad dollars and trust us to make the best use of them.” Aside from adtech’s endemic trust gaps, there is also an implied promise that the more an advertiser spends, the better the AI can optimize results.
As an advertiser, what’s not to love? With minimal effort, the AI promises a brand that its ad budget will be well spent and will drive outcomes.
Right?
Wrong.
The brutal truth is that the process of generating a real outcome is not easily improved by the muscular and rather blunt instrument that is AI.
In the outcome business, AI can be likened to peeling a grape with an axe. You quickly realize how messy and wasteful it really can be.
Once we accept this reality, we can put automation tools in their proper place to help us optimize results, not the amount of ad spend.
Here’s how.
Measurement of AI performance requires masterful discipline.
Usually, the adtech salesperson promises that their AI-powered ad buying platform will optimize ad budgets. To convince you, they often offer a test in which the advertiser spends a small budget (usually under $25,000) for a few weeks to assess the platform’s AI optimization prowess.
Don’t fall for this sales ploy, because as an experiment it is unlikely to test what you really want to test. First, any AI platform needs data, time, and meaningful outcomes before it can measure anything.
It is not useful to measure clicks, CPM, or other fluffy metrics when outcomes can take longer than a typical test lasts, especially in B2B marketing.
Second, and this is hard to hear, too many ad platforms goose test results to make sure they look good enough to secure the contract. With the murky ad supply chain, it is hard for advertisers to know whether the buy was “enhanced” artificially.
Finally, even if everything is kosher, it is difficult to tease out AI-driven performance from the performance of the platform itself. This issue is compounded by the fact that you can’t really compare one platform’s AI to another’s for similar reasons.
We all know the answer: rigorous and disciplined measurement. And yes, we all know this is easier said than done. To crack the measurement nut, don’t be tempted to take shortcuts despite the pressure adtech platforms may apply to get you to commit. Measuring outcomes is time-based, and nothing will change that. If you can afford it, commit to a few platform tests, but run them over longer periods of time. This way there is enough data to have a good shot at understanding what happened.
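To make the time requirement concrete, here is a minimal sketch with made-up numbers of how long a properly powered test can take. It uses a standard two-proportion sample-size approximation; the baseline conversion rate, lift, and traffic figures are assumptions for illustration, not benchmarks from any platform.

```python
import math

def required_sample_per_arm(base_rate, lift):
    """Approximate visitors needed per arm (control vs. AI-optimized) to detect
    a relative lift in conversion rate with a two-proportion z-test,
    at a two-sided alpha of 0.05 and 80% power."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    z_alpha = 1.96   # two-sided test at alpha = 0.05
    z_beta = 0.84    # power = 0.80
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: 0.5% baseline conversion rate, hoping to detect a 20% relative lift.
n = required_sample_per_arm(0.005, 0.20)
print(f"~{n:,} visitors per arm")   # roughly 86,000 visitors per arm
# At a few thousand paid clicks a day split across two arms, that is
# many weeks of testing, far longer than a quick low-budget trial.
```

Even a modest lift on a low baseline conversion rate demands tens of thousands of visitors per arm, which is why a few weeks of sub-$25,000 spend rarely proves anything about the AI.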
Choose carefully what is being optimized via technology.
Technically, optimization works on one objective at a time when it comes to marketing. AI can optimize cost per click, CPM, or leads, but not all of them at once.
This often-overlooked reality means marketers are forced to make Faustian choices that are not well aligned with the conversion marketing process. The key to useful optimization is to keep it simple and focus on straightforward, high-volume metrics, like traffic. Trying to optimize the more nuanced stages of the funnel is beyond what AI handles well and is likely to lead to disappointment. The toy calculation below shows why the choice of objective matters.
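A toy calculation with invented numbers illustrates the point: the placement with the cheapest clicks is rarely the placement with the cheapest leads, so an algorithm aimed at one objective will actively work against the other.

```python
# Invented placements and rates, purely for illustration.
placements = {
    #  name              (CPC in $, click-to-lead rate)
    "cheap_inventory": (0.30, 0.004),
    "feed":            (0.90, 0.020),
    "search":          (2.10, 0.055),
}

for name, (cpc, lead_rate) in placements.items():
    cost_per_lead = cpc / lead_rate
    print(f"{name:>15}: CPC ${cpc:.2f} -> cost per lead ${cost_per_lead:.2f}")

# cheap_inventory: CPC $0.30 -> cost per lead $75.00
#            feed: CPC $0.90 -> cost per lead $45.00
#          search: CPC $2.10 -> cost per lead $38.18
# Told to minimize CPC, the algorithm piles budget into the $0.30 clicks;
# told to minimize cost per lead, it does the opposite.
```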
Automated optimization cannot correct for flaws in campaigns.
Despite automated ad creation capabilities like META’s Advantage platform’s ability to “automate up to 150 creative combinations at once” (different from the Looper optimization platform, one would surmise), great campaign construction still comes down to a tight message, a clear call to action, and a compelling offer. No amount of AI or optimization will compensate for sloppy campaign construction.
Assume all default settings in the campaign are guilty until proven innocent.
There is ample documentation of the fact that the defaults in adtech platforms are designed to optimize the platform’s revenue first and the advertiser’s KPIs second. This is how platforms make it easy for advertisers to spend more money than they should or need to.
Here are just a few examples.
Dr. Fou, a well-known fraud detective and CEO of FouAnalytics, spent years analyzing the gap between human and fake traffic. A recent post of his makes this important point: “When you run Facebook campaigns, be sure to turn OFF Facebook Audience Network, because that is where most of the ad fraud is.” In Google, there are a few places where the default settings are suboptimal for advertisers, in areas like geo targeting and campaign flighting. (Here is a more detailed explanation of this point: https://trustwebtimes.com/top-ways-digital-ad-platforms-hurt-small-businesses-more-than-they-help/.) Human expertise is needed to work the platform correctly so that it performs best for the advertiser and not the platform. A simple audit habit, sketched below, helps catch these defaults before they drain budget.
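The sketch below is purely illustrative: the setting names are hypothetical, not any platform’s real API fields, and the recommended values are examples of the kind of non-default choices discussed above. The point is the habit of checking every default against your own interests before spending.

```python
# Hypothetical audit of exported campaign settings against non-default
# choices an advertiser might prefer (field names are illustrative only).
RECOMMENDED = {
    "audience_network_enabled": False,        # off: heavy fraud exposure, per Dr. Fou
    "search_partners_enabled":  False,        # off: opaque placements
    "location_target_mode":     "presence",   # people physically in the geo
    "ad_schedule":              "business_hours",  # match when leads get worked
}

def audit(campaign_settings: dict) -> list[str]:
    """Return the settings still sitting on platform-friendly defaults."""
    return [
        f"{key}: found {campaign_settings.get(key)!r}, want {wanted!r}"
        for key, wanted in RECOMMENDED.items()
        if campaign_settings.get(key) != wanted
    ]

exported = {"audience_network_enabled": True, "location_target_mode": "interest"}
for issue in audit(exported):
    print("REVIEW:", issue)
```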
eCommerce is least impacted by AI’s optimization.
There is nothing harder than generating online sales. It is the most nuanced function of marketing and thus least responsive to the brute muscle that is AI.
AI is helpful, but don’t be fooled into thinking it can really optimize the actual conversion process well or efficiently without skilled human management. To shine a light on this point, let’s look at Google’s Search ad platform, famed for a conversion capability powered by its optimization features. When advertisers run Search campaigns (a.k.a. PPC), Google helpfully (snark alert) gives campaigns an optimization score from 1 to 100 (with 100 being fully optimized), along with suggestions on how to get to 100. The biggest ingredient of the AI optimization recommendation is to use the keywords the AI suggests.

Naturally, many marketers follow Google’s suggestions, but doing so actually makes things more expensive for them. The key point to understand is that the AI is recommending the same set of keywords to many advertisers in the same category. The more advertisers bidding on the same keywords, the higher the cost for all of them and the more money Google makes. To add insult to injury, Google’s AI encourages spend on more keywords at higher cost, which will generate somewhat better results, but not results commensurate with the increased spend. The math is straightforward, and rather discouraging for advertisers, if you know how to analyze the outcomes.
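That math can be sketched with invented numbers. The figures below are assumptions for illustration only; the pattern to notice is that as more advertisers pile onto the same suggested keywords, spend climbs far faster than conversions do.

```python
# Made-up scenarios: more bidders on the same suggested keywords push the
# auction price up, and extra spend buys proportionally fewer conversions.
scenarios = [
    # (bidders, avg_cpc, monthly_clicks, conversion_rate)
    ( 5, 1.80, 2_000, 0.030),
    (15, 3.40, 2_600, 0.028),   # followed the suggested keyword list
    (30, 5.10, 3_000, 0.026),   # everyone in the category followed it
]

for bidders, cpc, clicks, cvr in scenarios:
    spend = cpc * clicks
    conversions = clicks * cvr
    print(f"{bidders:>2} bidders: spend ${spend:>9,.0f}, "
          f"{conversions:>5.1f} conversions, CPA ${spend / conversions:,.0f}")

#  5 bidders: spend $    3,600,  60.0 conversions, CPA $60
# 15 bidders: spend $    8,840,  72.8 conversions, CPA $121
# 30 bidders: spend $   15,300,  78.0 conversions, CPA $196
```

In this toy example, spend roughly quadruples while conversions grow by about a third, which is exactly the kind of gap disciplined measurement is meant to expose.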
With every adtech platform touting its AI optimization as the best in the business, it’s easy to get inured to all the AI talk. That’s dangerous. It’s wishful thinking to believe AI will optimize your results; remember that the AI’s job is to keep you spending on the platform, and it seduces advertisers into believing that higher spend will yield good, optimized outcomes even when results seem underwhelming. The lesson for making AI work for advertisers: never send in AI or tech to do what a human needs to do, and never send in a human to do what tech can do better. In adtech, knowing the difference is where skill and expertise come in.