Wed. Dec 8th, 2021

In a world of rampant data sharing, nefarious use of personal data, and media manipulation, it is clear the lucrative ad tech market may not be ready for complete transformation. This post follows the introductory article entitled Real-Time Bidding: The Ad Industry has Crossed a Very Serious Line. I had a chance to sit down with Dr. Augustine Fou, my collaborator on that article and a seasoned marketer who has “witnessed the entire arc of the evolution of digital marketing.” Dr. Fou currently helps marketers audit their digital campaigns for ad fraud and optimize campaigns based on accurate analytics.
Advertising has evolved tremendously in the last 20 years. The market for digital ads, and the scale at which impressions are bought and sold across the ad exchanges, has made the industry more efficient and more lucrative. Has advertising been truly transformed?
The largest advertisers are all proud to be “digitally transformed.” What they mean by that is they spend most of their ad budgets in digital now, and most of that is spent through programmatic channels. This is because for the last 10 years, since the rise of programmatic ad exchanges, advertisers have been chasing 1) large “reach” in the form of huge quantities of ads to buy, 2) “cost efficiency” in terms of low CPM prices, and 3) high “engagement” seen as higher click rates. What they didn’t account for is the fact that very large quantities, super low prices, and high click-throughs are driven mainly by bot activity, not by humans.
So the rise of programmatic and ad exchanges has also enabled a world of endless ad inventory, fraudulently generated. How has this remained hidden from the largest marketers in the world, often assumed to be the most savvy too? 
As marketers allocate more and more of their budgets to programmatic channels, more and more of the buying and selling has become automated. The decisions on what to buy, how much to buy, and what prices to pay are controlled by AI (artificial intelligence) and ML (machine learning). But artificial intelligence is only as intelligent as the coders who wrote the initial rules. Much of “AI” today is just a bunch of “if-then” statements, some of which are not even in tune with the realities of the real world. Nowhere is the idea that “AI is smoke and mirrors” more true than in ad tech. Black-box algorithms make decisions on tens of trillions of digital ads per week and are responsible for spending the largest portion of the $400 billion spent on digital advertising worldwide each year.
But none of the advertisers know how it works. None of the media agencies spending their money know how it works. And often the ad tech vendors selling the AI-driven services and tech platforms don’t know how it works either, and are certainly unable to explain it.
So you are saying marketers are being tricked into spending more money. Doesn’t the conventional wisdom about user clicks and optimization still prevail?
Fraudsters’ bots trick the algorithms into giving them more money. It is as simple as apple pie. You know how your campaigns are automatically optimized? Do you realize HOW they are optimized? Put simply, they are optimized based on available signals — in digital campaigns, that means clicks and click-through rates. Advertisers and their media agencies have assumed that more clicks or higher click rates mean better — i.e. more engagement. That would be correct if only humans saw and clicked your ads. Once you take bot activity into account, you will realize that very high click-through rates come from bots clicking your ads, not from humans (because humans don’t like ads that much).
All the fraudsters have to do is ensure that their bots have slightly higher click-through rates (CTRs) than legitimate, mainstream publishers’ sites. The algorithms that spend your money will dutifully allocate more budget and bid higher and more often on ads that run on fake sites. They are being tricked into optimizing your ad budget toward fake and fraudulent sites. If you let AI buy media for you and let it optimize campaigns for performance, more of your budget is going to fake and fraudulent sites than to real publishers.
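The optimization trap Dr. Fou describes can be sketched in a few lines. This is a toy illustration, not any real bidder’s code; the site names and CTR figures are invented for the example:

```python
# Toy sketch of a naive optimizer that reallocates budget in proportion
# to observed click-through rate. All names and numbers are hypothetical.
def allocate_budget(total_budget, sites):
    """Split the budget proportionally to each site's observed CTR."""
    total_ctr = sum(s["ctr"] for s in sites)
    return {s["name"]: total_budget * s["ctr"] / total_ctr for s in sites}

sites = [
    {"name": "real-publisher.example", "ctr": 0.001},  # humans rarely click
    {"name": "fake-longtail.example",  "ctr": 0.004},  # bots tuned to click a bit more
]

spend = allocate_budget(100_000, sites)
# The fake site, with its bot-inflated CTR, captures 80% of the budget.
```

The optimizer has no notion of who is clicking; a slightly higher bot-driven CTR is enough to pull most of the spend toward the fake site.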
What if I have fraud detection in place? Is there a way to scale this solution to determine legitimate traffic?
Advertisers will turn to more AI to help them solve the bot problem; they pay for fraud verification services to detect bots and IVT (“invalid traffic”). It’s AI versus AI — that’s cool, but what if those services are only looking for invalid traffic (fake users) and neglecting to even look for other forms of fraud that are even larger? For example, a site that pays for bot traffic is likely doing other things to inflate its own ad revenue — ad stacking, pixel stuffing, page and ad-slot refreshing, pop-unders, forced redirects, and other, more nefarious things. If fraud detection is not looking for these things, it will severely underreport the fraud, and you will get a far smaller refund than you deserve.
Further, what if the bots are actively tricking the detection to avoid getting caught, and to get marked as “valid?” You can safely assume that most bots are doing this, so they can keep making money for their botmasters. There are a large variety of techniques they use. For example, disguising themselves with residential proxies makes them appear to be coming from human households, so they blend in with real humans. Bots are smart enough to block detection tags so they don’t get scanned. So lack of an “invalid” signal does not necessarily mean the user is “valid”; it could simply mean the user could not be measured because the detection was blocked. 
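That last distinction — unmeasured is not the same as valid — can be made concrete with a small sketch. The field names here are invented for illustration; real verification vendors use their own signals:

```python
# Hypothetical session records: if the detection tag never fired (e.g. a
# bot blocked it), the session is "unmeasured" — we cannot call it valid.
def classify(session):
    if not session.get("tag_fired"):
        return "unmeasured"  # detection was blocked; verdict unknown
    return "invalid" if session.get("flagged") else "valid"

sessions = [
    {"tag_fired": True,  "flagged": False},  # measured, looks human
    {"tag_fired": True,  "flagged": True},   # caught bot
    {"tag_fired": False, "flagged": False},  # tag blocked: unknown, possibly a bot
]
results = [classify(s) for s in sessions]
```

A report that counts only the explicitly “invalid” sessions as fraud would show one bot in three, while the blocked, unmeasured session could just as easily be a second bot.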
And finally, most bots are now faking human interaction events, like mouse movement, page scrolling, touch events, and clicks, to defeat detection. So the algorithms and AI used by fraud detection may not see the bots and thus give you an “all clear” to keep spending, when in reality the fake sites your ads run on are overrun with bots. Think about it: long-tail sites have few human visitors, so they buy virtually all of their traffic — “bought traffic = bot traffic.”
Many advertisers believe that a lack of targeting parameters and tracking is bad; or, conversely, that they need tracking and targeting for their digital marketing to work. Logic would say the more I know about my prospects, the higher the likelihood of success?
This is fine to an extent. In other words, some targeting is good — for example, the avoidance of wasted dollars by not showing a beard trimmer ad to women or children. But micro targeting and over targeting are wasteful too because the targeting parameters are not accurate at all. But the AI algorithms that spend your money don’t know that. 
The algorithms are also programmed to bid less frequently, or submit lower CPM bids, for iOS devices because of the lack of third-party cookies used in ad targeting. This results in a system-wide undervaluing of iOS devices (iPhones and iPads) and Macs, even though large numbers of real humans use iPhones, iPads, and Macs. By some estimates, iOS CPMs are 50 – 70% lower than Android and Windows. But if advertisers want to get ads in front of humans, they can simply show ads on iOS devices with no further targeting. This, of course, works best with large-scale branding and awareness campaigns. So in this case, common sense should outweigh the AI that is programmed not to bid, or to bid less, for iOS devices.
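A toy version of that bidding rule makes the effect easy to see. The discount factor and CPM figures below are invented; they are only meant to mirror the 50 – 70% gap described above:

```python
# Hypothetical bidder logic: with no third-party cookie, the algorithm
# discounts its bid, regardless of whether a real human is on the device.
IOS_DISCOUNT = 0.4  # invented: keep 40% of the base bid, a 60% discount

def bid_cpm(base_cpm, bid_request):
    if not bid_request.get("third_party_cookie"):
        return base_cpm * IOS_DISCOUNT  # "untargetable" user, bid less
    return base_cpm

ios_bid = bid_cpm(5.0, {"os": "iOS", "third_party_cookie": False})
android_bid = bid_cpm(5.0, {"os": "Android", "third_party_cookie": True})
```

The human behind the iOS device is identical in value to the one on Android; only the tracking signal differs, yet the rule systematically underbids for the iOS user.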
So, not only is the AI that buys media for you costing you money, it may also be harming your brand reputation too? Please explain.
Yes, we saw above how the AI can be tricked into allocating more ad dollars to fake sites simply because the signal it has is click-through rate. Fake sites use bot traffic that can tune their click rates higher and cause your AI to give them more money. Using fraud detection AI may not be enough if the bad guys’ bots know exactly how to trick it and avoid getting caught. So you end up wasting money twice: buying ads on fake sites with fake traffic, and also paying for fraud detection that is not catching it.
Finally, the AI is only as good as the information it is given. For example, if hate speech and disinformation sites pretend to be mainstream sites (by passing a different domain than their own in the bid request), they easily get past your domain block list. Even if you have blocked sites like Breitbart, they can lie and get around that. If your AI does not see a blocked domain in the bid request, it lets the bid through. This is why, despite having block lists, your ads still end up on hate, disinfo, fake-news, piracy, porn, and worse sites. Black-box AI might be trying its hardest to help you spend your ad budgets, but if it is easily tricked into letting your ads go to bad places, your brand reputation is at risk too.
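The block-list bypass comes down to one line of logic: the buyer’s system only sees the domain the seller declares. A minimal sketch, with invented field names and an example spoofed domain:

```python
# Toy block-list check: the filter sees only the *declared* domain in the
# bid request, never where the ad will actually render.
BLOCKLIST = {"breitbart.com"}

def passes_blocklist(bid_request):
    return bid_request["declared_domain"] not in BLOCKLIST

honest = {"declared_domain": "breitbart.com",
          "actual_domain": "breitbart.com"}
spoofed = {"declared_domain": "nice-news.example",  # lie in the bid request
           "actual_domain": "breitbart.com"}        # where the ad really runs

honest_passes = passes_blocklist(honest)    # blocked, as intended
spoofed_passes = passes_blocklist(spoofed)  # sails through the filter
```

Nothing in the bid request forces the declared domain to match reality, which is why industry measures like ads.txt and sellers.json exist to let buyers verify who is authorized to sell a given site’s inventory.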
So, in your experience, what do marketers need to do to equip themselves? What do they need to know?
Are you letting AI buy media for you? If you are buying ads through programmatic channels, then yes, algorithms and AI are making the decisions on how to spend your money. If you don’t know how they work (they are “black box”), do you know how much money they are costing you and what risk they pose to your brand? If you are courageous enough to run some experiments to test the above, here are a few ideas:

There is a growing market for digital advertising. Automation is inevitable. Where do you think the industry is going?  What, if anything, needs to be done to protect business from ad fraud? 
Automation is good. AI is good. But there should always be human oversight and judgement involved. As we saw in the examples above, the algorithms are only as good as the data they are given as input and only as robust as the “if-then” statements of the coders who created them. Without humans to “gut check” whether any of it makes common sense in the real world, the algorithms can run amok. And you certainly don’t want that to happen when they are spending your money, because of all the bad stuff that can happen — from your ads being wasted when they are shown to bots instead of humans, to your budgets funding hate speech, piracy, disinformation sites, or worse.

Dr. Augustine Fou

Dr. Augustine Fou is a digital marketer of 26 years. He is an independent ad fraud researcher and currently helps advertisers audit their digital campaigns for fraud. Dr. Fou was formerly the Group Chief Digital Officer of Omnicom’s Healthcare Consultancy Group, and Digital Strategy Lead at McCann Worldgroup’s MRM Worldwide. Dr. Fou also taught digital marketing at Rutgers University and NYU.

Hessie Jones is a Strategist and Venture Partner advocating for human-centred AI, education and the ethical distribution of AI in this era of transformation. As a Venture Partner at MATR Ventures, she seeks to power the underestimated founder through capital and connectivity. As COO of Beacon Trust Network, she aims to advance the quality of human-computer experiences through values-based technology education and innovation. As a seasoned digital strategist, author, tech geek and data junkie, she has spent the last 18 years on the internet at Yahoo!, Aegis Media, CIBC, and Citi, as well as tech startups including Cerebri, OverlayTV and Jugnoo. Hessie saw things change rapidly when search and social started to change the game for advertising and decided to figure out the way new market dynamics would change corporate environments forever: in process, in culture and in mindset. She launched her own business, ArCompany in social intelligence, AI readiness and research. Through the weekly think tank discussions her team curated, she surfaced the generational divide in this changing technology landscape across a multitude of topics. Hessie also co-founded MyData Canada, and is a board member with Technology for Good Canada. She is also part of the Women in AI Ethics Collective, a contributor/editor to Towards Data Science and GritDaily.
