Solving Google’s, Facebook’s, and Twitter’s Russian (and other) Ad Problems


I’m really not in a good mood right now, and I didn’t need the phone call. But someone I know who monitors right-wing loonies called yesterday to tell me about plotting going on among those morons. The highlight was their apparent discussions of ways to falsely claim that the secretive Russian ad buys on major USA social media and search firms — so much in the news right now and on the “mind” of Congress — were actually somehow orchestrated by Russian expatriate engineers and Russian-born executives now in this country. “Remember, Google co-founder Sergey Brin was born in Russia — there’s your proof!” was one point my caller reported seeing highlighted as fodder for fabricating lying “false flag” conspiracy tales.

I thanked him, hung up, and went to lie down with a throbbing headache.

The realities of this situation — not just ad buys on these platforms that were surreptitiously financed by Putin’s minions, but also the abuse of “microtargeting” ad systems by USA-based operations — are complicated enough without layering on globs of completely fabricated nonsense.

Well before the rise of online social media or search engines, direct mail advertisers and phone-based telemarketers had already become adept at using vast troves of data to target individuals ever more precisely, selling merchandise, services, and ideas (“Vote for Proposition G!” — “Elect Harold Hill!”). There have long been firms that specialize in providing these kinds of targeted lists and associated data.

Internet-based systems supercharged these concepts with a massive dose of steroids.

Because user interactions on major search and social media sites are tracked at such a deep level of granularity, the opportunities for precision ad targeting become vastly greater, and potentially much more opaque to outside observers.

Historically, I believe it’s fair to assert that the increasingly complex ad systems on these sites were initially built with selling “stuff” in mind — where stuff was usually physical objects or defined services.

Over time, the fraud prevention and other protections that evolved in these systems were quite reasonably oriented toward those kinds of traditional user “conversions” — e.g., did the user click the ad and ultimately buy the product or service?

Even as political ads began to appear on these systems, they tended to be (but certainly were not always) comparatively transparent in terms of who was paying for those ads, and the ads themselves were often aimed at explicit campaign fundraising or pushing specific candidates and votes.

The game changer came when political campaigns (and yes, the Russian government) realized that these same search and social media ad systems could be leveraged not only to sell services or products, or even specific votes, but to literally disseminate ideas — where no actual conversion — no actual purchase per se — was involved at all. Merely getting such ads in front of as many carefully targeted users as possible is the usual goal, though blasting an ad out willy-nilly to as many users as possible is another (generally less effective) paradigm.

And this is where we intersect the morass of fake news, fake ad buyers, fake stories, and the rest of this mess. The situation is made all the worse when you gain the technical ability to send completely different — even contradictory — content to differently targeted users, each of whom sees only what is “meant” for them. While traditional telemarketing and direct mail had already largely mastered this process within their own spheres of operations, it can be vastly more effective in modern online environments.

When simply displaying information is your immediate goal, when you’re willing to present content that’s misleading or outright lies, and when you’re willing to misrepresent your sources or even who you are, a perfect storm of evil intent is created.

To be clear, the social media and search ad platforms that have been gamed by evil are not themselves evil. Essentially, they’ve been hijacked by the Russians and by some domestic political players (studies suggest that both the right and left have engaged in this reprehensible behavior, but the right to a much greater and more effective extent).

That these firms were slow to recognize the scope of these problems, and were initially rather naive in their understanding of these kinds of attacks, seems indisputable. 

But it’s ludicrous to suggest that these firms were knowing partners with the evil forces behind the onslaught of lying advertising that appeared via their platforms.

So where do we go from here?

As with various other difficult problems on the Web, my sense is that a combination of algorithms and human beings must be the way forward.

At the scale that these firms operate, ever-evolving algorithmic, machine-learning systems will always be required to do the heavy lifting.

But humans need a role as well, to act as the final arbiters in complex situations, and to provide “sanity checking” where required. (I discussed some aspects of this in: “Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem” – https://lauren.vortex.com/2017/10/05/vegas-horror-fixing-youtube-fake-news).
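
To make that division of labor concrete, here’s a minimal sketch of how such a hybrid pipeline might route ads: the model auto-handles the clear cases at scale, while the ambiguous middle goes to human reviewers. All of the names, thresholds, and the toy scoring function below are my own illustrative assumptions, not any platform’s actual system.

```python
# Illustrative sketch only: routing machine-scored ads between automatic
# decisions and human review. Names, thresholds, and scoring logic are
# hypothetical, not any platform's actual system.

from dataclasses import dataclass

AUTO_APPROVE_BELOW = 0.20  # assumed: very low risk -> approve automatically
AUTO_REJECT_ABOVE = 0.95   # assumed: near-certain violation -> reject automatically

@dataclass
class Ad:
    ad_id: str
    advertiser: str
    text: str

def risk_score(ad: Ad) -> float:
    """Stand-in for an ML classifier estimating policy-violation risk.

    A real system would use trained models over ad text, images,
    targeting criteria, and advertiser history; this toy version just
    counts suspicious phrases.
    """
    suspicious = ("miracle cure", "secret plot", "they don't want you to know")
    hits = sum(phrase in ad.text.lower() for phrase in suspicious)
    return min(1.0, 0.3 * hits)

def route(ad: Ad) -> str:
    """Auto-handle the clear cases; send the ambiguous middle to humans."""
    score = risk_score(ad)
    if score < AUTO_APPROVE_BELOW:
        return "approved"
    if score > AUTO_REJECT_ABOVE:
        return "rejected"
    return "human_review"  # humans as the final arbiters in complex cases

if __name__ == "__main__":
    print(route(Ad("a1", "Acme Widgets", "Buy our sturdy widgets today!")))
    print(route(Ad("a2", "Unknown PAC", "The secret plot they don't want you to know about")))
```

The key design point is the three-way split: the thresholds determine how much lands in the human review queue, and adequately staffing that queue is exactly the investment these firms must be willing to make.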

Specifically in the context of ads, an obvious necessary step would be to bring Internet political advertising (this will need to be carefully defined) into conformance with much the same kind of formal transparency rules under which various other forms of media already operate. This does not guarantee accurate self-identification by advertisers, but would be a significant step toward accountability.

But search and social media firms will need to go further. Essentially all ads on their platforms should carry maximally practical transparency regarding who is paying to display them, so that users who see these ads (and third parties trying to evaluate them) can better judge their origins and the veracity of their content.
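
As one illustration of what machine-readable transparency might look like, here’s a hypothetical sketch of a disclosure record that could accompany every served ad. The field names and structure are my own invention, not any platform’s actual schema.

```python
# Hypothetical disclosure record published alongside each served ad, so
# that viewers and third-party auditors can see who paid and how the ad
# was targeted. All field names here are illustrative inventions.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class AdDisclosure:
    ad_id: str
    paying_entity: str          # legal name of whoever funded the ad
    beneficiary: str            # who or what the ad ultimately promotes
    payment_country: str        # where the payment originated
    targeting_criteria: list[str] = field(default_factory=list)
    is_political: bool = False  # "political" will need careful formal definition

disclosure = AdDisclosure(
    ad_id="x123",
    paying_entity="Pottsylvania Freedom Fighters LLC",
    beneficiary="Proposition G",
    payment_country="US",
    targeting_criteria=["age:35-54", "region:Fresno", "interest:local politics"],
    is_political=True,
)

# Published in machine-readable form for users and auditors to inspect:
print(json.dumps(asdict(disclosure), indent=2))
```

Publishing such records in machine-readable form, rather than as a one-line caption, is what would let third parties audit ad buys at scale and cross-reference payers across platforms.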

This is particularly crucial for “idea” advertising — as I discussed above — the ads that aren’t trying to “sell” a product or service, but that are purchased to try to spread ideas — potentially including utterly false ones. This is the category where the vast majority of fake news, false propaganda, and outright lies have appeared in this context — one that Russian government trolls apparently learned how to play like a concert violin.

This means more than simply saying “Ad paid for by Pottsylvania Freedom Fighters LLC.” It means providing tools — and firms like Google, Facebook, and Twitter should be working together on at least this aspect — to make it more practical to track down fake entities: for example, to learn that the fictional group in Fresno is actually run out of the Kremlin, or is really a front for some shady racist, alt-right group.
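
One hypothetical form that such shared tooling could take is a cooperatively maintained registry mapping front organizations to their verified registrants. The sketch below assumes a cross-firm lookup service that does not exist today; all entries are fictional.

```python
# Hypothetical cross-platform advertiser registry. Assumes a shared,
# cooperatively maintained database mapping front organizations to
# verified registrants; no such service currently exists, and all
# entries below are fictional.

REGISTRY = {
    # front name -> (verified registrant, registration country)
    "pottsylvania freedom fighters llc": ("Kremlin-affiliated troll operation", "RU"),
    "acme widgets": ("Acme Widgets Inc.", "US"),
}

def trace_advertiser(front_name: str) -> str:
    """Resolve an ostensible advertiser name to its verified registrant."""
    entry = REGISTRY.get(front_name.lower())
    if entry is None:
        # Unverified advertisers get flagged rather than silently trusted.
        return f"{front_name}: no verified registration on file (flag for review)"
    registrant, country = entry
    return f"{front_name}: registered to {registrant} ({country})"

print(trace_advertiser("Pottsylvania Freedom Fighters LLC"))
print(trace_advertiser("Totally Authentic Grassroots Group"))
```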

On a parallel track, many of these ads should be blocked before they ever reach the eyeballs of platform users, and that’s where the mix of algorithms and human brains really comes into play. Facebook has recently announced that it will be manually reviewing submitted targeted ads that involve specific highly controversial topics. This seems like a good first step in theory, and it will be interesting to see how well it works in practice.

Major firms’ online ad platforms will undoubtedly need significant, in some cases sweeping, changes in order to flush out — and keep out — the evil contamination of our political process that has occurred.

But as the saying goes, forewarned is forearmed. We now know the nature of the disease. The path forward toward ad platforms resistant to such malevolent manipulations — and these platforms are crucial to the availability of services on which we all depend — is becoming clearer every day.

–Lauren–