On Oct. 30, Twitter announced it would prohibit political advertising on its platform. On Friday, it tried its best to explain exactly what that would mean. Under the new policy, ads from government officials, candidates, parties, and PACs would be prohibited, as would any ad that “references a candidate, political party, elected or appointed government official, election, referendum, ballot measure, legislation, regulation, directive, or judicial outcome.” Further, Twitter will limit (but not entirely prohibit) microtargeting of “issue ads” that “drive political, judicial, legislative, or regulatory outcomes,” while permitting ads that align with an advertiser’s “publicly stated values.”
Twitter’s policy is a welcome improvement—if only as a rebuke to Facebook, which not only accepts political ads of all kinds but refuses to fact-check them. But Twitter is positioning itself to have to make distinctions it is ill-equipped to make. How will Twitter determine whether an issue ad does or does not “drive outcomes” before it is allowed to circulate? Deciphering whether an ad is or is not in line with the advertiser’s values is all but impossible, since advertisers can misrepresent their values as needed.
There is so much more “political” advertising than just what comes from official campaigns or speaks explicitly about legislative issues. Nearly anyone can afford to advertise, from deep-pocketed political action groups to grassroots activists to individuals. These ads need not name a candidate or party or legislative issue to have political impact. Anyone willing to pay to say “Black lives matter? Don’t all lives matter!?” is engaged in political advertising. For just a few dollars, they can enjoy the immense reach of social media and its precision tools for microtargeting users by demographics, location, preferences, or political persuasion.
Political advertising, not just on social media but across the internet, has become a searing problem for American democracy. And there is so much that could be done. In the United States, for starters, Congress could pass the Honest Ads Act, which would impose the same disclosure requirements on online political ads as television and print ads, and obligate platforms to archive political ads with data on who paid for them and how they targeted users. Similar calls have been made by the European Commission. Some have suggested banning all forms of microtargeting of political ads, requiring all political ads to be fact-checked, and limiting the reach of political ads to the districts in which they are relevant. Many have called for Facebook to follow Twitter’s lead (and Pinterest’s, LinkedIn’s, and Twitch’s), to refuse all political advertising, at least in the time leading up to an election.
But any restriction of political advertising will stumble on the same fundamental question: What counts as “political”? The solution, I think, requires a much grander intervention, one that has been necessary for more than two decades.
It’s time to fix online advertising. All of it.
What if every ad—political and commercial—revealed who purchased it, when, at what cost, how many views it had accumulated, and how it was targeted? A tiny icon on every ad would reveal a floating dashboard, like a nutritional label for every ad, giving users the information that they need most, and that they are least able to find out for themselves.
What if social media platforms provided an intuitive graph of how that ad traveled through the network—a map that, without identifying individuals who saw or forwarded it, demonstrated how often it had been seen, how quickly it had moved?
What if platforms were required to acknowledge if they provided an advertiser any kind of consulting services for their marketing efforts or privileged access to user data?
What if every ad included a link to a profile every advertiser was required to have on that platform, as a condition of advertising on that platform? That profile page would archive every ad that advertiser had ever posted to that platform, along with all the data described above; it would be searchable by date, dollars spent, or targeting criteria.
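The disclosure record sketched in the paragraphs above—purchaser, date, cost, views, targeting criteria, and a link to the advertiser’s archive—can be made concrete with a small illustration. The structure and field names below are purely hypothetical, invented for this sketch; no platform exposes such a record today.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AdDisclosure:
    """One hypothetical 'nutritional label' record attached to an ad."""
    advertiser: str          # verified name of whoever paid for the ad
    advertiser_profile: str  # link to the advertiser's required archive page
    purchased_on: str        # date the ad was bought (ISO 8601)
    cost_usd: float          # amount paid for placement
    views: int               # impressions accumulated so far
    targeting: List[str] = field(default_factory=list)  # targeting criteria used

    def label(self) -> str:
        """Render the summary a user would see behind the ad's icon."""
        criteria = ", ".join(self.targeting) or "none"
        return (
            f"Paid for by {self.advertiser} on {self.purchased_on} "
            f"(${self.cost_usd:,.2f}, {self.views:,} views); "
            f"targeted by: {criteria}"
        )

# Example record for a fictional advertiser.
ad = AdDisclosure(
    advertiser="Example PAC",
    advertiser_profile="https://platform.example/ads/example-pac",
    purchased_on="2019-11-01",
    cost_usd=250.0,
    views=12000,
    targeting=["age 30-45", "zip 94110"],
)
print(ad.label())
```

The point of the sketch is how little data is actually required: a handful of fields the platform already possesses, attached to every ad and archived on the advertiser’s profile page.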
Every ad, political or commercial, should reveal its own provenance. Every advertiser should stand by their ads and their efforts to circulate them, and be held accountable for them. Platforms, which depend on user data for the entirety of their economic value, should pay users back in kind: the data we need in exchange for the data we freely give.
Instead of asking platforms to discern what’s political from what’s not (a judgment they’re not particularly qualified to make), their responsibility would align with what they actually can know. Is an advertisement criticizing CNN as “fake news” a political ad? It doesn’t matter. If money changed hands, disclosure is required.
And let’s make it very simple: This should apply not only to every advertiser but also to individual users if they pay to boost their posts. If money was exchanged for amplification, users should know all about it.
Despite its reprehensible refusal to fact-check ads, Facebook has gone the farthest in ad transparency: A couple of clicks reveals the date an ad launched and all the active ads from that advertiser. But self-regulation is not enough. Better would be to require this through regulation, aimed at both platform and client. Platforms must build these advertiser archives, and keep and display the accompanying data. They must go to reasonable lengths to ensure that advertisers are who they say they are; anyone evading these restrictions (e.g., buying ads under different names) should no longer be allowed to advertise on the platform. Advertisers misrepresenting themselves should face penalties. In short, an “Honest Ads Act” for all advertising, across the digital ecosystem. There are several reasons for this.
First and foremost, unregulated political advertising is a part of the broader problem of online misinformation and manipulation. Some misinformation has been paid for, either through the advertising apparatus of a social media platform, or with small payments that “boost” the circulation of a specific post. On some sites, paid advertising can eventually circulate organically, forwarded like anything else. Especially on Facebook, what may seem like two separate elements of the platform are now thoroughly interwoven: Ads can be forwarded as content, and the circulation of some content has been paid for. Regulating online advertising would not solve the problem of misinformation, of course, but it would put distinct pressure on those who use advertising techniques to muddy the political waters.
Legitimate advertisers should have few complaints about having to stand behind their advertising, especially in the name of a healthier political sphere. But these requirements would discourage advertisers less eager to stand by their tactics: those pushing products that shouldn’t target children, or regulated services (like housing, credit, and employment) that shouldn’t discriminate by race, sex, or income. It would help reduce, or at least bring to light, nefarious tactics like targeting only users who listed “Jew hater” or “white genocide” in their profiles.
Being transparent to users doesn’t mean the burden of oversight should fall on users. Requiring advertisers to reveal the precise terms they use might help educate users about how targeted advertising works. More importantly, it would help journalists investigate bad-faith or unscrupulous advertisers. They could call out advertisers who direct different messages to different users, or field-test messages to maximize their impact—as the Trump campaign was revealed to be doing in 2016.
These requirements are not a penalty on the internet. Think of them instead as finally regulating advertising as we always should have—even for TV ads—but didn’t have the means to until now. While it isn’t particularly feasible to provide this kind of data in a 15-second ad on broadcast television, in the digital environment, delivering it is simple. Every ad can now carry the attestation of those whose interests it represents. The internet, in this sense, is the gift we didn’t know we needed.
This kind of aggressive transparency around advertising would also recall long-standing principles of the early web, principles these platforms and their managers have long forgotten. Besides a commitment to open participation and rough consensus, the web promised a wide provision of tools to those who need them; transparent and open standards for how to find and provide information; and generative hardware and platforms that empowered users rather than restricting them. The open web, free software, open source, HTML, access to knowledge, Wikipedia, GitHub … all prioritized giving users the tools, signals, and expertise to navigate the universe of information on their own terms. That is, until enormous, profit-driven social media platforms built on data collection and precision advertising emerged.
Platforms could make amends by revealing the precise logics by which they show us some ads and not others, how ads move through these intricate social networks. If a platform finds itself incapable of doing so at scale, perhaps that’s an indication that it has grown too big to meet its obligations to the public altogether.
Such a policy would not be a panacea. Social media companies earn substantial revenue from political advertising, enough that they’re willing to court campaigns, advise them on marketing strategies, and obscure their own role in helping campaigns field-test their ads. These are all additional problems that transparency does not resolve. Facebook should be fact-checking political ads, even if it really can’t. Its unwillingness to recognize itself as a steward of public discourse remains deeply worrying. And there are enormous problems in our contemporary political discourse that have little to do with how online advertising works.
This is not a particularly feasible proposal, given the current political climate in the United States. There is little appetite for broad restrictions on advertising, beyond political campaigning, products like tobacco and alcohol, and advertising targeting children. It is unrealistic to imagine any platform volunteering to be the first to commit to such aggressive standards of transparency, for fear of losing business to the rest. And it is unrealistic to imagine the corporate giants that spend billions on advertising supporting such a move. Revealing how ads circulate might inadvertently reveal that targeted advertising is not as effective as has been claimed.
Still, out of concern for the corruption of the democratic process, we must deal with how easy it has become to pay platforms to amplify your message. If this is an unrealistic proposal, it is worth wondering why that is. And if the tactical amplification of political messages on social media is as pernicious a problem as it appears, it is worth wondering what measures are necessary.