Only “Authorized” Speakers Can Put “Issue Ads” on Facebook Now

Facebook just announced, in a blog post by its Vice-Presidents for Ads and for Local & Pages, that only “authorized” advertisers may run issue advertisements on Facebook. This extends a similar restriction Facebook announced last October.

Last October, we announced that only authorized advertisers will be able to run electoral ads on Facebook or Instagram. And today, we’re extending that requirement to anyone that wants to show “issue ads” — like political topics that are being debated across the country. We are working with third parties to develop a list of key issues, which we will refine over time. To get authorized by Facebook, advertisers will need to confirm their identity and location. Advertisers will be prohibited from running political ads — electoral or issue-based — until they are authorized.

In addition, these ads will be clearly labeled in the top left corner as “Political Ad.” Next to it we will show “paid for by” information. We started testing the authorization process this week, and people will begin seeing the label and additional information in the US later this spring. …

We know we were slow to pick-up foreign interference in the 2016 US elections. Today’s updates are designed to prevent future abuse in elections — and to help ensure you have the information that you need to assess political and issue ads, as well as content on Pages. By increasing transparency around ads and Pages on Facebook, we can increase accountability for advertisers — improving our service for everyone.

Facebook is a private entity, and so is not bound by the First Amendment. It also has a history of favoritism in political activities, including helping the 2008 Obama campaign obtain (or “scrape”) and profile friend lists – which, ironically, is very similar to the behavior that got Cambridge Analytica (and Facebook) in trouble. Both organizations scraped friends lists; the major difference was that the Obama campaign asked users to send its campaign messages – an example of the peer-to-peer communication that has been a mainstay of American politics at least since the classic “Abe Lincoln four-step” technique – while Cambridge Analytica simply sold the information to others.

But today’s new restrictions on “issue ads” are something different. It’s difficult enough for a private company to determine when an ad is “political” even when it addresses an election or a candidate. The U.S. Supreme Court itself had to relax an “express advocacy” rule that defined an electoral or candidate-related advertisement by whether it used the “magic words” “vote for” or “vote against”: “a court should find that an ad is the functional equivalent of express advocacy only if the ad is susceptible of no reasonable interpretation other than as an appeal to vote for or against a specific candidate.” FEC v. Wisconsin Right to Life, 551 U.S. 449 (2007). The Federal Election Commission generally uses a “PASO” test, asking whether the ad “promotes, attacks, supports, or opposes” a specific candidate or party. The IRS uses a “3 T’s” test, looking at whether the Timing of an ad is close to an election, whether the Targeting of an ad reaches those who will vote in a particular election, and whether the Text refers to a candidate’s character or fitness for office.

It is much, much harder to determine when an “issue ad” is “political.” Issue ads, by their nature, address controversial subjects on which there are reasonable positions about which people disagree. What is “political” to one person may look completely different to another. A new study suggests that reactions to “political” choices may be hard-wired into the human brain, which is one reason it’s so difficult to change those choices:

It’s no sweat to change your mind on the accomplishments of Thomas Edison. But on topics like abortion, same-sex marriage, and immigration, people don’t budge. … The brain processes politically charged information (or information about strongly held beliefs) differently (and perhaps with more emotion) than it processes more mundane facts. It can help explain why attempts to correct misinformation can backfire completely, leaving people more convinced of their convictions.

Some people may say that Facebook isn’t censoring the ads; it will still run the ads if the sponsor is “authorized” and discloses certain information. And it responded to the Ars Technica article revealing the Facebook authorization requirement by claiming that the authentication will likely be “location-based” rather than content-based.

But if that is the emphasis, why is Facebook working with “third parties” to make a list of topics on which ads will be considered “political”? Are the topics going to be controversial, partisan, only “wedge” issues (which vary from campaign to campaign) and so on?

Will Facebook end up conducting the same kind of psychological analysis of users to determine when an ad is “political” that it criticized when Cambridge Analytica used its own proprietary – and really very odd – form of remote psychoanalysis? But that would be OK because … it’s Facebook, not some scummy political consulting firm?

As the Obama and Cambridge Analytica examples and the timeless examples of biased news media coverage demonstrate, very few private censorship choices would withstand even the simplest First Amendment review if the same actions were conducted by a governmental entity. And the reason that government is simply banned from conducting such issue analyses is that nobody, not even someone as big and powerful as Facebook, can do it well. So nobody – or at least nobody with power over what other people can say, see, or hear – should do it at all.