Vox: Time for Liberals to Get Over Citizens United

It takes courage to buck the orthodoxy of your foundational audience, so it’s always nice in these polarized times to see a major publication like Vox offer a somewhat dissenting view on Citizens United and other progressive dog-whistles on campaign finance reform. Today Scott Castleton writes: “Repealing the controversial decision is a pipe dream. And there are more promising avenues for campaign-finance reform.”

In 2017, the commissioner of the Federal Election Commission resigned, claiming “since the Supreme Court’s Citizens United decision, our political campaigns have been awash in unlimited, often dark money.” This was the animating sentiment of Bernie Sanders’s 2016 campaign for president; he even went so far as to claim that billionaires are simply “buying elections.”

This idea has given rise to a new liberal battle cry: Repeal Citizens United! Unfortunately, that tactic is naive and misguided, and relies on a misunderstanding of the law and politics surrounding the case. …

Let’s put the hated decision into context. The inundation of elections with private cash is not the result of Citizens but rather was facilitated by the 1976 decision Buckley v. Valeo. That case established the legal framework sanctioning billions of dollars of independent private campaign spending. In it, the Court ruled that limits on campaign donations — direct donations to candidates — are constitutional but said it was unconstitutional to limit non-donation expenditures, such as independently funded advertisements.

 Such independent spending — which cannot be coordinated with candidates, according to the Court — was protected under the First Amendment as not just speech but political speech. The idea is that money is a necessary instrument for supporting a political candidate, whether it’s paying for yard signs or taking out an ad in the newspaper.

Not unreasonably, the Court ruled that limitations on independent expenditures would constitute limitations on one’s ability to support a candidate through any number of media. Placing a dollar limit on such expenditures would arbitrarily prevent certain kinds of campaign support simply by the fact of how expensive they are. …

Citizens simply has not had the seismic legal impact that many think. Since Buckley protected money as speech, the only question was whether corporations were legitimate speakers. It may surprise some to hear, but the Court had already answered this question in 1978. In First National Bank of Boston v. Bellotti, the Supreme Court recognized a corporate right to free speech, concluding that the value of speech in the course of political debate does not depend on the identity of the speaker. Citizens simply followed the precedent of these two cases.

So when liberals intone that “corporations aren’t people,” thinking they are making a knock-down argument against Citizens, they miss the point. Citizens did not make corporations persons. And corporations do not need to be persons to receive First Amendment protections. Citizens upheld the liberty, provided by Bellotti, of corporations to speak, and they speak under the rules provided by Buckley.

Castleton then suggests that the remedy for “big money” in politics is to encourage small money donations. He uses the example of small money propelling Bernie Sanders’ 2016 presidential campaign to prominence.

Castleton’s arguments are simplistic and often misguided, but at least he’s questioning what others deem immovable dogma: maybe censorship is not the answer. Just as with other forms of speech, where the correct answer to “bad” speech is more speech, maybe the answer to “the wrong” people spending “too much” money on elections is to help other people speak their own minds.

There is another point that Castleton misses entirely: the amount of big money in politics is driven almost entirely by advertising costs. That was true when the growth of television ad campaigns drove the growth of campaign spending. Yet recent research suggests that, past a certain point, these ads have zero effect on voter turnout, and shift candidates’ relative shares of the vote by only one-half of one percent. That is enough to affect very close campaigns, but hardly a dramatic justification of the costly advertising.
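To put that half-point figure in perspective, here is a back-of-the-envelope sketch in Python (the race margins are invented for illustration; only the 0.5-point effect size comes from the research described above):

```python
# Hypothetical illustration: a 0.5-point shift in vote share matters only
# when the margin between the candidates is smaller than the shift itself.
def flipped_by_ads(margin_points: float, ad_effect_points: float = 0.5) -> bool:
    """Could an ad-driven shift of `ad_effect_points` percentage points
    reverse a race decided by `margin_points` percentage points?"""
    return margin_points < ad_effect_points

print(flipped_by_ads(10.0))  # False: a 10-point race is far beyond the effect
print(flipped_by_ads(0.2))   # True: a 0.2-point race is within the effect
```

In other words, the measured effect is real but consequential only at the margins, which is the point of the paragraph above.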

In fact, relative spending in recent presidential campaigns showed no effect on the outcome, with the Clinton campaign and her allies outspending the Trump campaign and its allies two-to-one. The more effective campaigns, Obama’s and Trump’s, spent less money, and spent more of what they did spend on highly-targeted digital campaigns or similarly-targeted advertising blitzes.

The reason: the old adage of “half your advertising is wasted, but you don’t know which half” no longer applies in this era of rapid and highly-refined ad targeting. As Advertising Age reported: “And, while broadcast TV retained its dominance, the mass media mainstay of political advertising took a big blow from more targetable and data-driven ad options such as cable TV and digital.”

That dollar-driven trend will likely continue, meaning that the question is not how much money is pouring into campaigns, but how that money is used.

The Noose Around Issue Advocacy Tightens Again Today: Google Ads Will Require “Verification” to Mention or Show A Candidate or Current Officeholder

Today the noose begins to tighten more around social media issue advocacy. Facebook is not the only platform censoring content; today Google announced a new requirement for advertisers who want to “purchase an election ad” on Google:

As a first step, we’ll now require additional verification for anyone who wants to purchase an election ad on Google in the U.S. and require that advertisers confirm they are a U.S. citizen or lawful permanent residents, as required by law. That means advertisers will have to provide a government-issued ID and other key information. To help people better understand who is paying for an election ad, we’re also requiring that ads incorporate a clear disclosure of who is paying for it.

Google says its version does not go as far as Facebook’s: it doesn’t also cover “issue ads,” a slippery term, not entirely defined by Facebook, that can mean almost anything depending on who’s speaking. But Google will likely go that far in the near future, depending on its conversations with third parties: “As we learn from these changes and our continued engagement with leaders and experts in the field, we’ll work to improve transparency of political issue ads and expand our coverage to a wider range of elections.”

Axios has more. For example:

Advertisers can go through the verification process starting at the end of May, and Google will start enforcing the new rules on July 10, the company said.

The new requirements will apply to ads featuring candidates for federal office or current officeholders in the United States.

Google will also start requiring these ads to carry a disclosure that says who paid for them.

So even though Google’s new policy says it doesn’t cover issue ads, it probably does. Many issue ads, as defined by the Supreme Court in Wisconsin Right to Life and Internal Revenue Service rules, deal with legislative issues and say things like “Write your Senator” or “Senator Jones Supports S. 123.” One assumes these would be considered “featuring … current officeholders in the United States.”

Thus, Google now restricts issue ads even without saying so, illustrating again how difficult it is to limit speech, even in the service of some worthy purpose, without collateral damage that would make such a policy unconstitutional if imposed by a government subject to the First Amendment.

[Update – Reaction to new Google policy:

Jason Torchinsky, a well-known attorney to many politically-active organizations, writes: 

I think under Google’s policy – with the election related labeling – our non profit clients will have a hard time arguing that any of these ads are purely issue advertisements – no matter when they are publicly released – when they will have an “election related” label right on them.
What happens even during the 2018 lame duck session?  No calls to specific members from non profits that won’t do political activity or are near their limits?
Eric Wang, another well-known attorney to many politically-active organizations and a Senior Fellow at the Institute for Free Speech, writes:

So, voters shouldn’t have to be required to show government-issued ID to vote, but speakers should be required to show government-issued ID when talking about election-related topics (whatever that means).

In the immortal words of the late James Traficant, “Beam me up!”]


9th Circuit Denies Rehearing En Banc in Montana Contribution Limits Appeal

It’s been six months since a three-judge panel of the Ninth Circuit Court of Appeals, on a 2-1 vote, upheld Montana’s right to limit campaign contributions. After Citizens United, the only governmental interest strong enough to override the First Amendment is quid pro quo corruption or the appearance of corruption, and the government had to show “objective evidence” of that to justify a limit on speech. See McCutcheon v. FEC, 134 S. Ct. 1434, 1441, 1444–45 (2014); Citizens United v. FEC, 558 U.S. 310, 359 (2010).

The question in Lair was what was “objective evidence” of quid pro quo corruption: was evidence about lobbying or campaign contributions enough to show corruption, even though the Supreme Court held in Citizens United and McCutcheon that “ingratiation” or “access” was not corruption or its appearance? For example, Chief Justice Roberts wrote in McCutcheon:

We have said that government regulation may not target the general gratitude a candidate may feel toward those who support him or his allies, or the political access such support may afford. “Ingratiation and access . . . are not corruption.” Citizens United v. Federal Election Comm’n, 558 U. S. 310, 360 (2010). They embody a central feature of democracy—that constituents support candidates who share their beliefs and interests, and candidates who are elected can be expected to be responsive to those concerns.

The original panel decision last October ruled that, to prove “objective evidence” of corruption, Montana only had to show objective evidence of lobbying activity or campaign contributions. Today, the entire Court of Appeals refused to rehear the case en banc.

Five judges dissented, in an opinion written by Judge Sandra Ikuta, and two concurred in the denial. Judge Ikuta’s dissent stressed that the cases on which the original panel majority relied were handed down before recent Supreme Court cases like McCutcheon and Citizens United. The concurrence, written by Judges Raymond Fisher and Mary Murguia, argued that the original panel had respected the newer Supreme Court precedents in its decision.

The denial’s dissent and concurrence have actually set up a set of very timely and important questions for the Supreme Court. Professor Richard Hasen, host of the Election Law Blog, has already posted about the denial of rehearing. Hasen, who has defended contribution limits in the past, believes that the dissent and concurrence have identified an issue that the Supreme Court might well choose to review:

Judge Ikuta’s dissent hits on an unresolved question. There are a number of campaign contribution cases, such as Shrink Missouri, decided when the Court was much more deferential to campaign finance regulations and much more willing to let states and localities support contribution limits with a little bit of evidence. No doubt these cases are in tension with McCutcheon, but McCutcheon did not overrule these cases. And so judges like today divide on what to do.

Nevertheless, Hasen posits that a reversal of the Ninth Circuit’s decision “would almost certainly be to call into question all campaign contribution limits (as indicated in the Judge Fisher/Judge Murguia response).” He doesn’t think the Supreme Court would want to do that much and so might be unlikely to grant certiorari. Hasen does not mention that both Citizens United and McCutcheon are controversial cases, but that is likely behind his thinking.

On the other hand, this is a “clean” case, in the sense that there aren’t a lot of extraneous procedural or other issues that prevent a direct Supreme Court review of the critical legal question. The primary method for convincing the Supreme Court to review a decision is a division (“conflict”) among the circuit courts of appeal. The reason for this is to prevent “forum-shopping” between federal courts when litigants see that decisions in one Circuit are favorable and another not so much. Uniformity of the law across the country is primary among the interests of the Supreme Court.

And the Supreme Court has always been fairly protective of its own decisions, and may choose to use this case to educate lower court judges on this fundamental question. McCutcheon is a particularly recent decision, and, in Part V of its opinion, the Court parsed at some length the corruptive effect of campaign contributions and lobbying:

The Government argued that there is an opportunity for corruption whenever a large check is given to a legislator, even if the check consists of contributions within the base limits to be appropriately divided among numerous candidates and committees. The aggregate limits, the argument goes, ensure that the check amount does not become too large. That new rationale for the aggregate limits—embraced by the dissent, see post, at 15–17—does not wash. It dangerously broadens the circumscribed definition of quid pro quo corruption articulated in our prior cases, and targets as corruption the general, broad-based support of a political party.

But that’s exactly what the Ninth Circuit decision would do. The Fisher/Murguia concurrence, for example, gave two examples of legislators speculating about large contributions to the Republican Party. Slip op. 25. This is a fundamental divide about the corrupting effect of campaign contributions, just four years after the Supreme Court dealt with the same question in McCutcheon. 

This one might be more important than its focus on the quantum of evidence would indicate. And Jim Bopp, the legendary attorney who brought this case, just messaged me to say that he definitely would ask the Supreme Court to grant cert.

The Quickest Way To Censorship And Public Incivility Is To Ask If Everything is Hate Speech

I’ve been planning to write again about Facebook’s efforts to censor its major product: what people post. Facebook is rolling out so much at once, though, that it’s hard to keep up, and harder to craft the lengthy posts necessary to understand the impact of today’s technology on First Amendment advocacy questions. But today’s Facebook screwup, asking people if every news item or post is “hate speech,” has to be explained in historical and technical context to understand how massively injurious it will be.


(From Ars Technica)

[Update: Facebook has now responded that this was not a “mistake” per se. Vice-President of Product Management Guy Rosen said that it was “a test and a bug” which was visible for only 20 minutes. Inverse‘s Danny Paez points out, quite reasonably, that “A prompt like ‘Does this contain hate speech?’ may seem silly when it’s under something benign, but it could be a way for the company to train an algorithm to think more like a human.” But the question running through the rest of my much longer post below is: do we want machines to think like humans? Not if we don’t want human biases incorporated into our algorithms.]

[More Updates: Wired offers an opposing view, asking for more human intervention but recognizing a role for AI, from a researcher who has studied hate speech. And C|Net offers a report from yesterday’s F8 Facebook conference on Facebook’s AI programs.  “‘We have a lot of work ahead of us,’ Guy Rosen, vice president of product management, said in an interview last week. ‘The goal will be to get to this content before anyone can see it.'”]

In short, Facebook appears to be promoting a “reform” that will lead to increasing amounts of public incivility and possibly unfounded and unsupportable regulation.

Exactly the opposite of what it says it wants. And decidedly unscientific.

Some of what Facebook has done is at least arguably reasonable. Its attempt to define “community standards” is far better than Justice Potter Stewart’s “I know it when I see it” standard for obscenity. Jacobellis v. Ohio, 378 U.S. 184, 197 (1964). When you remember that Facebook’s anonymous drafters have to deal across many different cultures with different levels of awareness, most of its community standards make sense.

The best way to understand what Facebook is up to now, however, is to watch a Roomba robo-vacuum explore and clean a room for the first time. It starts out spinning in a circle for a while, then tentatively bumps out in one direction until it hits something, spins again and moves somewhere else, and so on, for an amazingly long time compared to a human “sanitary engineer,” who simply looks, recognizes and vacuums. It is the beginning of a search by a machine (or those responsible for machine thinking) for a path through the typically-chaotic environment created by unpredictable humans. Another example is how robot engineers learned that children near a robot were a highly-dangerous environment — for the robot, which had to be protected from the kids.

Many people looking at how to police “harmful speech,” Mark Zuckerberg included, talk about “AI” (Artificial Intelligence) being the holy grail of dealing with vast amounts of speech, hateful or otherwise. And the latest “flavor of the month” in AI for this topic is “machine learning.” Machine learning is simply asking a machine to analyze massive amounts of data to predict future outcomes.

Machine learning is, in fact, likely to be a very good way to look at human speech and actions to identify root causes of later actions that may not be intuitively and immediately obvious. Because they are not subject to “confirmation bias,” primacy, perseverance, and many other human tendencies to misread or ignore evidence, well-programmed machines can start with a blank slate, looking for correlations and factors that would likely elude even determined human observers.
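As a minimal sketch of that blank-slate approach (all data and feature names below are invented, including the nod to the frozen-dinners finding discussed later), a program can rank every feature purely by how strongly it correlates with an outcome, with no human preconception about which features “should” matter:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented records: each feature list is paired with an outcome
# (1 = responded to the appeal, 0 = did not).
features = {
    "age":            [60, 25, 47, 30, 63, 41],
    "frozen_dinners": [1, 0, 3, 4, 6, 7],   # monthly purchases (made up)
    "yard_signs":     [0, 1, 0, 1, 0, 1],
}
outcome = [0, 0, 1, 1, 1, 1]

# Rank features by absolute correlation with the outcome -- the machine
# surfaces whatever predicts best, however unintuitive it looks to us.
ranked = sorted(features, key=lambda f: abs(pearson(features[f], outcome)),
                reverse=True)
print(ranked)  # "frozen_dinners" ranks first on this toy data
```

A human analyst might never think to test frozen-dinner purchases; a blank-slate ranking tests everything it is given.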

I’m not unsympathetic to the difficulty of Facebook’s self-imposed goal. I started “coding” in the 1960’s, as part of a California effort to promote computer and science literacy at an early age (I think it backfired in my case; I did go to college in astrophysics, but ended up a mildly-libertarian First Amendment lawyer). I have always had clients who are among the most advanced in data collection, analysis, and machine learning for use in advocacy and political activity.

You can, for example, view the movement of computer-aided analysis in advocacy in three stages: looking at the past to predict the future -> looking at the present to predict the future -> using mathematics to predict the future. In the early days of computer analysis, we looked to see cause and effect in the past. For example, we mailed (snail-mailed in those days) to a list of members of an organization that looked like ours, based on perceived characteristics; if we got a good return from that mailing, we did it again (and if we were smart, we cross-checked among lists to find newer and better lists to mail). Back in the 1970’s, the Democratic National Committee had me learn these list techniques from Matt Reese, the “father of microtargeting.”

Looking at the past was mostly human-guided. We looked at the results of prior tests and paid a lot of money to experts who said they could tell us what they meant. But predicting future results was mostly guesswork. And, of course, distorted by the same biases that humans — even well-trained humans — are always subject to.

In the present, better technology has given us almost real-time results of tests. The Trump and Obama campaigns were very good at this. With massive scale and advanced technology, those data teams could target millions of fundraising and political appeals with vastly-improved precision. But it was still brute-force guesswork, for the most part, with human predictions driving analyses, which then produced more human-guided predictions.

An example: Cambridge Analytica’s “psychographic modeling” of behavior. In effect, it attempted to automate the “psychohistory” of Isaac Asimov’s 1950’s Foundation trilogy, in which algorithms could predict human history hundreds of years into the future. The popular OCEAN psychographic modeling technique was developed in the 1980’s. And it was pretty much what Matt Reese and programmer Jonathan Robbin were doing with “Claritas” in the 1970’s, categorizing mailing lists into “clusters” of people likely to respond to particular messages. More importantly, this type of modeling, though based on computer-aided analyses and much more accurate than earlier methods, is still subject to human biases, both in analyses of massive data and in predictions based on the analyses.

Enter the (predicted and sometimes here) third wave: removing humans from the prediction process. Machine learning. Develop the algorithms necessary for the computers themselves to review the data and make predictions from it. Taking humans out of the process allows insights such as the famous “political ideology can be predicted by sales of frozen dinners.” Which is, in fact, true, if incomplete.

What is missing from most analyses of machine learning is an understanding of what separates good from bad machine learning. If all the machines do is take a quick snapshot of what’s happening, the resulting analyses are likely to be no better than what Reese and Robbin were doing five decades ago. The predictions from those analyses will be very accurate — for a very short time and in limited circumstances. The use of static machine learning analyses is simply another form of human bias: “it came from a computer, so it must be true.”

Accurate modern machine learning is an “iterative” and constantly-changing process. If analyses of data are improved by removing human biases, so are analyses of data analysis algorithms and models. And one of the elements of improved algorithms is their maintenance and improvement over time. Feedback and testing are requirements for any machine-learned result, and more importantly, for any machine-learning algorithm. What we might have felt was accurate in 2015 was likely proven wrong in 2017.
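A toy contrast makes the iterative point concrete (the data stream and learning rate are invented; a real system would use far richer models and held-out validation): a perceptron that keeps re-estimating its weights every time it revisits labeled feedback, rather than freezing after one snapshot.

```python
# An iterative learner: perceptron weights are updated on every labeled
# example, pass after pass, rather than frozen after a one-time snapshot.
def train_online(stream, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):               # keep revisiting the feedback
        for (x1, x2), label in stream:    # label is +1 or -1
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != label:             # learn only from mistakes
                w[0] += lr * label * x1
                w[1] += lr * label * x2
                b += lr * label
    return w, b

# Invented feedback stream: points with large coordinates are "positive".
stream = [((0.0, 0.0), -1), ((0.2, 0.3), -1), ((1.0, 1.0), 1), ((0.9, 0.8), 1)]
w, b = train_online(stream)
errors = sum(1 for (x1, x2), label in stream
             if (1 if w[0] * x1 + w[1] * x2 + b > 0 else -1) != label)
print(errors)  # 0 -- the repeated updates have converged on this toy stream
```

The point is the loop, not the model: an algorithm maintained against fresh feedback corrects itself, while a 2015-vintage snapshot keeps making 2015’s mistakes.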

Today’s obvious Facebook mistake is not just a reversion; it’s a demonstration of exactly the wrong way to go. The trend in predicting human issue and political positions is, as shown above, to move beyond human biases. Today, Facebook mistakenly rolled out a tool that not only solicits human bias, but uses it to silence the speech of others.

Facebook is going to ask readers of its newsfeed to click a box if they think something is “hate speech.” Facebook’s Community Standards define “hate speech” as:

We do not allow hate speech on Facebook because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence.

We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation. We separate attacks into three tiers of severity, as described below.

Sometimes people share content containing someone else’s hate speech for the purpose of raising awareness or educating others. Similarly, in some cases, words or terms that might otherwise violate our standards are used self-referentially or in an empowering way. When this is the case, we allow the content, but we expect people to clearly indicate their intent, which helps us better understand why they shared it. Where the intention is unclear, we may remove the content.

Do they really expect people to read and understand that particular definition before they click “yes” or “no?” Or is it more likely that people will just call everything they don’t like “hate speech?”

Facebook is a private company, not a government subject to the First Amendment. In that context, Facebook is perfectly entitled to define hate speech as whatever it feels like. But, as Guidestar (which publishes copies of tax filings by tax-exempt entities) found when it tried to flag three dozen organizations as “hate groups,” mistakes can happen. Imagine your outrage if someone wrongfully accused you of being a “hate group.” Guidestar won a costly legal battle, but its new chief executive Jacob Harold “acknowledged that there were reasonable disagreements about the fairness of some of the hate-group labels.”

And, in fact, Facebook admits that mistakes have happened in its existing “hate speech” monitoring process. ProPublica, a journalistic advocacy organization, reported in December 2017 that when it asked Facebook about 49 seemingly offensive examples of “hate speech” that remained visible, Facebook replied that 22 of the 49 decisions were “the wrong call.”

Facebook is much, much larger and stronger than Guidestar. It can undoubtedly withstand the thousands of lawsuits likely to arise from its labeling of posts as “hate speech.” But its legal position is far weaker than Guidestar’s; Guidestar won its lawsuit because it was not deemed to be engaging in “commercial speech.” Facebook won’t have that protection.

But even more important, if Facebook’s halting and mistaken efforts to use “crowdsourcing” to determine whether particular posts or ads are “objectionable content” are any indication, Facebook is headed down the wrong AI path. As shown above, modern AI analyses, especially those involving the kind of machine learning Facebook appears to be pursuing, take OUT the human element, using objectively measurable standards rather than innate human biases. Facebook is doing the opposite.

By proceeding in an unscientific and biased manner, Facebook itself is encouraging a finding of “hate speech,” and thus slanting not only its actions, but likely also the reporting of how much “hate speech” actually exists. Just asking the question of humans is very likely to trigger a much higher level of positive reporting, a well-recognized phenomenon known as the “observer-expectancy” effect. Because the question is asked, and a reporting option offered, readers will think that someone must have thought it was hate speech to even ask about it. This is a “positive feedback loop” that will likely have unexpected and probably unpleasant consequences.
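That feedback loop can be sketched deterministically (every rate below is invented for illustration; this is not a model of Facebook’s actual reporting data): if prompted users add flags in proportion to the flags they already see, the reported rate compounds away from the true rate.

```python
# A positive feedback loop in miniature: each round, prompted users add
# flags proportional to the flagging they already observe.
def reported_rate(true_rate, prompt_bias, rounds):
    rate = true_rate
    for _ in range(rounds):
        rate = min(1.0, rate + prompt_bias * rate)  # compounding over-reporting
    return rate

true_rate = 0.02  # invented share of posts that actually violate the policy
observed = reported_rate(true_rate, prompt_bias=0.5, rounds=5)
print(round(observed, 4))  # 0.1519 -- more than seven times the base rate
```

Whatever the true numbers, the shape is the problem: a measurement that feeds back into the behavior being measured drifts upward, exactly the “observer-expectancy” dynamic described above.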

Again, I understand that this was a preliminary effort, and Facebook has been candid that in other countries, relying on posts to determine whether language can be “coded” threats is useful. But Facebook’s own history, as well as the well-reported development of analytical techniques using Facebook’s own type of technology, demonstrate that having people self-report “hate speech” is likely to over-report and under-report, as well as mis-report, objectionable speech. Even worse, it is probably equally likely to censor or restrict legitimate speech that someone just doesn’t like.

And that doesn’t seem at all within Facebook’s own community standards or announced mission.