The Problem Is Facebook and Twitter Themselves

From The Washington Post:

Facebook said Saturday evening that an article raising concerns that the coronavirus vaccine could lead to death was the top-performing link in the United States on its platform from January through March of this year, acknowledging the widespread reach of such material for the first time. It also said that a site pushing covid-19 misinformation was among the top 20 most visited pages on the platform.

In related news, the German Marshall Fund issued a study of interactions on Facebook. The study found that sites that share news in misleading ways attracted a record share of user interactions:

More than 1 in 5 interactions — such as shares, likes or comments — with U.S. sites from April to June happened on “outlets that gather and present information irresponsibly,” according to [the study].

This includes outlets such as the Daily Wire, TMZ, the Epoch Times and Breitbart that researchers say “distort or misrepresent information to make an argument or report on a subject,” a metric determined by NewsGuard, a website cited in the study that rates the credibility of news sources. Researchers say these sources, which they argue spread subtler but still harmful forms of misinformation, are decidedly different from sites that publish overtly false news.

“These are the kinds of sites that will cherry pick anecdotes and are giving rise to vaccine hesitancy and other kinds of conspiracy theories,” said [the study’s director].

Researchers highlighted articles that they say “disproportionately amplify vaccine-hesitant voices over experts” and “fail to mention risks of not being vaccinated against covid-19” . . .

While platforms have cracked down on black-and-white cases of fiction masquerading as fact, they are still grappling with how to handle murky yet wide-reaching cases that stop short of falsehood. . . . 

The ratio of misleading content marks a five-year high for Facebook, where “false content producers” have received a higher share of engagement in the past, according to the findings (The Washington Post).

Unquote.

What should we do about a company like Facebook that seems hell-bent on spreading harmful misinformation? One answer is to prosecute Mark Zuckerberg and his ilk as dangers to public health.

However, Vinay Prasad, a professor at the University of California medical school, suggests a more measured approach: we should deal with social media companies the way we dealt with tobacco companies thirty years ago. From Medpage Today:

Many Americans, and especially healthcare providers, are frustrated as we watch yet another rise in COVID-19 cases. Severely ill and hospitalized patients are often unvaccinated, which is particularly disheartening, given the widespread availability — surplus — of mRNA vaccines in the U.S. . . .

One potential reason why a sizable fraction of Americans are reluctant to be vaccinated is the widespread availability of inaccurate, unbalanced, or irrational rhetoric. This speech falls across a spectrum from overtly delusional — vaccines contain microchips so Bill Gates can track you — to lesser degrees of pejorative and doubtful comments. . . .

[But] regulating or policing medical misinformation is doomed. It’s easy for most (sensible) people to recognize that mRNA vaccines do not contain microchips that allow Bill Gates to track you. But very quickly we find statements about vaccines that are unknown, disputed, and worthy of further dialogue. Lines between legitimate debate and misinformation become scientifically impossible to draw. . . .

Even on a social media website for medical professionals that restricts who can comment and regulates comments, as Doximity does, there are a number of erroneous statements, misstatements, and ill-informed comments, suggesting that regulating speakers is not an effective solution either. Some doctors may say incorrect things, and some lay people may be spot on. Policing speakers can’t solve the content issue.

. . . It is easy to feel that some erroneous views should not be permitted on social media, but the hard part is to define what should not be allowed. Notably, despite the Surgeon General’s report and much debate on the issue, no one has actually delineated what counts as misinformation. I suspect that it cannot be done. No one can create a rule book that separates black and white because the world we live in is only gray. You can’t outlaw what you can’t define.

The problem is Facebook and Twitter themselves.

In 50 years, social media in 2021 will look like the tobacco industry in 1960 — they knowingly offered an addictive product and, worse, hid the damage the addiction caused while actively trying to deepen the dependency. Social media companies try to keep you using the platform longer, baiting you with content to trigger your rage, disgust, lust, or hatred. These companies offer products that have been linked to anxiety and depression among users. . . .
When it comes to information, social media does three things.

First, it drives people into irrational poles. On one side are folks who think SARS-CoV-2 is not real or just another seasonal flu. These individuals are often suspicious of vaccination as a path out of the pandemic. On the other side are folks who believe we should lock down until there isn’t a single case of COVID left. . . . The very nature of social media drives individuals into further extreme positions, possibly aided by bots, sock-puppet accounts, or foreign intelligence agencies. The middle ground is lost.

Second, a good or bad idea on these platforms can reach millions of individuals. An anecdote (of dubious validity) of a vaccinated individual suffering a bizarre harm, or one of an unvaccinated person begging for vaccination before the endotracheal tube is placed . . . are both powerful psychological stories that reach millions. This is heroin of the mind.

Third, social media causes deterioration of discourse and harsh proposals. . . . We no longer see individuals with whom we have policy disagreements as people.

The solution is inevitable. Social media of 2021 must be dismantled and crippled like the tobacco industry. These digital tools have hijacked our neurotransmitters, just like tobacco. Denying the pernicious role of these platforms on our society is similar to those who denied the harms of tobacco. Just like tobacco, social media offers pleasures. But, just like tobacco, the industry that supports it has pushed too far, lusting for profits and domination.

. . . Our leaders offer toothless solutions like policing or removing information they view as particularly egregious. This introduces countless problems and immense potential for abuse. . . .

Instead, the platforms need to be crushed, broken up, and regulated. Rather than just censoring specific ideas, measuring attention and trying to capture more of it must be prohibited. . . . The platforms must be brought to their knees, just like Big Tobacco, while human ideas — good, bad, sublime, horrible, true, false, and everything in between — must be free.

Looking Toward January 6, 2025

Republicans are predictably screaming about their cult leader being kept off Facebook for the time being. They’re citing the First Amendment, of course, but that’s got nothing to do with social media platforms (until the government starts operating its own platform or regulating their content).

Or as our congressman, Tom Malinowski, tweeted:

The 1st Amendment gives us the right to say crazy things without gov’t interference. It doesn’t require Random House to give us a book contract, or FOX to give us a prime time show, or Facebook to amplify our rantings to billions of people. Freedom of speech is not freedom of reach.

A much more significant issue is the speed with which the Republican Party is deteriorating. From Greg Sargent of The Washington Post:

Rep. Liz Cheney’s fate appears sealed: Republicans are set to oust the Wyoming Republican as the No. 3 in the House GOP leadership . . . This is being widely depicted as a battle over the past . . . Most accounts portray it as a sign that in today’s GOP, fealty to the former president is a bedrock requirement, denouncing his lies about 2020 has become unacceptable, and telling the truth about the Jan. 6 insurrection is disqualifying.

All that is true, but the forward-looking dimension to this story is getting lost. What also seems unavoidably at stake is that the GOP appears to be plunging headlong into a level of full-blown hostility to democracy that has deeply unsettling future ramifications.

. . . Republicans may be unshackling themselves from any obligation to acquiesce to future presidential election outcomes they don’t like — that is, liberating themselves to overturn those outcomes by any means necessary.

. . . A Cheney spokesperson denounced her GOP enemies for wanting to “perpetuate lies about the 2020 election and attempt to whitewash what happened on Jan. 6.” This comes after Cheney told GOP colleagues that those lies are “poison in the bloodstream of our democracy” and that insurrection “is a line that cannot be crossed.”

Cheney has also urged Republicans not to turn “their back on the rule of law.” And she insists that the commission examining Jan. 6 should focus on the insurrection, not on leftist extremism that Republicans are hyping to muddy the waters around their ongoing radicalization.

So why is all this disqualifying? [It’s because] she’s demanding something important from fellow Republicans: a full and unequivocal renunciation of the lie that the election’s outcome was dubious. . . .

Now consider what else we’re seeing. Some Republicans are increasingly asserting a willingness to overturn future elections: Rep. Jody Hice’s primary challenge to the Georgia secretary of state is driven by the promise to use his power to invalidate future outcomes.

Other Republicans are asserting the freedom to keep alive the fiction that the election was stolen forever. In Arizona, a GOP-sponsored recount is underway [in hopes of] bolstering that false conclusion.

This combination is toxic: Republicans are untethering themselves from any obligation to recognize future legitimate election outcomes, which will provide the rationale to overturn them, a freedom they are also effectively in process of appropriating. Cheney is insisting on a GOP future premised on a full repudiation of these tendencies, and getting punished for it.

Guess what: These same House Republicans might control the lower chamber when Congress is counting electors after the 2024 presidential election.

“We should start to very much worry about what Jan. 6, 2025, looks like,” Edward Foley, a renowned election law scholar and a Post contributing columnist, told me.

Imagine a 2024 election decided in one state, where a GOP-controlled legislature sends electors for the GOP candidate in defiance of a close popular vote. The same House Republicans who punished Cheney — many of whom already voted against President Biden’s electors, but now control the House and have continued radicalizing — could vote to certify that slate. . . .

This places burdens on Democrats. Democratic strategist Simon Rosenberg told me that this obliges Democrats to level with voters about the threat Republicans pose to democratic stability.

“If Cheney is ousted, Democrats will have to make the radicalization of the GOP a major part of the 2022 conversation,” Rosenberg said.

And as elections scholar Rick Hasen told me, Democrats should try to get patriotic Republicans to support revisions to the Electoral Count Act, to make it “harder for a legislature to send a separate slate when there was no problem with how the election was run.”

Cheney’s ouster should prompt this, along with a much greater public and media focus on the brute reality of the GOP’s fundamental turn away from democracy.

“The core component of the democratic process is that we count the votes as cast,” Foley told me. The punishing of Cheney, Foley concluded, suggests that the Republican Party might [might???] be institutionally “abandoning the very essence of democracy.”

Keep This in Mind When You Hear the Right Claim They’re Censored on Social Media

It’s bullshit. From The Washington Post:

A new report calls conservative claims of social media censorship “a form of disinformation.”

[The] report concludes that social networks aren’t systematically biased against conservatives, directly contradicting Republican claims that social media companies are censoring them. 

Recent moves by Twitter and Facebook to suspend [the former president’s] accounts in the wake of the violence at the Capitol are inflaming conservatives’ attacks on Silicon Valley. But New York University researchers today released a report stating claims of anti-conservative bias are “a form of disinformation: a falsehood with no reliable evidence to support it.” 

The report found there is no trustworthy large-scale data to support these claims, and even anecdotal examples that tech companies are biased against conservatives “crumble under close examination.” The report’s authors said, for instance, the companies’ suspensions of [the ex-president’s] accounts were “reasonable” given his repeated violation of their terms of service — and if anything, the companies took a hands-off approach for a long time given [his] position.

The report also noted several data sets underscore the prominent place conservative influencers enjoy on social media. For instance, CrowdTangle data shows that right-leaning pages dominate the list of sources providing the most engaged-with posts containing links on Facebook. Conservative commentator Dan Bongino, for instance, far out-performed most major news organizations in the run-up to the 2020 election. 

The report also cites an October 2020 study in which Politico found “right-wing social media influencers, conservative media outlets, and other GOP supporters” dominated the online discussion of Black Lives Matter and election fraud, two of the biggest issues in 2020. Working with the nonpartisan think tank Institute for Strategic Dialogue, researchers found users shared the most viral right-wing social media content about Black Lives Matter more than ten times as often as the most popular liberal posts on the topic. People also shared right-leaning claims on election fraud about twice as often as they shared liberals’ or traditional media outlets’ posts discussing the issue.

But even so, baseless claims of anti-conservative bias are driving Republicans’ approach to regulating tech. Republican lawmakers have concentrated their hearing exchanges with tech executives on the issue, and it’s been driving their legislative proposals. . . .

The New York University researchers called on Washington regulators to focus on what they called “the very real problems of social media.”

“Only by moving forward from these false claims can we begin to pursue that agenda in earnest,” Paul Barrett, the report’s primary author and deputy director of the NYU Stern Center for Business and Human Rights, said in a statement.

The researchers want the Biden administration to work with Congress to overhaul the tech industry. 

Their recommendations focus particularly on changing Section 230, a decades-old law shielding tech companies from lawsuits for the photos, videos and posts people share on their websites. . . . 

The researchers warn against completely repealing the law. Instead, they argue companies should only receive Section 230 immunity if they agree to accept more responsibilities in policing content such as disinformation and hate speech. The companies could be obligated to ensure their recommendation engines don’t favor sensationalist content or unreliable material just to drive better user engagement. 

“Social media companies that reject these responsibilities would forfeit Section 230’s protection and open themselves to costly litigation,” the report proposed.

The researchers also called for the creation of a new Digital Regulatory Agency, an independent body tasked with enforcing a revised Section 230.

The report also suggested Biden could empower a “special commission” to work with the industry on improving content moderation, which would be able to move much more quickly than legal battles over antitrust issues. It also called for the president to expand the task force Biden announced on online harassment to focus on a broader range of harmful content.

They also called for greater transparency in Silicon Valley. 

The researchers said the platforms typically don’t provide much justification for sanctioning an account or post, and when people are in the dark they assume the worst. 

“The platforms should give an easily understood explanation every time they sanction a post or account, as well as a readily available means to appeal enforcement actions,” the report said. “Greater transparency—such as that which Twitter and Facebook offered when they took action against [a certain terrible person] in January—would help to defuse claims of political bias, while clarifying the boundaries of acceptable user conduct.”

Members of Congress Want Action from Facebook, YouTube and Twitter

Below is a press release from my congressman. It contains links to letters he and another member of Congress sent to the CEOs of three social media behemoths. The letters represent the view of dozens of representatives. Each letter is worth looking at, since each one highlights specific problems relating to the company in question:

Today, in the aftermath of the violent attack on the U.S. Capitol, Congressman Tom Malinowski (NJ-7) and Congresswoman Anna G. Eshoo (CA-18) sent letters to the CEOs of Facebook, YouTube, and Twitter urging the companies to address the fundamental design features of their social networks that facilitate the spread of extreme, radicalizing content to their users.

Representatives Malinowski and Eshoo, along with dozens of their colleagues, called on the companies to reexamine their policy maximizing user engagement as the basis for algorithmic sorting and promotion of news and information, and to make permanent and platform-wide design changes to limit the spread of harmful, conspiratorial content. 

The lawmakers note that the rioters who attacked the Capitol earlier this month were radicalized in part in digital echo chambers that these platforms designed, built, and maintained, and that the platforms are partially responsible for undermining our shared sense of objective reality, intensifying fringe political beliefs, and facilitating connections between extremists, leading some of them to commit real-world, physical violence.

To view the full text of the letters and their respective signers click on the links below.

  • Letter to Mark Zuckerberg, Facebook 
  • Letter to Susan Wojcicki and Sundar Pichai, YouTube; Alphabet/Google 
  • Letter to Jack Dorsey, Twitter 

“Social media platforms’ algorithms are designed to feed each of us increasingly hateful versions of what we already hate, and fearful versions of what we already fear, so that we stay glued to our screens for as long as possible. In this way, they regularly promote and recommend white supremacist, anti-Semitic, anti-government, and other conspiracy-oriented material to the very people who are most susceptible to it — some of whom just attacked our Capitol,” said Rep. Malinowski. “We are urging the CEOs of these large social media companies to make permanent and platform-wide changes to limit the frictionless spread of extreme, radicalizing content – something they’ve shown they are capable of doing but are consciously choosing not to.” 

“For years social media companies have allowed harmful disinformation to spread through their platforms, polluting the minds of the American people. Online disinformation is not just about removing bad content. I see it as largely a product design issue. The algorithmic amplification and recommendation systems that platforms employ spread content that’s evocative over what’s true,” said Rep. Eshoo. “The horrific damage to our democracy wrought on January 6th demonstrated how these social media platforms played a role in radicalizing and emboldening terrorists to attack our Capitol. These American companies must fundamentally rethink algorithmic systems that are at odds with democracy.”

Last fall, Representatives Malinowski and Eshoo introduced the Protecting Americans from Dangerous Algorithms Act, legislation to hold large social media platforms accountable for their algorithmic amplification of harmful, radicalizing content that leads to offline violence.

Rep. Malinowski represents New Jersey’s 7th congressional district. . . . Rep. Eshoo represents California’s 18th congressional district, which includes much of Silicon Valley. . . .

One Way to Start Fixing the Internet

Yaël Eisenstat has been a CIA officer, White House adviser and Facebook executive. She says the problem with social media isn’t just what users post — it’s what the platforms do with that content. From Harvard Business Review:

While the blame for President Txxxx’s incitement to insurrection lies squarely with him, the biggest social media companies — most prominently my former employer, Facebook — are absolutely complicit. They have not only allowed Txxxx to lie and sow division for years, their business models have exploited our biases and weaknesses and abetted the growth of conspiracy-touting hate groups and outrage machines. They have done this without bearing any responsibility for how their products and business decisions affect our democracy; in this case, including allowing an insurrection to be planned and promoted on their platforms. . . .

The events of last week . . . demand an immediate response. In the absence of any U.S. laws to address social media’s responsibility to protect our democracy, we have ceded the decision-making about which rules to write, what to enforce, and how to steer our public square to CEOs of for-profit internet companies. Facebook intentionally and relentlessly scaled to dominate the global public square, yet it does not bear any of the responsibilities of traditional stewards of public goods, including the traditional media.

It is time to define responsibility and hold these companies accountable for how they aid and abet criminal activity. And it is time to listen to those who have shouted from the rooftops about these issues for years, as opposed to allowing Silicon Valley leaders to dictate the terms.

We need to change our approach not only because of the role these platforms have played in crises like last week’s, but also because of how CEOs have responded — or failed to respond. The reactionary decisions on which content to take down, which voices to downgrade, and which political ads to allow have amounted to tinkering around the margins of the bigger issue: a business model that rewards the loudest, most extreme voices.

Yet there does not seem to be the will to reckon with that problem. Mark Zuckerberg did not choose to block Txxxx’s account until after the U.S. Congress certified Joe Biden as the next president of the United States. . . . And while the decision by many platforms to silence Txxxx is an obvious response to this moment, it’s one that fails to address how millions of Americans have been drawn into conspiracy theories online and led to believe this election was stolen — an issue that has never been truly addressed by the social media leaders.

A look through the Twitter feed of Ashli Babbitt, the woman who was killed while storming the Capitol, is eye-opening. A 14-year Air Force veteran, she spent the last months of her life retweeting conspiracy theorists, QAnon followers, and others calling for the overthrow of the government. . . . The likelihood that social media played a significant part in steering her down the rabbit hole of conspiracy theories is high, but we will never truly know how her content was curated, what groups were recommended to her, who the algorithms steered her towards.

If the public, or even a restricted oversight body, had access to the Twitter and Facebook data to answer those questions, it would be harder for the companies to claim they are neutral platforms who merely show people what they want to see. Guardian journalist Julia Carrie Wong wrote in June of this year about how Facebook algorithms kept recommending QAnon groups to her. . . .  The key point is this: This is not about free speech and what individuals post on these platforms. It is about what the platforms choose to do with that content, which voices they decide to amplify, which groups are allowed to thrive and even grow at the hand of the platforms’ own algorithmic help.

So where do we go from here?

I have long advocated that governments must define responsibility for the real-world harms caused by these business models, and impose real costs for the damaging effects they are having on our public health, our public square, and our democracy. As it stands, there are no laws governing how social media companies treat political ads, hate speech, conspiracy theories, or incitement to violence. This issue is unduly complicated by Section 230 of the Communications Decency Act, which has been vastly over-interpreted to provide blanket immunity to all internet companies — or “internet intermediaries” — for any third-party content they host. Many argue that to solve some of these issues, Section 230, which dates back to 1996, must at least be updated. But how, and whether it alone will solve the myriad issues we now face with social media, is hotly debated.

One solution I continue to push is clarifying who should benefit from Section 230 to begin with, which often breaks down into the publisher vs. platform debate. To still categorize social media companies — who curate content, whose algorithms decide what speech to amplify, who nudge users towards the content that will keep them engaged, who connect users to hate groups, who recommend conspiracy theorists — as “internet intermediaries” who should enjoy immunity from the consequences of all this is beyond absurd. The notion that the few tech companies who steer how more than 2 billion people communicate, find information, and consume media enjoy the same blanket immunity as a truly neutral internet company makes it clear that it is time for an upgrade to the rules. They are not just a neutral intermediary.

However, that doesn’t mean that we need to completely re-write or kill Section 230. Instead, why not start with a narrower step by redefining what an “internet intermediary” means? Then we could create a more accurate category to reflect what these companies truly are, such as “digital curators” whose algorithms decide what content to boost, what to amplify, how to curate our content. And we can discuss how to regulate in an appropriate manner, focusing on requiring transparency and regulatory oversight of the tools such as recommendation engines, targeting tools, and algorithmic amplification rather than the non-starter of regulating actual speech.

By insisting on real transparency around what these recommendation engines are doing, how the curation, amplification, and targeting are happening, we could separate the idea that Facebook shouldn’t be responsible for what a user posts from their responsibility for how their own tools treat that content. I want us to hold the companies accountable not for the fact that someone posts misinformation or extreme rhetoric, but for how their recommendation engines spread it, how their algorithms steer people towards it, and how their tools are used to target people with it.

To be clear: Creating the rules for how to govern online speech and define platforms’ responsibility is not a magic wand to fix the myriad harms emanating from the internet. This is one piece of a larger puzzle of things that will need to change if we want to foster a healthier information ecosystem. But if Facebook were obligated to be more transparent about how they are amplifying content, about how their targeting tools work, about how they use the data they collect on us, I believe that would change the game for the better.

As long as we continue to leave it to the platforms to self-regulate, they will continue to merely tinker around the margins of content policies and moderation. We’ve seen that the time for that is long past — what we need now is to reconsider how the entire machine is designed and monetized. Until that happens, we will never truly address how platforms are aiding and abetting those intent on harming our democracy.