Using the Legal System Against Facebook and Other Titans of the Internet

Two Democratic members of Congress are trying to stop big social media companies from doing so much damage:

Imagine clicking on a Facebook video alleging that a “deep-state cabal” of Satan-worshiping pedophiles stole the election from [a horrible person]. Moments later, your phone rings. The caller says, “Hey, it’s Freddie from Facebook. We noticed you just watched a cool video on our site, so we’ll send you a few dozen more videos about election-related conspiracy theories. As a bonus, we’ll connect you to some people who share your interest in ‘stopping the steal’. You guys should connect and explore your interest together!”

The scenario is, of course, made up. But it basically captures what social media platforms do every day. In the real world, “Freddie from Facebook” is not a person who calls you, but an algorithm that tracks you online, learns what content you spend the most time with and feeds you more of whatever maximizes your engagement — the time you spend on the platform. Greater engagement means that users see more ads, earning Facebook more revenue.
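
A minimal sketch of what such an engagement-driven ranker might look like, under the simplifying assumption that "engagement" is just time spent per topic. The class, field names, and numbers here are hypothetical illustrations, not Facebook's actual system:

```python
from collections import defaultdict

class EngagementRanker:
    """Toy feed ranker: surfaces more of whatever topics a user lingers on."""

    def __init__(self):
        # seconds of watch/read time per (user, topic)
        self.time_spent = defaultdict(float)

    def record_view(self, user_id, topic, seconds):
        """Track how long a user engaged with content on a given topic."""
        self.time_spent[(user_id, topic)] += seconds

    def rank_feed(self, user_id, candidate_posts, top_n=10):
        """Order candidate posts by the user's past engagement with their topic."""
        def score(post):
            return self.time_spent[(user_id, post["topic"])]
        return sorted(candidate_posts, key=score, reverse=True)[:top_n]


# Usage: a user who lingers on election-conspiracy videos gets more of them.
ranker = EngagementRanker()
ranker.record_view("alice", "election_conspiracy", seconds=600)
ranker.record_view("alice", "cat_videos", seconds=45)

candidates = [
    {"id": 1, "topic": "cat_videos"},
    {"id": 2, "topic": "election_conspiracy"},
    {"id": 3, "topic": "gardening"},
]
print(ranker.rank_feed("alice", candidates, top_n=3))
```

The point of the sketch is the design choice, not the details: nothing in the scoring function distinguishes cat videos from conspiracy content, only how long a user lingers on each.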

If you like cat videos, great; you’ll get an endless supply. But the same is true for the darkest content on the Web. Human nature being what it is, the content most likely to keep us glued to our screens is that which confirms our prejudices and triggers our basest emotions. Social media algorithms don’t have a conservative or liberal bias, but they know if we do. Their bias is to reinforce ours at the cost of making us more angry, anxious and afraid.

Facebook recently played down the role of its algorithms in exploiting users’ susceptibilities and enabling radicalization. The company says that users, not its product, are largely responsible for the extreme content showing up in their news feeds.

But Facebook knows how powerful its algorithms can be. In 2016, an internal Facebook study found that 64 percent of people who joined an extremist group on the platform did so only because its algorithm recommended it. Recently, a member of the Wolverine Watchmen, the militia accused of trying to kidnap Michigan Gov. Gretchen Whitmer (D), said he joined the group when it “popped up as a suggestion post” on Facebook because he interacted with pages supporting the Second Amendment.

Policymakers often focus on whether Facebook, YouTube and Twitter should take down hate speech and disinformation. This is important, but these questions are about putting out fires. The problem is that the product these companies make is flammable. It’s that their algorithms deliver to each of us what they think we want to hear, creating individually tailored realities for every American and often amplifying the same content they eventually might choose to take down.

In 1996, Congress passed Section 230 of the Communications Decency Act, which says that websites are not legally liable for content that users post (with some exceptions). While the law helped to enable the growth of the modern Internet economy, it was enacted 25 years ago when many of the challenges we currently face could not have been predicted. Large Internet platforms no longer function like community bulletin boards; instead, they use sophisticated, opaque algorithms to determine what content their users see. If companies such as Facebook push us to view certain posts or join certain groups, should they bear no responsibility if doing so leads to real-world violence?

We recently introduced a bill that would remove Section 230 protection from large social media companies if their algorithms amplify content that contributes to an act of terrorism or to a violation of civil rights statutes meant to combat extremist groups. Our bill would not force YouTube, Facebook or Twitter to censor or remove content. Instead, it would allow courts in cases involving extreme harm to consider victims’ arguments against the companies on the merits, as opposed to quickly tossing out lawsuits on Section 230 grounds as would happen today.

Liability would incentivize changes the companies know how to make. For example, last year Facebook tested a new system in which users rated posts on their news feeds as “good” or “bad” for the world. The algorithm then fed those users more content that they deemed good while demoting the bad. The experiment worked. The company’s engineers referred to the result as the “nicer news feed.” But there was one problem. The nicer news feed led to less time on Facebook (and thus less ad revenue), so the experiment died.
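
Facebook has not published how the "nicer news feed" experiment was wired into ranking, but the idea described above can be sketched as a re-ranking step that blends predicted engagement with aggregated "good/bad for the world" ratings. The field names and weights below are assumptions for illustration only:

```python
def rerank_with_survey_signal(posts, engagement_weight=1.0, good_weight=2.0):
    """Re-rank posts by blending predicted engagement with survey ratings.

    Each post carries a predicted_engagement score (e.g., expected time spent)
    and a good_for_world score in [-1, 1] aggregated from user ratings.
    A positive good_weight demotes posts rated "bad for the world" even when
    they are highly engaging.
    """
    def blended_score(post):
        return (engagement_weight * post["predicted_engagement"]
                + good_weight * post["good_for_world"])
    return sorted(posts, key=blended_score, reverse=True)


# Usage: the outrage-bait post has the highest predicted engagement,
# but the survey signal pushes it to the bottom of the feed.
posts = [
    {"id": "outrage_bait", "predicted_engagement": 0.9, "good_for_world": -0.8},
    {"id": "local_news",   "predicted_engagement": 0.4, "good_for_world": 0.6},
    {"id": "cat_video",    "predicted_engagement": 0.7, "good_for_world": 0.3},
]
print([p["id"] for p in rerank_with_survey_signal(posts)])
# ['local_news', 'cat_video', 'outrage_bait']
```

Setting good_weight to zero recovers the pure engagement ranking, which is the trade-off described above: the "nicer" setting costs time on site, and therefore ad revenue.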

This is the fundamental issue: Engagement-based algorithms made social media giants some of the most lucrative companies on Earth. They won’t voluntarily change the underlying architecture of their networks if it threatens their bottom line. We must decide what’s more important: protecting their profits or our democracy.

Unquote.

The authors of the article are Rep. Tom Malinowski, who represents a traditionally Republican district in suburban New Jersey, and Rep. Anna Eshoo, who represents the part of California that includes Silicon Valley.

Keep This in Mind When You Hear the Right Claim They’re Censored on Social Media

It’s bullshit. From The Washington Post:

A new report calls conservative claims of social media censorship “a form of disinformation”.

[The] report concludes that social networks aren’t systematically biased against conservatives, directly contradicting Republican claims that social media companies are censoring them. 

Recent moves by Twitter and Facebook to suspend [the former president’s] accounts in the wake of the violence at the Capitol are inflaming conservatives’ attacks on Silicon Valley. But New York University researchers today released a report stating claims of anti-conservative bias are “a form of disinformation: a falsehood with no reliable evidence to support it.” 

The report found there is no trustworthy large-scale data to support these claims, and even anecdotal examples that tech companies are biased against conservatives “crumble under close examination.” The report’s authors said, for instance, the companies’ suspensions of [the ex-president’s] accounts were “reasonable” given his repeated violation of their terms of service — and if anything, the companies took a hands-off approach for a long time given [his] position.

The report also noted several data sets underscore the prominent place conservative influencers enjoy on social media. For instance, CrowdTangle data shows that right-leaning pages dominate the list of sources providing the most engaged-with posts containing links on Facebook. Conservative commentator Dan Bongino, for instance, far out-performed most major news organizations in the run-up to the 2020 election. 

The report also cites an October 2020 study in which Politico found “right-wing social media influencers, conservative media outlets, and other GOP supporters” dominated the online discussion of Black Lives Matter and election fraud, two of the biggest issues in 2020. Working with the nonpartisan think tank Institute for Strategic Dialogue, researchers found users shared the most viral right-wing social media content about Black Lives Matter more than ten times as often as the most popular liberal posts on the topic. People also shared right-leaning claims on election fraud about twice as often as they shared liberals’ or traditional media outlets’ posts discussing the issue.

But even so, baseless claims of anti-conservative bias are driving Republicans’ approach to regulating tech. Republican lawmakers have concentrated their hearing exchanges with tech executives on the issue, and it’s been driving their legislative proposals. . . .

The New York University researchers called on Washington regulators to focus on what they called “the very real problems of social media.”

“Only by moving forward from these false claims can we begin to pursue that agenda in earnest,” Paul Barrett, the report’s primary author and deputy director of the NYU Stern Center for Business and Human Rights, said in a statement.

The researchers want the Biden administration to work with Congress to overhaul the tech industry. 

Their recommendations focus particularly on changing Section 230, a decades-old law shielding tech companies from lawsuits for the photos, videos and posts people share on their websites. . . . 

The researchers warn against completely repealing the law. Instead, they argue companies should only receive Section 230 immunity if they agree to accept more responsibilities in policing content such as disinformation and hate speech. The companies could be obligated to ensure their recommendation engines don’t favor sensationalist content or unreliable material just to drive better user engagement. 

“Social media companies that reject these responsibilities would forfeit Section 230’s protection and open themselves to costly litigation,” the report proposed.

The researchers also called for the creation of a new Digital Regulatory Agency, an independent body tasked with enforcing a revised Section 230.

The report also suggested Biden could empower a “special commission” to work with the industry on improving content moderation, which would be able to move much more quickly than legal battles over antitrust issues. It also called for the president to expand the task force he announced on online harassment to focus on a broader range of harmful content.

They also called for greater transparency in Silicon Valley. 

The researchers said the platforms typically don’t provide much justification for sanctioning an account or post, and when people are in the dark they assume the worst. 

“The platforms should give an easily understood explanation every time they sanction a post or account, as well as a readily available means to appeal enforcement actions,” the report said. “Greater transparency — such as that which Twitter and Facebook offered when they took action against [a certain terrible person] in January — would help to defuse claims of political bias, while clarifying the boundaries of acceptable user conduct.”

One Way to Start Fixing the Internet

Yaël Eisenstat has been a CIA officer, White House adviser and Facebook executive. She says the problem with social media isn’t just what users post — it’s what the platforms do with that content. From Harvard Business Review:

While the blame for President Txxxx’s incitement to insurrection lies squarely with him, the biggest social media companies — most prominently my former employer, Facebook — are absolutely complicit. They have not only allowed Txxxx to lie and sow division for years; their business models have exploited our biases and weaknesses and abetted the growth of conspiracy-touting hate groups and outrage machines. They have done this without bearing any responsibility for how their products and business decisions affect our democracy; in this case, including allowing an insurrection to be planned and promoted on their platforms. . . .

The events of last week . . . demand an immediate response. In the absence of any U.S. laws to address social media’s responsibility to protect our democracy, we have ceded the decision-making about which rules to write, what to enforce, and how to steer our public square to CEOs of for-profit internet companies. Facebook intentionally and relentlessly scaled to dominate the global public square, yet it does not bear any of the responsibilities of traditional stewards of public goods, including the traditional media.

It is time to define responsibility and hold these companies accountable for how they aid and abet criminal activity. And it is time to listen to those who have shouted from the rooftops about these issues for years, as opposed to allowing Silicon Valley leaders to dictate the terms.

We need to change our approach not only because of the role these platforms have played in crises like last week’s, but also because of how CEOs have responded — or failed to respond. The reactionary decisions on which content to take down, which voices to downgrade, and which political ads to allow have amounted to tinkering around the margins of the bigger issue: a business model that rewards the loudest, most extreme voices.

Yet there does not seem to be the will to reckon with that problem. Mark Zuckerberg did not choose to block Txxxx’s account until after the U.S. Congress certified Joe Biden as the next president of the United States. . . . And while the decision by many platforms to silence Txxxx is an obvious response to this moment, it’s one that fails to address how millions of Americans have been drawn into conspiracy theories online and led to believe this election was stolen — an issue that has never been truly addressed by the social media leaders.

A look through the Twitter feed of Ashli Babbitt, the woman who was killed while storming the Capitol, is eye-opening. A 14-year Air Force veteran, she spent the last months of her life retweeting conspiracy theorists, QAnon followers, and others calling for the overthrow of the government. . . . The likelihood that social media played a significant part in steering her down the rabbit hole of conspiracy theories is high, but we will never truly know how her content was curated, what groups were recommended to her, or whom the algorithms steered her towards.

If the public, or even a restricted oversight body, had access to the Twitter and Facebook data to answer those questions, it would be harder for the companies to claim they are neutral platforms who merely show people what they want to see. Guardian journalist Julia Carrie Wong wrote in June of this year about how Facebook algorithms kept recommending QAnon groups to her. . . .  The key point is this: This is not about free speech and what individuals post on these platforms. It is about what the platforms choose to do with that content, which voices they decide to amplify, which groups are allowed to thrive and even grow at the hand of the platforms’ own algorithmic help.

So where do we go from here?

I have long advocated that governments must define responsibility for the real-world harms caused by these business models, and impose real costs for the damaging effects they are having on our public health, our public square, and our democracy. As it stands, there are no laws governing how social media companies treat political ads, hate speech, conspiracy theories, or incitement to violence. This issue is unduly complicated by Section 230 of the Communications Decency Act, which has been vastly over-interpreted to provide blanket immunity to all internet companies — or “internet intermediaries” — for any third-party content they host. Many argue that to solve some of these issues, Section 230, which dates back to 1996, must at least be updated. But how, and whether it alone will solve the myriad issues we now face with social media, is hotly debated.

One solution I continue to push is clarifying who should benefit from Section 230 to begin with, which often breaks down into the publisher vs. platform debate. To still categorize social media companies — who curate content, whose algorithms decide what speech to amplify, who nudge users towards the content that will keep them engaged, who connect users to hate groups, who recommend conspiracy theorists — as “internet intermediaries” who should enjoy immunity from the consequences of all this is beyond absurd. The notion that the few tech companies who steer how more than 2 billion people communicate, find information, and consume media enjoy the same blanket immunity as a truly neutral internet company makes it clear that it is time for an upgrade to the rules. They are not just a neutral intermediary.

However, that doesn’t mean that we need to completely re-write or kill Section 230. Instead, why not start with a narrower step by redefining what an “internet intermediary” means? Then we could create a more accurate category to reflect what these companies truly are, such as “digital curators” whose algorithms decide what content to boost, what to amplify, how to curate our content. And we can discuss how to regulate in an appropriate manner, focusing on requiring transparency and regulatory oversight of the tools such as recommendation engines, targeting tools, and algorithmic amplification rather than the non-starter of regulating actual speech.

By insisting on real transparency around what these recommendation engines are doing, how the curation, amplification, and targeting are happening, we could separate the idea that Facebook shouldn’t be responsible for what a user posts from their responsibility for how their own tools treat that content. I want us to hold the companies accountable not for the fact that someone posts misinformation or extreme rhetoric, but for how their recommendation engines spread it, how their algorithms steer people towards it, and how their tools are used to target people with it.
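
Eisenstat doesn’t prescribe a specific mechanism, but one way to picture the transparency she is asking for is an audit trail that records every recommendation decision and the signal that drove it, so that an oversight body could later reconstruct how a piece of content was amplified and targeted. A purely illustrative sketch; the function, file name, and fields are hypothetical:

```python
import json
import time

def log_recommendation(user_id, item_id, reason,
                       audit_file="recommendation_audit.jsonl"):
    """Append one recommendation decision to an inspectable audit log.

    Records what was recommended, to whom, and the stated reason
    (e.g., which engagement signal or group-similarity score drove it).
    """
    record = {
        "timestamp": time.time(),
        "user_id": user_id,
        "item_id": item_id,
        "reason": reason,
    }
    with open(audit_file, "a") as f:
        f.write(json.dumps(record) + "\n")


# Usage: every time the system surfaces a post or suggests a group,
# the decision and its rationale become part of a reviewable record.
log_recommendation(
    user_id="user_123",
    item_id="group_suggestion_42",
    reason="high engagement with pages on a related topic",
)
```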

To be clear: Creating the rules for how to govern online speech and define platforms’ responsibility is not a magic wand to fix the myriad harms emanating from the internet. This is one piece of a larger puzzle of things that will need to change if we want to foster a healthier information ecosystem. But if Facebook were obligated to be more transparent about how they are amplifying content, about how their targeting tools work, about how they use the data they collect on us, I believe that would change the game for the better.

As long as we continue to leave it to the platforms to self-regulate, they will continue to merely tinker around the margins of content policies and moderation. We’ve seen that the time for that is long past — what we need now is to reconsider how the entire machine is designed and monetized. Until that happens, we will never truly address how platforms are aiding and abetting those intent on harming our democracy.