Using the Legal System Against Facebook and Other Titans of the Internet

Two Democratic members of Congress are trying to stop big social media companies from doing so much damage:

Imagine clicking on a Facebook video alleging that a “deep-state cabal” of Satan-worshiping pedophiles stole the election from [a horrible person]. Moments later, your phone rings. The caller says, “Hey, it’s Freddie from Facebook. We noticed you just watched a cool video on our site, so we’ll send you a few dozen more videos about election-related conspiracy theories. As a bonus, we’ll put you in touch with some people who share your interest in ‘stopping the steal.’ You guys should connect and explore that interest together!”

The scenario is, of course, made up. But it basically captures what social media platforms do every day. In the real world, “Freddie from Facebook” is not a person who calls you, but an algorithm that tracks you online, learns what content you spend the most time with and feeds you more of whatever maximizes your engagement — the time you spend on the platform. Greater engagement means that users see more ads, earning Facebook more revenue.

If you like cat videos, great; you’ll get an endless supply. But the same is true for the darkest content on the Web. Human nature being what it is, the content most likely to keep us glued to our screens is that which confirms our prejudices and triggers our basest emotions. Social media algorithms don’t have a conservative or liberal bias, but they know if we do. Their bias is to reinforce ours at the cost of making us more angry, anxious and afraid.

Facebook recently played down the role of its algorithms in exploiting users’ susceptibilities and enabling radicalization. The company says that users, not its product, are largely responsible for the extreme content showing up in their news feeds.

But Facebook knows how powerful its algorithms can be. In 2016, an internal Facebook study found that 64 percent of people who joined an extremist group on the platform did so only because its algorithm recommended it. Recently, a member of the Wolverine Watchmen, the militia accused of trying to kidnap Michigan Gov. Gretchen Whitmer (D), said he joined the group when it “popped up as a suggestion post” on Facebook because he interacted with pages supporting the Second Amendment.

Policymakers often focus on whether Facebook, YouTube and Twitter should take down hate speech and disinformation. This is important, but these questions are about putting out fires. The deeper problem is that the product these companies make is flammable: their algorithms deliver to each of us what they think we want to hear, creating individually tailored realities for every American and often amplifying the very content the companies might later choose to take down.

In 1996, Congress passed Section 230 of the Communications Decency Act, which says that websites are not legally liable for content that users post (with some exceptions). While the law helped to enable the growth of the modern Internet economy, it was enacted 25 years ago when many of the challenges we currently face could not have been predicted. Large Internet platforms no longer function like community bulletin boards; instead, they use sophisticated, opaque algorithms to determine what content their users see. If companies such as Facebook push us to view certain posts or join certain groups, should they bear no responsibility if doing so leads to real-world violence?

We recently introduced a bill that would remove Section 230 protection from large social media companies if their algorithms amplify content that contributes to an act of terrorism or to a violation of civil rights statutes meant to combat extremist groups. Our bill would not force YouTube, Facebook or Twitter to censor or remove content. Instead, it would allow courts in cases involving extreme harm to consider victims’ arguments against the companies on the merits, as opposed to quickly tossing out lawsuits on Section 230 grounds as would happen today.

Liability would incentivize changes the companies know how to make. For example, last year Facebook tested a new system in which users rated posts on their news feeds as “good” or “bad” for the world. The algorithm then fed those users more content that they deemed good while demoting the bad. The experiment worked. The company’s engineers referred to the result as the “nicer news feed.” But there was one problem. The nicer news feed led to less time on Facebook (and thus less ad revenue), so the experiment died.

This is the fundamental issue: Engagement-based algorithms made social media giants some of the most lucrative companies on Earth. They won’t voluntarily change the underlying architecture of their networks if it threatens their bottom line. We must decide what’s more important: protecting their profits or our democracy.

Unquote.

The authors of the article are Rep. Tom Malinowski, who represents a traditionally Republican district in suburban New Jersey, and Rep. Anna Eshoo, who represents the part of California that includes Silicon Valley.
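The mechanism Malinowski and Eshoo describe, a ranker that maximizes predicted engagement versus one that demotes content users rate as “bad for the world,” can be pictured with a short, purely illustrative sketch. Everything below is hypothetical: the field names, the toy posts and the simple multiplicative weighting are assumptions made for illustration, not a description of Facebook’s actual systems.

```python
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    predicted_minutes: float  # a model's guess at how long this user will stay with the post
    good_for_world: float     # hypothetical user rating, 0.0 (bad for the world) to 1.0 (good)


def rank_by_engagement(posts):
    """Engagement-only ranking: surface whatever is predicted to keep the user longest."""
    return sorted(posts, key=lambda p: p.predicted_minutes, reverse=True)


def rank_nicer_feed(posts):
    """Toy version of the 'nicer news feed': weight engagement by the good-for-world
    rating, so highly engaging but low-quality posts are demoted."""
    return sorted(posts, key=lambda p: p.predicted_minutes * p.good_for_world, reverse=True)


if __name__ == "__main__":
    # In a real system, predicted_minutes would come from a model trained on the user's
    # past behavior; that feedback loop is what "Freddie from Facebook" personifies.
    feed = [
        Post("Cat video compilation", predicted_minutes=4.0, good_for_world=0.9),
        Post("Local news explainer", predicted_minutes=3.0, good_for_world=0.8),
        Post("Election conspiracy video", predicted_minutes=9.0, good_for_world=0.1),
        Post("Outrage-bait thread", predicted_minutes=7.0, good_for_world=0.2),
    ]

    top_slots = 2  # pretend the user only sees the first two items in the feed

    for label, ranker in (("engagement-only", rank_by_engagement),
                          ("nicer feed", rank_nicer_feed)):
        shown = ranker(feed)[:top_slots]
        minutes = sum(p.predicted_minutes for p in shown)
        titles = ", ".join(p.title for p in shown)
        print(f"{label:15s} -> [{titles}] (~{minutes:.0f} predicted minutes on platform)")
```

In this toy example, the engagement-only ranking fills the top slots with the conspiracy and outrage posts (about 16 predicted minutes), while the “nicer” weighting surfaces the benign posts at the cost of less predicted time on the platform. That is the trade-off the op-ed says Facebook declined to accept.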