The report found there is no trustworthy large-scale data to support these claims, and even the anecdotal examples offered as evidence that tech companies are biased against conservatives “crumble under close examination.” The report’s authors said, for instance, that the companies’ suspensions of [the ex-president’s] accounts were “reasonable” given his repeated violation of their terms of service — and if anything, the companies took a hands-off approach for a long time given [his] position.
The report also noted several data sets underscore the prominent place conservative influencers enjoy on social media. For instance, CrowdTangle data shows that right-leaning pages dominate the list of sources providing the most engaged-with posts containing links on Facebook. Conservative commentator Dan Bongino, for instance, far out-performed most major news organizations in the run-up to the 2020 election.
The report also cites an October 2020 study in which Politico found “right-wing social media influencers, conservative media outlets, and other GOP supporters” dominated the online discussion of Black Lives Matter and election fraud, two of the biggest issues in 2020. Working with the nonpartisan think tank Institute for Strategic Dialogue, researchers found users shared the most viral right-wing social media content about Black Lives Matter more than ten times as often as the most popular liberal posts on the topic. People also shared right-leaning claims on election fraud about twice as often as they shared liberals’ or traditional media outlets’ posts discussing the issue.
But even so, baseless claims of anti-conservative bias are driving Republicans’ approach to regulating tech. Republican lawmakers have concentrated their hearing exchanges with tech executives on the issue, and it’s been driving their legislative proposals. . . .
The New York University researchers called on Washington regulators to focus on what they called “the very real problems of social media.”
“Only by moving forward from these false claims can we begin to pursue that agenda in earnest,” Paul Barrett, the report’s primary author and deputy director of the NYU Stern Center for Business and Human Rights, said in a statement.
The researchers want the Biden administration to work with Congress to overhaul the tech industry.
Their recommendations focus particularly on changing Section 230, a decades-old law shielding tech companies from lawsuits for the photos, videos and posts people share on their websites. . . .
The researchers warn against completely repealing the law. Instead, they argue companies should only receive Section 230 immunity if they agree to accept more responsibilities in policing content such as disinformation and hate speech. The companies could be obligated to ensure their recommendation engines don’t favor sensationalist content or unreliable material just to drive better user engagement.
“Social media companies that reject these responsibilities would forfeit Section 230’s protection and open themselves to costly litigation,” the report proposed.
The researchers also called for the creation of a new Digital Regulatory Agency, which would serve as an independent body and be tasked with enforcing a revised Section 230.
The report also suggested Biden could empower a “special commission” to work with the industry on improving content moderation, an approach that could move much more quickly than legal battles over antitrust issues. It also called for the president to expand his announced task force on online harassment to address a broader range of harmful content.
They also called for greater transparency in Silicon Valley.
The researchers said the platforms typically don’t provide much justification for sanctioning an account or post, and when people are left in the dark, they assume the worst.
“The platforms should give an easily understood explanation every time they sanction a post or account, as well as a readily available means to appeal enforcement actions,” the report said. “Greater transparency—such as that which Twitter and Facebook offered when they took action against [a certain terrible person] in January—would help to defuse claims of political bias, while clarifying the boundaries of acceptable user conduct.”