One Way to Start Fixing the Internet

Yaël Eisenstat has been a CIA officer, White House adviser and Facebook executive. She says the problem with social media isn’t just what users post — it’s what the platforms do with that content. From Harvard Business Review:

While the blame for President Txxxx’s incitement to insurrection lies squarely with him, the biggest social media companies — most prominently my former employer, Facebook — are absolutely complicit. They have not only allowed Txxxx to lie and sow division for years, their business models have exploited our biases and weaknesses and abetted the growth of conspiracy-touting hate groups and outrage machines. They have done this without bearing any responsibility for how their products and business decisions affect our democracy; in this case, including allowing an insurrection to be planned and promoted on their platforms. . . .

The events of last week . . . demand an immediate response. In the absence of any U.S. laws to address social media’s responsibility to protect our democracy, we have ceded the decision-making about which rules to write, what to enforce, and how to steer our public square to CEOs of for-profit internet companies. Facebook intentionally and relentlessly scaled to dominate the global public square, yet it does not bear any of the responsibilities of traditional stewards of public goods, including the traditional media.

It is time to define responsibility and hold these companies accountable for how they aid and abet criminal activity. And it is time to listen to those who have shouted from the rooftops about these issues for years, as opposed to allowing Silicon Valley leaders to dictate the terms.

We need to change our approach not only because of the role these platforms have played in crises like last week’s, but also because of how CEOs have responded — or failed to respond. The reactionary decisions on which content to take down, which voices to downgrade, and which political ads to allow have amounted to tinkering around the margins of the bigger issue: a business model that rewards the loudest, most extreme voices.

Yet there does not seem to be the will to reckon with that problem. Mark Zuckerberg did not choose to block Txxxx’s account until after the U.S. Congress certified Joe Biden as the next president of the United States. . . . And while the decision by many platforms to silence Txxxx is an obvious response to this moment, it’s one that fails to address how millions of Americans have been drawn into conspiracy theories online and led to believe this election was stolen — an issue that has never been truly addressed by the social media leaders.

A look through the Twitter feed of Ashli Babbitt, the woman who was killed while storming the Capitol, is eye-opening. A 14-year Air Force veteran, she spent the last months of her life retweeting conspiracy theorists, QAnon followers, and others calling for the overthrow of the government. . . . The likelihood that social media played a significant part in steering her down the rabbit hole of conspiracy theories is high, but we will never truly know how her content was curated, what groups were recommended to her, whom the algorithms steered her towards.

If the public, or even a restricted oversight body, had access to the Twitter and Facebook data to answer those questions, it would be harder for the companies to claim they are neutral platforms that merely show people what they want to see. Guardian journalist Julia Carrie Wong wrote in June of this year about how Facebook algorithms kept recommending QAnon groups to her. . . . The key point is this: This is not about free speech and what individuals post on these platforms. It is about what the platforms choose to do with that content, which voices they decide to amplify, which groups are allowed to thrive and even grow with the platforms’ own algorithmic help.

So where do we go from here?

I have long advocated that governments must define responsibility for the real-world harms caused by these business models, and impose real costs for the damaging effects they are having on our public health, our public square, and our democracy. As it stands, there are no laws governing how social media companies treat political ads, hate speech, conspiracy theories, or incitement to violence. This issue is unduly complicated by Section 230 of the Communications Decency Act, which has been vastly over-interpreted to provide blanket immunity to all internet companies — or “internet intermediaries” — for any third-party content they host. Many argue that to solve some of these issues, Section 230, which dates back to 1996, must at least be updated. But how, and whether it alone will solve the myriad issues we now face with social media, is hotly debated.

One solution I continue to push is clarifying who should benefit from Section 230 to begin with, which often breaks down into the publisher vs. platform debate. To still categorize social media companies — who curate content, whose algorithms decide what speech to amplify, who nudge users towards the content that will keep them engaged, who connect users to hate groups, who recommend conspiracy theorists — as “internet intermediaries” who should enjoy immunity from the consequences of all this is beyond absurd. The notion that the few tech companies who steer how more than 2 billion people communicate, find information, and consume media enjoy the same blanket immunity as a truly neutral internet company makes it clear that it is time for an upgrade to the rules. They are not just a neutral intermediary.

However, that doesn’t mean that we need to completely re-write or kill Section 230. Instead, why not start with a narrower step by redefining what an “internet intermediary” means? Then we could create a more accurate category to reflect what these companies truly are, such as “digital curators” whose algorithms decide what content to boost, what to amplify, how to curate our content. And we can discuss how to regulate in an appropriate manner, focusing on requiring transparency and regulatory oversight of the tools such as recommendation engines, targeting tools, and algorithmic amplification rather than the non-starter of regulating actual speech.

By insisting on real transparency around what these recommendation engines are doing, how the curation, amplification, and targeting are happening, we could separate the idea that Facebook shouldn’t be responsible for what a user posts from their responsibility for how their own tools treat that content. I want us to hold the companies accountable not for the fact that someone posts misinformation or extreme rhetoric, but for how their recommendation engines spread it, how their algorithms steer people towards it, and how their tools are used to target people with it.

To be clear: Creating the rules for how to govern online speech and define platforms’ responsibility is not a magic wand to fix the myriad harms emanating from the internet. This is one piece of a larger puzzle of things that will need to change if we want to foster a healthier information ecosystem. But if Facebook were obligated to be more transparent about how they are amplifying content, about how their targeting tools work, about how they use the data they collect on us, I believe that would change the game for the better.

As long as we continue to leave it to the platforms to self-regulate, they will continue to merely tinker around the margins of content policies and moderation. We’ve seen that the time for that is long past — what we need now is to reconsider how the entire machine is designed and monetized. Until that happens, we will never truly address how platforms are aiding and abetting those intent on harming our democracy.

Twitter Banning Him May Be More Important Than We Thought

Paul Waldman of The Washington Post thinks we haven’t yet realized how important it is that Twitter banned the creep. That’s because journalists and others who dominate the national conversation use Twitter a lot:

The silence is remarkable.

For all that’s happening — President-elect Joe Biden’s inauguration, the threat of right-wing violence, the coronavirus death toll approaching 400,000 — the loudest voice in American life for the past five years has been reduced to a whisper. President Txxxx is not on Twitter.

On Jan. 8, Twitter’s leadership finally decided that it had had enough of Txxxx using the platform to spread lies and incite violence and barred him from the service. According to a new Post/ABC News poll, 58 percent of the public supported the move (though that includes 91 percent of Democrats and only 16 percent of Republicans).

But the magnitude of that decision still hasn’t been fully appreciated. The fact that this one social media company decided to shut down this one account might have completely reshaped American politics for the coming few years.

Until 10 days ago, nearly everyone assumed that Txxxx would be in a unique place for a defeated ex-president, retaining a hold on his party’s base that would make him the axis around which the Republican world revolved.

His opinions would shape the party’s approach to Biden’s presidency. He would make or break Republican officeholders, depending on their loyalty to him. Everyone within the party — especially those who want to run for president themselves in 2024 — would have to grovel before him, just as they have for so long. The GOP would still be Txxxx’s party, in nearly every sense.

But not anymore.

As much as we’ve talked about Txxxx’s tweets for all these years, if anything we might have underestimated how central Twitter was to his power. Without it — especially as an ex-president — he’ll be like Samson without his hair, all his strength taken from him.

Twitter was so important to Txxxx, according to Shannon McGregor, an assistant professor at the . . . University of North Carolina, because of a few critical features of the platform itself and who uses it.

First, “Twitter is the space for political and media elites,” McGregor told me. Facebook has many more users, but journalists are on Twitter constantly, which means that when Txxxx spoke there, he was speaking to them.

So even if Facebook lets Txxxx back on (it, too, banned him, but so far only through the inauguration), that won’t give him the ability to send a missive and then sit back as one news organization after another runs stories about it, multiplying its effects. “Whatever he said on Twitter ended up on the news,” McGregor said. According to research McGregor conducted . . ., when President Barack Obama tweeted during his second term, 3 percent of the time the tweet would find its way into a news story. The figure for Txxxx’s tweets during his term was 65 percent.

Second, the platform provided him a place to speak uncontested. He could say whatever he wanted without being challenged, at least in the moment.

Third, his Twitter presence enabled him to constantly reinforce an affinity between himself and his supporters by speaking to them not only about politics but also about plenty of other topics.

Txxxx connected with them “because he was so genuinely himself, for better and for worse, on Twitter,” McGregor told me. They identified with his opinions about everything, whether it was House Speaker Nancy Pelosi or the merits of KFC or the latest celebrity scandal.

“That’s the reason influencers of all stripes are successful, because of that sense of intimacy” that social media can create, McGregor said. “[Twitter] allowed people to make that connection between him and themselves” as they responded to the news together.

When he’s not president, Txxxx will have means of speaking to the public — he can call in to “Fox & Friends,” for instance — but he won’t be surrounded by reporters waiting to write down his every word, so he’ll have to work harder to get the attention of the press. Without Twitter, he won’t be able to speak to his people on an hourly basis, maintaining that affinity and crowding out the other Republicans who might compete for their affection.

He could go to some upstart conservative social media platform, like Gab or Parler (if it gets restored). But those don’t have the mainstream legitimacy he craves, and reporters aren’t on them, so their reach is much more limited.

That means that when new events occur, Txxxx won’t be able to make himself the core of the story. He won’t be able to constantly remind Republicans that they need to fear him. While many of his supporters will remain loyal, others will drift away, not turning against him but just no longer thinking about him every day.

That will create a vacuum into which other Republicans can move as they position themselves for 2024, not because they’re such Twitter ninjas themselves, but because space will have been created for something more like a normal, non-Txxxx presidential nominating contest.

There are profound questions about the role social media now plays in our political process. I agree both with those who argue that Twitter banning Txxxx was long overdue (his account was the single most important nexus of misinformation on the entire platform) and that it’s deeply troubling that a private company has this much power.

But, for now, it does. And so one company’s decision to finally say no to a president who used it to inject poison into the American political bloodstream for years has remade the future of the Republican Party, and perhaps the whole country.

Txxxx will still play a role in his party and in our politics; we won’t shake off this horrific presidency so easily. But that blissful quiet, as we no longer have him shouting in our ears every day? We could get used to that.

Reports from the Dystopian Disinformation Beat

Ben Collins is a reporter for NBC News. He says he works the “dystopian beat.” By that, he means he follows the crazies, I assume mostly the radical right. This afternoon, he shared some of what he’s found:

Over the last few years, I kept in touch with some QAnon supporters through DMs [Twitter direct messages], checking in on them to see if they’d ever come out of it when their next doomsday came and went.

They’d typically first message me calling me a Satanic pedophile. I’d ignore it and ask questions.

Usually they would draw hard lines. A big one was D5, which everyone thought would be mass arrests on December 5th two years ago. Didn’t happen, didn’t matter.

It’s about belief, anticipation, an advent calendar. One day soon, their problems would be fixed.

I would check in the week after the failed doomsdays. They’d point to a Q post like scripture, and say some ridiculous event proved it was still happening. An earthquake somewhere, a service interruption on Gmail.

I learned something: these people don’t want to be humiliated.

So many Q people have staked their entire identities on this. There are no real-life happy endings with QAnon, especially for true believers. Just constant embarrassment and almost surgical extrication from friends or family.

So they retreat back to Q forums and pray for executions [presumably the executions of “deep state” enemies that Q has long promised].

There are a lot of QAnon influencers saying the 20th is their last stand, that if Biden is inaugurated they’ll admit they’ve been conned. But they won’t. They’ll equivocate and buck-pass. They’ll find secret patterns in his speech and say he was secretly arrested [what???]. It’ll continue.

QAnon is a deeply pathetic and embarrassing thing to believe. For believers, there is safety from that embarrassment in increasingly volatile and toxic online communities. Getting people out of it safely is going to be very hard, but important.

I’d reach back out to some of those Q people, but they’re banned from this site now.

They grew to like me. I wasn’t a Satanic, blood-drinking pedophile . . . they wanted to save me.

Because, remember, they think they’re the good guys.

Unquote. Meanwhile:

Online misinformation about election fraud plunged 73 percent after several social media sites suspended President Txxxx and key allies last week, research firm Zignal Labs has found, underscoring the power of tech companies to limit the falsehoods poisoning public debate when they act aggressively.

The new research by the San Francisco-based analytics firm reported that conversations about election fraud dropped from 2.5 million mentions to 688,000 mentions across several social media sites in the week after Txxxx was banned from Twitter. . . . 
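
[A quick check of the arithmetic: a drop from 2.5 million mentions to 688,000 is (2,500,000 − 688,000) ÷ 2,500,000, or roughly 72.5 percent, consistent with the reported 73 percent.]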

The research by Zignal and other groups suggests that a powerful, integrated disinformation ecosystem — composed of high-profile influencers, rank-and-file followers and Txxxx himself — was central to pushing millions of Americans to reject the election results and may have trouble surviving without his social media accounts.

Researchers have found that Txxxx’s tweets were retweeted by supporters at a remarkable rate, no matter the subject, giving him a virtually unmatched ability to shape conversation online. . . . [The] disinformation researchers consistently have found that relatively few accounts acted as “superspreaders” during the election, with their tweets and posts generating a disproportionate share of the falsehoods and misleading narratives that spread about election fraud, mail-in ballots and other topics related to the vote [The Washington Post].

Escaping Facebook Hell

Charlie Warzel of The New York Times monitored two average people’s Facebook feeds. It was as bad as you’d expect, but his article suggests solutions to the problem:

In mid-October I asked two people I’d never met to give me their Facebook account passwords for three weeks leading up to and after Election Day. I wanted to immerse myself in the feeds of a type of person who has become a trope of sorts in our national discussion about politics and disinformation: baby boomers with an attachment to polarizing social media.

I went looking for older Americans — not full-blown conspiracy theorists, trolls or partisan activists — whose news consumption has increased sharply in the last few years on Facebook. Neither of the two people I settled on described themselves as partisans. Both used to identify as conservatives slowly drifting leftward until Dxxxx Txxxx’s takeover of the Republican Party offered a final push. Both voted for Joe Biden this year in part because of his promise to reach across the aisle. Both bemoaned the toxicity of our current politics.

Every day, Jim Young, 62, opens up his Facebook app and heads into an information hellscape. His news feed is a dizzying mix of mundane middle-class American life and high-octane propaganda.

Here’s a sample:

A set of adoring grandparents posing with rosy-cheeked babies. “Mimi and Pop Pop’s first visit since March,” the post reads.

Next, a meme of Joe Biden next to a photoshopped “for sale” sign. “For more information contact Hunter,” the sign reads.

After that is a post advertising a “Funny rude” metal sign displaying a unicorn in a tutu giving the middle finger. “Thought of you,” the post reads.

Below that is a screenshot of a meme created by the pro-Txxxx group Turning Point USA. “Your city on socialism,” the post reads, displaying a series of photos of abandoned buildings, empty grocery store shelves and bleeding men in makeshift, dirty hospital beds.

The feed goes on like this — an infinite scroll of content without context. Touching family moments are interspersed with Bible quotes that look like Hallmark cards, hyperpartisan fearmongering and conspiratorial misinformation. Mr. Young’s news feed is, in a word, a nightmare. I know because I spent the last three weeks living inside it.

Despite Facebook’s reputation as a leading source for conspiracy theories and misinformation, what goes on in most average Americans’ news feeds is nearly impossible for outsiders to observe. . . .

After years of reading about the ways that Facebook is radicalizing and polarizing people I wanted to see it for myself — not in the aggregate, but up close and over time. What I observed is a platform that gathered our past and present friendships, colleagues, acquaintances and hobbies and slowly turned them into primary news sources. And made us miserable in the process. . . .

Mr. Young joined Facebook in 2008 as a way to reconnect with his high school classmates from Illinois. He reunited quickly with old friends and neighbors. It was exciting to see how people had changed. . . .

It was a little voyeuristic, nostalgic and harmless fun. Before 2016, Mr. Young told me, he’d see the occasional heated disagreement. It wasn’t until the last few years that his feed really started to turn divisive.

He first noticed it in the comments, where discussions that would usually end in some version of “agree to disagree” exploded into drawn-out, conspiratorial comment threads. Political disagreements started to read like dispatches from an alternate reality. He didn’t enjoy fact-checking his friends or picking fights, but when a post appeared obviously untrue he had to say something.

His time on the site ticked upward.

“It’s like going by a car wreck. You don’t want to look, but you have to,” he said. He believes his feed is a perfect storm for conflict in part because he’s lived in both liberal and conservative areas of the country and throughout his life he’s lived, worked with and befriended all manner of liberals and conservatives. . . .

But then he noticed some of his friends start to post more political memes, often with no link or citation. When he’d try to verify one, he’d realize the post was fake or debunked by a news site. “Most times there’s no real debate. Just anger. They’re so closed-minded. Sometimes, it scares me.”

Scrolling through Mr. Young’s feed after Election Day, I found a number of these posts.

Many examples of misinformation came from Facebook text posts created and shared by Mr. Young’s friends repeating baseless voter-fraud claims, [for example, one claiming] the number of votes in Wisconsin exceeded the number of registered voters (with no links to these numbers or any authoritative news source).

On Nov. 5, one of Mr. Young’s friends posted about “something fishy” alongside a link to a Bing search. The link returned a page of information about voters having ballots thrown out after using Sharpies to fill them out, including a link to a Facebook post on #Sharpiegate with over 137,000 shares.

One featured a screenshot from a Fox 2 Detroit news broadcast with the banner “Detroit Voter Roll Lawsuit.” The screenshot alleged potential voter fraud. “And so it begins!” the friend wrote. According to a Snopes debunk, the segment actually aired in December 2019 and had nothing to do with the 2020 election.

Another text post suggested that people shouldn’t trust Facebook’s fact checkers. “When the fact checkers are controlled by the same people doing the lying, what do you call it?” the post read. Below, commenters sounded off. “Democrats,” one exclaimed. . . .

Mr. Young’s feed stood in stark contrast to the other Facebook account I spent time in. That feed belongs to Karen Pierce, a 55-year-old schoolteacher from Virginia. Ms. Pierce described herself to me as a “middle-child peacekeeper who is uncomfortable with politics.”

Unlike Mr. Young, she is not politically active on Facebook and never intervenes, even when she sees things she thinks might be conspiratorial or fake. As a result, her feed surfaced less politically charged content. The day after the election, the first post I noticed from a friend in her feed was a simple, apolitical exclamation: “It’s official! I make a damn good pot of stew!”

The political posts that appeared in Ms. Pierce’s feed were mostly anodyne statements of support for the Biden-Harris campaign peppered in between comments from fellow teachers frustrated by remote learning and an avalanche of cute dog photos and memes. Occasionally, a meme popped up mentioning Hunter Biden’s laptop, but most lacked the vitriol or the contentious commenter debates of Mr. Young’s feed.

Yet, in my conversations with Ms. Pierce over the last month, she expressed just as much frustration with her experience on Facebook as Mr. Young. “It’s so extreme,” she told me in mid-October. “I’ve watched people go from debating the issue to coming up with the craziest thing they can say to get attention. Take the whole anti-abortion debate. People started talking, then started saying ‘if you vote for Biden you’re a murderer.’ Now there’s people posting graphic pictures of fetuses.”

When I told her I hadn’t seen anything that extreme on her page, she suggested it was because of a three-month break she took from the platform this summer. “It got to be too much with the pandemic and the politics,” she said. The final straw was seeing people in her feed post QAnon-adjacent memes and content. “There was a lot of calling Biden a pedophile. Or Txxxx voters posting pictures with assault rifles. It made me very uncomfortable.”

Like millions of Americans, Ms. Pierce logs onto Facebook to feel more connected. “I use it to see how people are doing,” she said. “I believe in prayer and sometimes I check to see who is struggling and to see who to pray for. And then, of course, you see some news and read some articles.”

It was when she was using the platform for news that she started seeing disturbing conspiracy posts from people in her network. “It was so disappointing to realize the hate that’s out there,” she said. . . .

She’s worried about the long-term effects of such a toxic environment. “I think it’s affecting the mood of everybody.”

Living inside the Facebook account of strangers — even with their permission — feels invasive, like poking around in their medicine cabinet. But it offered me a unique perspective. Two things stood out. The first is the problem of comments, where strangers, even in the most mundane of articles, launched into intense, acrimonious infighting. In most cases, commenters bypassed argumentation for convenient name-calling or escalated a civil discussion by posting contextless claims with no links or source. In many cases, it appeared that a post from one user would get shared by a friend into his or her network, where it would [attract] strangers.

The more I scrolled through them, the more comments felt like a central and intractable issue. Unlike links to outside articles, comments aren’t subject to third-party fact checks or outside moderation. They are largely invisible to those people who study or attempt to police the platform.

Yet in my experience they were a primary source of debunked claims, harassment and divisive rhetoric. I showed one comment thread to a colleague who doesn’t use Facebook, and they found it shocking. “Facebook created a town hall for fighting,” they said. “It’s almost like if you were building a machine to make a country divisive and extreme — if you were to sit down and plan what that would look like — it would be this.”

[Facebook’s] evolution, from a friendly social networking site into the world’s largest information platform, is the source of its biggest problems.

Sifting through Mr. Young and Ms. Pierce’s feeds and talking to them about what I saw, it became clear that the two found themselves tormented as a result of decisions they made in their early days on the platform. Both explained that they joined to reconnect with old friends.

Like most of us, they gave little thought to the connections they made. Mr. Young added friends he hadn’t spoken to in decades. When Ms. Pierce joined a nonprofit organization she accepted dozens of friend requests — some from people she’d met only in passing. “I meet people on airplanes all the time and we exchange Facebook handles,” she told me.

But as Facebook evolved, these weak connections became unlikely information nodes. Mr. Young and Ms. Pierce were now getting their commentary from people they hardly knew, whose politics had once been unknown or illegible.

“When Facebook first started it made me feel so good. It feels like I signed up for one thing and it’s become something totally different,” Ms. Pierce said. . . .

Joan Donovan, the research director of the Shorenstein Center on Media, Politics and Public Policy . . . , described this phenomenon as what happens when “social-networking sites transformed into social media,” creating “a digital economy built on engagement.” Dr. Donovan argues that this decision spawned the algorithmic echo chambers we now live in and created a fertile environment for our information crisis.

For Mr. Young, the fallout of these decisions is painful. After weeks of watching his feed, I presented him with some of the most notorious posters in his feed. When I read aloud the name of one Facebook friend who constantly shared debunked claims, often with language intended to provoke, he sighed. He described the person as a longtime friend and neighbor who was once so close they practically lived at each other’s houses. Now, he spends his time debating whether it’s worth the energy to stop the friend from sharing conspiracy theories. . . .

The psychological toll of watching friends lose touch with reality has both Mr. Young and Ms. Pierce re-evaluating their choice to spend so much time on the platform. Mr. Young, for his part, tried to stay off during election week; Ms. Pierce is heartened that her feed has become less toxic after her Facebook sabbatical and is planning another. “My emotional and mental state improves greatly the further away I get from this place,” she told me.

Even if both manage to stay away from Facebook for good, their stories are just two in a sea of billions. No story is the same because no feed is the same. And yet these same dynamics that tortured my two participants — a sea of contextless news and acrimonious comments revealing their neighbors’ worst selves — are on display for millions of Americans every day. . . .

Unquote.

So what can be done?

  1. CLOSE YOUR FACEBOOK ACCOUNT. IT’S THE EASIEST AND MOST EFFECTIVE SOLUTION.
  2. UNFOLLOW EVERYONE YOU AREN’T CLOSE TO OR WHO SENDS YOU CRAP.
  3. DON’T READ THE COMMENTS, UNLESS THE SUBJECT IS CATS OR DOGS.

One thing Facebook could do is close the accounts of the people whose lies are shared the most. Researchers have found that a small group of social media accounts is responsible for the spread of a disproportionate amount of false information [New York Times].

But since Facebook has no morality and Republicans revel in the lying, see the list above, especially item 1.

Truth and the American Way

From Thomas Friedman of The New York Times. He leaves out a big issue:

You remember the old joke? Moses comes down from Mount Sinai and tells the children of Israel: “Children, I have good news and bad news. The good news is that I bargained him down to 10. The bad news is that adultery is still in.”

Well, I’ve got bad news and worse news: We’re now down to nine.

Yes, this was a historic four years — even one of the Ten Commandments got erased. Lying has been normalized at a scale we’ve never seen before. . . .

I am not sure how we reverse it, but we’d better — and fast.

People who do not share truths can’t defeat a pandemic, can’t defend the Constitution and can’t turn the page after a bad leader. The war for truth is now the war to preserve our democracy.

It is impossible to maintain a free society when leaders and news purveyors feel at liberty to spread lies without sanction. Without truth there is no agreed-upon path forward, and without trust there is no way to go down that path together.

But our hole now is so deep, because the only commandment President Txxxx did believe in was the Eleventh: “Thou shalt not get caught.”

Lately, though, Txxxx and many around him stopped believing even in that — they don’t seem to care about being caught.

They know, as the saying goes, that their lies are already halfway around the world before the truth has laced up its shoes. That’s all they care about. Just pollute the world with falsehoods and then no one will know what is true. Then you’re home free.

The truth binds you, and Txxxx never wanted to be bound — not in what he could ask of the president of Ukraine or say about the coronavirus or about the integrity of our election.

And it nearly worked. Txxxx proved over five years that you could lie multiple times a day — multiple times a minute — and not just win election but almost win re-election.

We have to ensure that the likes of him never again appear in American politics.

Because Txxxx not only liberated himself from truth, he liberated others to tell their lies or spread his — and reap the benefits. His party’s elders did not care, as long as he kept the base energized and voting red. Fox News didn’t care, as long as he kept its viewers glued to the channel and its ratings high. Major social networks only barely cared, as long as he kept their users online and their numbers growing. Many of his voters — even evangelicals — did not care, as long as he appointed anti-abortion judges. They are “pro-life,” but not always pro-truth. . . .

Israeli Bedouin expert Clinton Bailey tells the story about a Bedouin chief who discovered one day that his favorite turkey had been stolen. He called his sons together and told them: “Boys, we are in great danger now. My turkey’s been stolen. Find my turkey.” His boys just laughed and said, “Father, what do you need that turkey for?” and they ignored him.

Then a few weeks later his camel was stolen. And the chief told his sons, “Find my turkey.” A few weeks later the chief’s horse was stolen. His sons shrugged, and the chief repeated, “Find my turkey.”

Finally, a few weeks later his daughter was abducted, at which point he gathered his sons and declared: “It’s all because of the turkey! When they saw that they could take my turkey, we lost everything.”

And do you know what our turkey was? Birtherism.

When Txxxx was allowed to spread the “birther” lie for years — that Barack Obama, who was born in Hawaii, was actually born in Kenya and was therefore ineligible to be president — he realized he could get away with anything.

Sure, Txxxx eventually gave that one up, but once he saw how easily he could steal our turkey — the truth — he just kept doing it, until he stole the soul of the Republican Party.

And, had he been re-elected, he would have stolen the soul of this nation.

He and his collaborators are now making one last bid to use the Big Lie to destroy our democracy by delegitimizing one of its greatest moments ever — when a record number of citizens came out to vote, and their votes were legitimately counted, amid a deadly and growing pandemic.

It is so corrupt what Txxxx and his allies are doing, so dangerous to our constitutional system, but you weep even more for how many of their followers have bought into it.

“Lies don’t work unless they’re believed, and nearly half the American public has proved remarkably gullible,” my former . . . colleague David K. Shipler, who served in our Moscow bureau during the Cold War, said to me. “I think of each of us as having our own alarm — and it’s as if half of their batteries have died. Lots of Txxxx’s lies, and his retweets of conspiracy fabrications, are obviously absurd. Why have so many people believed them? I’m not sure it’s fully understood.”

That is why it’s vital that every reputable news organization — especially television, Facebook and Twitter — adopt what I call the Txxxx Rule. If any official utters an obvious falsehood or fact-free allegation, the interview should be immediately terminated, just as many networks did with Txxxx’s lie-infested postelection news conference last week. If critics scream “censorship,” just shout back “truth.”

This must become the new normal. Politicians need to be terrified every time they go on TV that the plug will be pulled on them if they lie.

At the same time, we need to require every K-12 school in America to include digital civics — how to determine and cross-check whether something you read on the internet is true — in its curriculum. You should not be able to graduate without it.

We need to restore the stigma to lying and liars before it is too late. We need to hunt for truth, fight for truth and mercilessly discredit the forces of disinformation. It is the freedom battle of our generation.

Unquote.

It’s not very surprising, but a crucial issue Mr. Friedman left out is the tendency of reality-based journalists, including those who work for the nation’s best newspapers, to strive for impartiality and balance even when dealing with liars, and to report lies as if they were simply controversial opinions.