The Data Priests

On June 15, Matthew Crawford of The New Atlantis testified at a hearing on smart home technology held by the U.S. Senate Judiciary Committee, Subcommittee on Antitrust, Competition Policy & Consumer Rights. This is from his opening statement:

I have no expertise in antitrust. I come to you as a student of the history of political thought.

The convenience of the smart home may be worth the price; that’s for each of us to decide. But to do so with open eyes, one has to understand what the price is. After all, you don’t pay a monthly fee for Alexa, or Google Assistant.

The Sleep Number bed is typical of smart home devices, as Harvard professor Shoshana Zuboff describes in The Age of Surveillance Capitalism. It comes with an app, of course, which you’ll need to install to get the full benefits. Benefits for whom? Well, to know that you would need to spend some time with the sixteen-page privacy policy that comes with the bed. There you’ll read about third-party sharing, analytics partners, targeted advertising, and much else.

Meanwhile, the user agreement specifies that the company can share or exploit your personal information even “after you deactivate or cancel” your Sleep Number account. You are unilaterally informed that the firm does not honor “Do Not Track” notifications. By the way, its privacy policy once stated that the bed would also transmit “audio in your room.” (I am not making this up.)

The business rationale for the smart home is to bring the intimate patterns of life into the fold of the surveillance economy, which has a one-way mirror quality. Increasingly, every aspect of our lives — our voices, our facial expressions, our political affiliations and intellectual predilections — is laid bare as data to be collected by companies that, for their own part, guard with military-grade secrecy the algorithms by which they use this information to determine the world that is presented to us, for example when we enter a search term, or in our news feeds. They are also in a position to determine our standing in the reputational economy. The credit rating agencies and insurance companies would like to know us more intimately; I suppose Alexa can help with that.

Allow me to offer a point of reference that comes from outside the tech debates, but can be brought to bear on them. Conservative legal scholars have long criticized a shift of power from Congress to the administrative state, which seeks to bypass legislation and rule by executive fiat, through administrative rulings. The appeal of this move is that it saves one the effort of persuading others, that is, the inconvenience of democratic politics.

All of the arguments that conservatives make about the administrative state apply as well to this new thing, call it algorithmic governance, that operates through artificial intelligence developed in the private sector. It too is a form of power that is not required to give an account of itself, and is therefore insulated from democratic pressures.

In machine learning, an array of variables is fed into deeply layered “neural nets” that simulate the binary, fire/don’t-fire synaptic connections of an animal brain. Vast amounts of data are used in a massively iterated (and, in some versions, unsupervised) training regimen. Because the strength of connections between logical nodes is highly plastic, just like neural pathways, the machine gets trained by trial and error and is able to arrive at something resembling knowledge of the world. The logic by which an AI reaches its conclusions is impossible to reconstruct even for those who built the underlying algorithms. We need to consider the significance of this in the light of our political traditions.
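[An aside from me, not from the testimony: the trial-and-error training Crawford describes can be made concrete with a toy example. The sketch below, in Python, uses an invented task, network size, and learning rate; it trains a tiny two-layer network by repeatedly nudging its connection strengths, and the “knowledge” it ends up with is just arrays of numbers that explain nothing about why the answers come out as they do.]

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy task: learn XOR, which a single layer of connections cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Connection strengths ("weights") start out random and are adjusted by training.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass: inputs flow through layers of weighted, fire/don't-fire-style nodes.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: the error signal nudges every connection strength a little.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0)

print(np.round(output, 2))  # typically close to [[0], [1], [1], [0]] after training
print(W1)                   # the learned "logic": an opaque grid of numbers
```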

When a court issues a decision, the judge writes an opinion in which he explains his reasoning. He grounds the decision in law, precedent, common sense, and principles that he feels obliged to articulate and defend. This is what transforms the decision from mere fiat into something that is politically legitimate, capable of securing the assent of a free people. It makes the difference between simple power and authority. One distinguishing feature of a modern, liberal society is that authority is supposed to have this rational quality to it — rather than appealing to, say, a special talent for priestly divination. This is our Enlightenment inheritance. It appears to be in a fragile state. With the inscrutable arcana of data science, a new priesthood peers into a hidden layer of reality that is revealed only by a self-taught AI program — the logic of which is beyond human knowing.

The feeling that one is ruled by a class of experts who cannot be addressed, who cannot be held to account, has surely contributed to populist anger. From the perspective of ordinary citizens, the usual distinction between government and “the private sector” starts to sound like a joke, given how the tech firms order our lives in far-reaching ways.

Google, Facebook, Twitter, and Amazon have established portals that people feel they have to pass through to conduct the business of life, and to participate in the common life of the nation. Such bottlenecks are a natural consequence of “the network effect.” It was early innovations that allowed these firms to take up their positions. But it is not innovation that accounts for the unprecedented rents they are able to collect; it is these established positions, and the ongoing control of the data they allow the firms to gather, as in a classic infrastructure monopoly. If those profits measure anything at all, it is the reach of a grid of surveillance that continues to spread and deepen. It is this grid’s basic lack of intelligibility that renders it politically unaccountable. Yet accountability is the very essence of representative government.

Mr. Zuckerberg has said frankly that “In a lot of ways Facebook is more like a government than a traditional company.” If we take the man at his word, it would seem to raise the question: Can the United States government tolerate the existence of a rival government within its territory?

In 1776, we answered that question with a resounding “No!” and then fought a revolutionary war to make it so. The slogan of that war was “Don’t tread on me.” This spirited insistence on self-rule expresses the psychic core of republicanism. As Senator Klobuchar points out in her book Antitrust, the slogan was directed in particular at the British Crown’s grant of monopoly charters to corporations that controlled trade with the colonies. Today, the platform firms appear to many as an imperial power. The fundamental question “Who rules?” is pressed upon this body once again.

Using the Legal System Against Facebook and Other Titans of the Internet

Two Democratic members of Congress are trying to stop big social media companies from doing so much damage:

Imagine clicking on a Facebook video alleging that a “deep-state cabal” of Satan-worshiping pedophiles stole the election from [a horrible person]. Moments later, your phone rings. The caller says, “Hey, it’s Freddie from Facebook. We noticed you just watched a cool video on our site, so we’ll send you a few dozen more videos about election-related conspiracy theories. As a bonus, we’ll connect you to some people who share your interest in ‘stopping the steal’. You guys should connect and explore your interest together!”

The scenario is, of course, made up. But it basically captures what social media platforms do every day. In the real world, “Freddie from Facebook” is not a person who calls you, but an algorithm that tracks you online, learns what content you spend the most time with and feeds you more of whatever maximizes your engagement — the time you spend on the platform. Greater engagement means that users see more ads, earning Facebook more revenue.
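[A quick aside of my own, not the congressmembers’: stripped to its core, “Freddie from Facebook” is a ranking function. The sketch below uses made-up topics, numbers, and field names, and is nothing like the real system in scale, but it shows the mechanic: sort candidate posts by predicted engagement, with past time spent as the predictor, and whatever already held your attention floats to the top.]

```python
from collections import Counter

# Hypothetical viewing log: seconds this user has spent on posts, by topic.
watch_time = Counter({"cats": 40, "cooking": 15, "election conspiracy": 95})

# Hypothetical pool of posts the feed could show next.
candidates = [
    {"id": 1, "topic": "cats"},
    {"id": 2, "topic": "election conspiracy"},
    {"id": 3, "topic": "cooking"},
    {"id": 4, "topic": "election conspiracy"},
]

def predicted_engagement(post):
    # Crude proxy: assume future engagement mirrors past time spent on the topic.
    return watch_time[post["topic"]]

# Rank purely by predicted engagement -- nothing here asks whether the
# content is true, healthy, or harmful.
feed = sorted(candidates, key=predicted_engagement, reverse=True)
print([p["topic"] for p in feed])
# ['election conspiracy', 'election conspiracy', 'cats', 'cooking']
```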

If you like cat videos, great; you’ll get an endless supply. But the same is true for the darkest content on the Web. Human nature being what it is, the content most likely to keep us glued to our screens is that which confirms our prejudices and triggers our basest emotions. Social media algorithms don’t have a conservative or liberal bias, but they know if we do. Their bias is to reinforce ours at the cost of making us more angry, anxious and afraid.

Facebook recently played down the role of its algorithms in exploiting users’ susceptibilities and enabling radicalization. The company says that users, not its product, are largely responsible for the extreme content showing up in their news feeds.

But Facebook knows how powerful its algorithms can be. In 2016, an internal Facebook study found that 64 percent of people who joined an extremist group on the platform did so only because its algorithm recommended it. Recently, a member of the Wolverine Watchmen, the militia accused of trying to kidnap Michigan Gov. Gretchen Whitmer (D), said he joined the group when it “popped up as a suggestion post” on Facebook because he interacted with pages supporting the Second Amendment.

Policymakers often focus on whether Facebook, YouTube and Twitter should take down hate speech and disinformation. This is important, but these questions are about putting out fires. The problem is that the product these companies make is flammable. It’s that their algorithms deliver to each of us what they think we want to hear, creating individually tailored realities for every American and often amplifying the same content they eventually might choose to take down.

In 1996, Congress passed Section 230 of the Communications Decency Act, which says that websites are not legally liable for content that users post (with some exceptions). While the law helped to enable the growth of the modern Internet economy, it was enacted 25 years ago when many of the challenges we currently face could not have been predicted. Large Internet platforms no longer function like community bulletin boards; instead, they use sophisticated, opaque algorithms to determine what content their users see. If companies such as Facebook push us to view certain posts or join certain groups, should they bear no responsibility if doing so leads to real-world violence?

We recently introduced a bill that would remove Section 230 protection from large social media companies if their algorithms amplify content that contributes to an act of terrorism or to a violation of civil rights statutes meant to combat extremist groups. Our bill would not force YouTube, Facebook or Twitter to censor or remove content. Instead, it would allow courts in cases involving extreme harm to consider victims’ arguments against the companies on the merits, as opposed to quickly tossing out lawsuits on Section 230 grounds as would happen today.

Liability would incentivize changes the companies know how to make. For example, last year Facebook tested a new system in which users rated posts on their news feeds as “good” or “bad” for the world. The algorithm then fed those users more content that they deemed good while demoting the bad. The experiment worked. The company’s engineers referred to the result as the “nicer news feed.” But there was one problem. The nicer news feed led to less time on Facebook (and thus less ad revenue), so the experiment died.
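[Another aside from me, with invented numbers rather than Facebook’s internal data: the trade-off described above can be captured in a toy re-ranker. Demoting posts users rate as “bad for the world” does produce a nicer feed, and it also lowers the predicted time spent at the top of it, which is the revenue problem the authors point to.]

```python
# Hypothetical posts with an engagement prediction and a crowd-sourced
# "good for the world" rating (all numbers invented for illustration).
posts = [
    {"topic": "outrage bait", "engagement": 90, "good_for_world": False},
    {"topic": "cat video",    "engagement": 40, "good_for_world": True},
    {"topic": "local news",   "engagement": 25, "good_for_world": True},
]

def rank(posts, demote_bad):
    def score(p):
        s = p["engagement"]
        if demote_bad and not p["good_for_world"]:
            s *= 0.2  # hypothetical demotion factor
        return s
    return sorted(posts, key=score, reverse=True)

for demote_bad in (False, True):
    feed = rank(posts, demote_bad)
    # Pretend users mostly see the top two slots; their raw engagement scores
    # stand in for time spent on the platform (and therefore ad revenue).
    time_spent = sum(p["engagement"] for p in feed[:2])
    label = "nicer feed:" if demote_bad else "engagement feed:"
    print(label, [p["topic"] for p in feed], "predicted time:", time_spent)
# engagement feed: 130 vs. nicer feed: 65 -- the "nicer news feed" works,
# but it costs attention, which is why the experiment died.
```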

This is the fundamental issue: Engagement-based algorithms made social media giants some of the most lucrative companies on Earth. They won’t voluntarily change the underlying architecture of their networks if it threatens their bottom line. We must decide what’s more important: protecting their profits or our democracy.

Unquote.

The authors of the article are Rep. Tom Malinowski, who represents a traditionally Republican district in suburban New Jersey, and Rep. Anna Eshoo, who represents the part of California that includes Silicon Valley.

Keep This in Mind When You Hear the Right Claim They’re Censored on Social Media

It’s bullshit. From The Washington Post:

A new report calls conservative claims of social media censorship “a form of disinformation”.

[The] report concludes that social networks aren’t systematically biased against conservatives, directly contradicting Republican claims that social media companies are censoring them. 

Recent moves by Twitter and Facebook to suspend [the former president’s] accounts in the wake of the violence at the Capitol are inflaming conservatives’ attacks on Silicon Valley. But New York University researchers today released a report stating claims of anti-conservative bias are “a form of disinformation: a falsehood with no reliable evidence to support it.” 

The report found there is no trustworthy large-scale data to support these claims, and even anecdotal examples that tech companies are biased against conservatives “crumble under close examination.” The report’s authors said, for instance, the companies’ suspensions of [the ex-president’s] accounts were “reasonable” given his repeated violation of their terms of service — and if anything, the companies took a hands-off approach for a long time given [his] position.

The report also noted several data sets underscore the prominent place conservative influencers enjoy on social media. For instance, CrowdTangle data shows that right-leaning pages dominate the list of sources providing the most engaged-with posts containing links on Facebook. Conservative commentator Dan Bongino, for instance, far out-performed most major news organizations in the run-up to the 2020 election. 

The report also cites an October 2020 study in which Politico found “right-wing social media influencers, conservative media outlets, and other GOP supporters” dominated the online discussion of Black Lives Matter and election fraud, two of the biggest issues in 2020. Working with the nonpartisan think tank Institute for Strategic Dialogue, researchers found users shared the most viral right-wing social media content about Black Lives Matter more than ten times as often as the most popular liberal posts on the topic. People also shared right-leaning claims on election fraud about twice as often as they shared liberals’ or traditional media outlets’ posts discussing the issue.

But even so, baseless claims of anti-conservative bias are driving Republicans’ approach to regulating tech. Republican lawmakers have concentrated their hearing exchanges with tech executives on the issue, and it’s been driving their legislative proposals. . . .

The New York University researchers called on Washington regulators to focus on what they called “the very real problems of social media.”

“Only by moving forward from these false claims can we begin to pursue that agenda in earnest,” Paul Barrett, the report’s primary author and deputy director of the NYU Stern Center for Business and Human Rights, said in a statement.

The researchers want the Biden administration to work with Congress to overhaul the tech industry. 

Their recommendations focus particularly on changing Section 230, a decades-old law shielding tech companies from lawsuits for the photos, videos and posts people share on their websites. . . . 

The researchers warn against completely repealing the law. Instead, they argue companies should only receive Section 230 immunity if they agree to accept more responsibilities in policing content such as disinformation and hate speech. The companies could be obligated to ensure their recommendation engines don’t favor sensationalist content or unreliable material just to drive better user engagement. 

“Social media companies that reject these responsibilities would forfeit Section 230’s protection and open themselves to costly litigation,” the report proposed.

The researchers also called for the creation of a new Digital Regulatory Agency, which would serve as an independent body and be tasked with enforcing a revised Section 230.

The report also suggested Biden could empower a “special commission” to work with the industry on improving content moderation, which would be able to move much more quickly than legal battles over antitrust issues. It also called for the president to expand the task force he announced on online harassment to focus on a broad range of harmful content.

They also called for greater transparency in Silicon Valley. 

The researchers said the platforms typically don’t provide much justification for sanctioning an account or post, and when people are in the dark they assume the worst. 

“The platforms should give an easily understood explanation every time they sanction a post or account, as well as a readily available means to appeal enforcement actions,” the report said. “Greater transparency — such as that which Twitter and Facebook offered when they took action against [a certain terrible person] in January — would help to defuse claims of political bias, while clarifying the boundaries of acceptable user conduct.”

One Way to Start Fixing the Internet

Yaël Eisenstat has been a CIA officer, White House adviser and Facebook executive. She says the problem with social media isn’t just what users post — it’s what the platforms do with that content. From Harvard Business Review:

While the blame for President Txxxx’s incitement to insurrection lies squarely with him, the biggest social media companies — most prominently my former employer, Facebook — are absolutely complicit. They have not only allowed Txxxx to lie and sow division for years; their business models have exploited our biases and weaknesses and abetted the growth of conspiracy-touting hate groups and outrage machines. They have done this without bearing any responsibility for how their products and business decisions affect our democracy; in this case, including allowing an insurrection to be planned and promoted on their platforms. . . .

The events of last week . . . demand an immediate response. In the absence of any U.S. laws to address social media’s responsibility to protect our democracy, we have ceded the decision-making about which rules to write, what to enforce, and how to steer our public square to CEOs of for-profit internet companies. Facebook intentionally and relentlessly scaled to dominate the global public square, yet it does not bear any of the responsibilities of traditional stewards of public goods, including the traditional media.

It is time to define responsibility and hold these companies accountable for how they aid and abet criminal activity. And it is time to listen to those who have shouted from the rooftops about these issues for years, as opposed to allowing Silicon Valley leaders to dictate the terms.

We need to change our approach not only because of the role these platforms have played in crises like last week’s, but also because of how CEOs have responded — or failed to respond. The reactionary decisions on which content to take down, which voices to downgrade, and which political ads to allow have amounted to tinkering around the margins of the bigger issue: a business model that rewards the loudest, most extreme voices.

Yet there does not seem to be the will to reckon with that problem. Mark Zuckerberg did not choose to block Txxxx’s account until after the U.S. Congress certified Joe Biden as the next president of the United States. . . . And while the decision by many platforms to silence Txxxx is an obvious response to this moment, it’s one that fails to address how millions of Americans have been drawn into conspiracy theories online and led to believe this election was stolen — an issue that has never been truly addressed by the social media leaders.

A look through the Twitter feed of Ashli Babbitt, the woman who was killed while storming the Capitol, is eye-opening. A 14-year Air Force veteran, she spent the last months of her life retweeting conspiracy theorists, QAnon followers, and others calling for the overthrow of the government. . . . The likelihood that social media played a significant part in steering her down the rabbit hole of conspiracy theories is high, but we will never truly know how her content was curated, what groups were recommended to her, who the algorithms steered her towards.

If the public, or even a restricted oversight body, had access to the Twitter and Facebook data to answer those questions, it would be harder for the companies to claim they are neutral platforms who merely show people what they want to see. Guardian journalist Julia Carrie Wong wrote in June of this year about how Facebook algorithms kept recommending QAnon groups to her. . . .  The key point is this: This is not about free speech and what individuals post on these platforms. It is about what the platforms choose to do with that content, which voices they decide to amplify, which groups are allowed to thrive and even grow at the hand of the platforms’ own algorithmic help.

So where do we go from here?

I have long advocated that governments must define responsibility for the real-world harms caused by these business models, and impose real costs for the damaging effects they are having on our public health, our public square, and our democracy. As it stands, there are no laws governing how social media companies treat political ads, hate speech, conspiracy theories, or incitement to violence. This issue is unduly complicated by Section 230 of the Communications Decency Act, which has been vastly over-interpreted to provide blanket immunity to all internet companies — or “internet intermediaries” — for any third-party content they host. Many argue that to solve some of these issues, Section 230, which dates back to 1996, must at least be updated. But how, and whether it alone will solve the myriad issues we now face with social media, is hotly debated.

One solution I continue to push is clarifying who should benefit from Section 230 to begin with, which often breaks down into the publisher vs. platform debate. To still categorize social media companies — who curate content, whose algorithms decide what speech to amplify, who nudge users towards the content that will keep them engaged, who connect users to hate groups, who recommend conspiracy theorists — as “internet intermediaries” who should enjoy immunity from the consequences of all this is beyond absurd. The notion that the few tech companies who steer how more than 2 billion people communicate, find information, and consume media enjoy the same blanket immunity as a truly neutral internet company makes it clear that it is time for an upgrade to the rules. They are not just a neutral intermediary.

However, that doesn’t mean that we need to completely re-write or kill Section 230. Instead, why not start with a narrower step by redefining what an “internet intermediary” means? Then we could create a more accurate category to reflect what these companies truly are, such as “digital curators” whose algorithms decide what content to boost, what to amplify, how to curate our content. And we can discuss how to regulate in an appropriate manner, focusing on requiring transparency and regulatory oversight of the tools such as recommendation engines, targeting tools, and algorithmic amplification rather than the non-starter of regulating actual speech.

By insisting on real transparency around what these recommendation engines are doing, how the curation, amplification, and targeting are happening, we could separate the idea that Facebook shouldn’t be responsible for what a user posts from their responsibility for how their own tools treat that content. I want us to hold the companies accountable not for the fact that someone posts misinformation or extreme rhetoric, but for how their recommendation engines spread it, how their algorithms steer people towards it, and how their tools are used to target people with it.

To be clear: Creating the rules for how to govern online speech and define platforms’ responsibility is not a magic wand to fix the myriad harms emanating from the internet. This is one piece of a larger puzzle of things that will need to change if we want to foster a healthier information ecosystem. But if Facebook were obligated to be more transparent about how they are amplifying content, about how their targeting tools work, about how they use the data they collect on us, I believe that would change the game for the better.

As long as we continue to leave it to the platforms to self-regulate, they will continue to merely tinker around the margins of content policies and moderation. We’ve seen that the time for that is long past — what we need now is to reconsider how the entire machine is designed and monetized. Until that happens, we will never truly address how platforms are aiding and abetting those intent on harming our democracy.

A Surprising Free TV Service for Us Cord Cutters (World Series Edition)

We canceled our cable TV service a few years ago and haven’t really missed it. But there are times being a “cord cutter” is a problem, like when a certain team is playing football and the game is on a local TV station. (We could try putting an antenna on the roof and watching for free — like in olden times — but that’s not a good option for us.)

Tonight being the first game of the World Series, somebody asked whether we could watch it. In the past, that’s meant signing up for one of the services that transmit local stations over the internet. We’ve used those a couple of times (via our handy Roku box) but they’re not worth the monthly subscription.

In search of a good option, I got a very pleasant surprise. There is a free service that transmits local TV stations on the internet. It’s called Locast. They can explain:

Locast is a not-for-profit service offering users access to broadcast television over the internet. We stream the signal . . . to select US cities. Locast has modernized the delivery of broadcast TV by offering streaming media free of charge. This is your right, this is our mission. 

In today’s modern world, we find ourselves in many different settings. Access to broadcast TV is our right. The existing antiquated technology doesn’t come close to meeting the needs of the average user who deserves to access broadcast programming, using the Internet as we do for almost every other service.

. . . many households just can’t get a proper signal to receive broadcast TV. This can be due to geographic anomalies or living in more isolated rural areas. Rather than relying on a traditional rebroadcast antenna, these folks should be allowed to use a modern method of streaming through our digital transcoding service. Free your TV!

From what I can see, this thing actually works. I created an account and registered our Roku box. Lo and behold, there are maybe 30 channels being broadcast out of New York City. Lo and behold, it’s Locast!

The service is free, but they do ask for donations, beginning at $5 a month (a reasonable request):

To do this we will need your support. There are considerable costs for equipment, bandwidth, and operational support that helps run Locast. These costs will only go up as we expand our service to new markets, as well as when more and more people cut the cord to become new Locasters.

There’s actually more to the story. I wondered who’s behind this operation. It turns out to be an organization called Sports Fans Coalition:

SFC is a grassroots, sports fans advocacy organization. We’re made up of sports fans who want to have a say in how the sports industry works, and to put fans first. 

We have one goal: to give you a seat at the table whenever laws or public policy impacting sports are being made.

So in addition to doing things like lobbying Congress and suing TV networks, they are making local TV available to around 44% of the US population. 

But wait! Is this legal? Apparently it is.

Locast.org is a “digital translator,” meaning that Locast.org operates just like a traditional broadcast translator service, except instead of using an over-the-air signal to boost a broadcaster’s reach, we stream the signal over the Internet . . . 

Ever since the dawn of TV broadcasting in the mid-20th Century, non-profit organizations have provided “translator” TV stations as a public service. Where a primary broadcaster cannot reach a receiver with a strong enough signal, the translator amplifies that signal with another transmitter, allowing consumers who otherwise could not get the over-the-air signal to receive important programming, including local news, weather and, of course, sports. Locast.org provides the same public service, except instead of an over-the-air signal transmitter, we provide the local broadcast signal via online streaming.

According to Locast, federal law makes this possible:

Before 1976, under two Supreme Court decisions, any company or organization could receive an over-the-air broadcast signal and retransmit it to households in that broadcaster’s market without receiving permission (a copyright license) from the broadcaster. Then, in 1976, Congress passed a law overturning the Supreme Court decisions and making it a copyright violation to retransmit a local broadcast signal without a copyright license. This is why cable and satellite operators . . . must operate under a statutory . . . copyright license or receive permission from the broadcaster.

But Congress made an exception. Any “non-profit organization” could make a “secondary transmission” of a local broadcast signal, provided the non-profit did not receive any “direct or indirect commercial advantage” and either offered the signal for free or for a fee “necessary to defray the actual and reasonable costs” of providing the service. 17 U.S.C. 111(a)(5).

Sports Fans Coalition NY is a non-profit organization under the laws of New York State. Locast.org does not charge viewers for the digital translator service (although we do ask for contributions) and if it does so, will only recover costs as stipulated in the copyright statute. Finally, in dozens of pages of legal analysis provided to Sports Fans Coalition, an expert in copyright law concluded that under this particular provision of the copyright statute, secondary transmission may be made online, the same way traditional broadcast translators do so over the air.

For these reasons, Locast.org believes it is well within the bounds of copyright law when offering you the digital translator service.

One last word from Locast:

Why hasn’t anyone done this before?

Good question. We don’t know. But we did a lot of due diligence before launching and learned that the technology to offer a digital translator service has gotten a lot less expensive and the law clearly allows a non-profit to provide such a service. So we’re the first. You’re welcome.

Now, if World Series games didn’t average 3 1/2 hours. . .