On June 15, Matthew Crawford of The New Atlantis testified at a hearing on smart home technology held by the U.S. Senate Judiciary Committee, Subcommittee on Antitrust, Competition Policy & Consumer Rights. This is from his opening statement:
I have no expertise in antitrust. I come to you as a student of the history of political thought.
The convenience of the smart home may be worth the price; that’s for each of us to decide. But to do so with open eyes, one has to understand what the price is. After all, you don’t pay a monthly fee for Alexa or Google Assistant.
The business rationale for the smart home is to bring the intimate patterns of life into the fold of the surveillance economy, which has a one-way mirror quality. Increasingly, every aspect of our lives — our voices, our facial expressions, our political affiliations and intellectual predilections — is laid bare as data to be collected by companies that, for their own part, guard with military-grade secrecy the algorithms by which they use this information to determine the world that is presented to us, for example when we enter a search term, or in our news feeds. They are also in a position to determine our standing in the reputational economy. The credit rating agencies and insurance companies would like to know us more intimately; I suppose Alexa can help with that.
Allow me to offer a point of reference that comes from outside the tech debates, but can be brought to bear on them. Conservative legal scholars have long criticized a shift of power from Congress to the administrative state, which seeks to bypass legislation and rule by executive fiat, through administrative rulings. The appeal of this move is that it saves one the effort of persuading others, that is, the inconvenience of democratic politics.
All of the arguments that conservatives make about the administrative state apply as well to this new thing, call it algorithmic governance, that operates through artificial intelligence developed in the private sector. It too is a form of power that is not required to give an account of itself, and is therefore insulated from democratic pressures.
In machine learning, an array of variables is fed into deeply layered “neural nets” that simulate the binary, fire/don’t-fire synaptic connections of an animal brain. Vast amounts of data are used in a massively iterated (and, in some versions, unsupervised) training regimen. Because the strength of connections between logical nodes is highly plastic, just like neural pathways, the machine gets trained by trial and error and is able to arrive at something resembling knowledge of the world. The logic by which an AI reaches its conclusions is impossible to reconstruct, even for those who built the underlying algorithms. We need to consider the significance of this in the light of our political traditions.
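The “trial and error” training described here can be made concrete with a toy sketch: a tiny neural network learns the XOR function by repeatedly nudging its connection weights to reduce error. This is a minimal illustration in plain Python, not any production system, and the network size, learning rate, and epoch count are arbitrary choices for the example. Note that even in this tiny case, the trained weights are just lists of numbers; they encode the behavior without offering a human-readable rationale, which is the interpretability problem the testimony points to, at microscopic scale.

```python
# Toy sketch of training a neural net by trial and error (illustrative only).
# A 2-input, 2-hidden-unit, 1-output network learns XOR via backpropagation.
import math
import random

random.seed(0)  # fixed seed so the run is repeatable

# XOR: output is 1 exactly when the two inputs differ
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Connection strengths start random; training will adjust them.
# Each hidden unit has two input weights plus a bias; likewise the output.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    """Run one input through the network; return hidden and output activations."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def mean_loss():
    """Mean squared error over the four XOR cases."""
    return sum((forward(x)[1] - y) ** 2 for x, y in DATA) / len(DATA)

def train(epochs=5000, lr=0.5):
    for _ in range(epochs):
        for x, y in DATA:
            h, o = forward(x)
            # Backpropagation: push each weight slightly toward lower error.
            d_o = (o - y) * o * (1 - o)
            for j in range(2):
                d_h = d_o * w_o[j] * h[j] * (1 - h[j])
                w_h[j][0] -= lr * d_h * x[0]
                w_h[j][1] -= lr * d_h * x[1]
                w_h[j][2] -= lr * d_h
            w_o[0] -= lr * d_o * h[0]
            w_o[1] -= lr * d_o * h[1]
            w_o[2] -= lr * d_o

loss_before = mean_loss()
train()
loss_after = mean_loss()  # error shrinks, but the weights explain nothing
```

After training, the error is lower, yet inspecting `w_h` and `w_o` tells a human observer nothing about *why* any particular answer was given — and real systems have billions of such weights, not eleven.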
When a court issues a decision, the judge writes an opinion in which he explains his reasoning. He grounds the decision in law, precedent, common sense, and principles that he feels obliged to articulate and defend. This is what transforms the decision from mere fiat into something that is politically legitimate, capable of securing the assent of a free people. It makes the difference between simple power and authority. One distinguishing feature of a modern, liberal society is that authority is supposed to have this rational quality to it — rather than appealing to, say, a special talent for priestly divination. This is our Enlightenment inheritance. It appears to be in a fragile state. With the inscrutable arcana of data science, a new priesthood peers into a hidden layer of reality that is revealed only by a self-taught AI program — the logic of which is beyond human knowing.
The feeling that one is ruled by a class of experts who cannot be addressed, who cannot be held to account, has surely contributed to populist anger. From the perspective of ordinary citizens, the usual distinction between government and “the private sector” starts to sound like a joke, given how the tech firms order our lives in far-reaching ways.
Google, Facebook, Twitter, and Amazon have established portals that people feel they have to pass through to conduct the business of life, and to participate in the common life of the nation. Such bottlenecks are a natural consequence of “the network effect.” It was early innovations that allowed these firms to take up their positions. But it is not innovation that accounts for the unprecedented rents they are able to collect; it is these established positions, and the ongoing control of the data those positions allow them to gather, as in a classic infrastructure monopoly. If those profits measure anything at all, it is the reach of a grid of surveillance that continues to spread and deepen. It is this grid’s basic lack of intelligibility that renders it politically unaccountable. Yet accountability is the very essence of representative government.
Mr. Zuckerberg has said frankly that “In a lot of ways Facebook is more like a government than a traditional company.” If we take the man at his word, it would seem to raise the question: Can the United States government tolerate the existence of a rival government within its territory?
In 1776, we answered that question with a resounding “No!” and then fought a revolutionary war to make it so. The slogan of that war was “Don’t tread on me.” This spirited insistence on self-rule expresses the psychic core of republicanism. As Senator Klobuchar points out in her book Antitrust, the slogan was directed in particular at the British Crown’s grant of monopoly charters to corporations that controlled trade with the colonies. Today, the platform firms appear to many as an imperial power. The fundamental question “Who rules?” is pressed upon this body once again.