Them That Has, Gets

A well-known French economist, Thomas Piketty, has written a big book called Capital in the Twenty-First Century. It’s getting a lot of attention, because Piketty is an expert on wealth and income and he’s reached a disturbing conclusion: 

Modern economic growth and the diffusion of knowledge have allowed us to avoid the Marxist apocalypse, but we have not modified the deep structures of capital and inequality – or in any case not as much as one might have imagined in the optimistic decades following World War II.

When the rate of return on capital exceeds the rate of growth of output and income, as it did in the nineteenth century and seems likely to do again in the twenty-first, capitalism automatically generates arbitrary and unsustainable inequalities that radically undermine the meritocratic values on which democratic societies are based [1].

In other words, that relatively happy period in the 20th century, during which economic inequality declined in the developed world, was an aberration, the result of special circumstances. Global capitalism is now returning to its normal state: an extended Gilded Age in which the rich get richer, workers struggle, inequality grows and democracy suffers. It’s not a pretty picture, but it’s based on a great deal of historical data.
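Piketty’s central inequality, r > g, can be illustrated with a toy simulation. The rates and starting values below are hypothetical, chosen only to show the dynamic, not taken from the book:

```python
# A toy sketch of Piketty's r > g dynamic, using hypothetical numbers.
# Capital compounds at the rate of return r, while national income
# grows at the slower rate g, so the capital-to-income ratio keeps rising.

r = 0.05   # assumed annual return on capital (5%)
g = 0.015  # assumed annual growth of output and income (1.5%)

capital, income = 300.0, 100.0  # start with a capital-to-income ratio of 3

for year in range(50):
    capital *= 1 + r
    income *= 1 + g

print(f"capital-to-income ratio after 50 years: {capital / income:.1f}")
```

After fifty years the ratio has grown from 3 to roughly 16: wealth already accumulated outpaces everything earned by working, which is the mechanism behind “the rich get richer”.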

Piketty argues that “there are ways democracy can regain control over capitalism and insure that the general interest takes precedence over private interests” (for example, by instituting a tax on wealth), but that’s not going to be easy, since capitalists are so good at screwing with democracy.

They buy up and consolidate media outlets, make the majority of campaign contributions, hire armies of lawyers and lobbyists, fund political action committees, support “think tanks”, pay for advertising campaigns on “the issues” and keep the “revolving door” between government and business revolving. All of which contributes to low taxes on high incomes, minimal taxes on capital gains and large estates, corporations being treated as “people”, feeble campaign finance laws, weak labor unions, political gridlock, vote suppression, voter apathy and lots of average citizens thinking that the accumulation of vast wealth by a tiny minority is inevitable and/or good for the majority. 

If you’d like to read more about Capital in the Twenty-First Century, including some skeptical comments, take a look at this New Yorker article by John Cassidy. If you want to feel even more depressed, pissed off or motivated to work toward political reform, check out Paul Krugman’s less skeptical “Wealth Over Work” column at the New York Times.

A Guide to Reality, Part 12

Chapter 7 of Alex Rosenberg’s The Atheist’s Guide to Reality is called “Never Let Your Conscious Be Your Guide”. A more grammatical title would have been “Never Let Consciousness Be Your Guide”. A longer but more accurate title would have been “Never Let Introspection Be Your Guide to What’s Happening in Your Mind”, because that’s the actual theme of the chapter: “Scientism requires that we give up everything introspection tells us about the mind” [147].

As he often does, Rosenberg overstates his case, apparently for rhetorical effect. After all, is it really true that introspection is a completely unreliable guide to what’s going on in our minds?

He offers as evidence three kinds of phenomena. The first is “blindsight”. Researchers have discovered that people with certain kinds of brain damage can perceive features of the world without being conscious of what they’re perceiving. For example, a person with a particular kind of damage to the visual cortex, who denies seeing anything at all, can “see” colors and shapes and even the expressions on other people’s faces. If asked whether they see something, they answer “no”, but when forced to guess, they give the correct answer. Here, then, is a case in which conscious introspection, which indicates that I don’t see anything, is unreliable, because I really do see.

Rosenberg’s second piece of evidence concerns our common belief that we have free will. Most of us are quite convinced that we make conscious decisions that result in freely chosen actions all the time. However, experiments suggest that when we decide to perform a random action like moving a finger a certain way, the physiological process that will inevitably lead to the action taking place is underway before we’re aware of our decision to perform the action.

The most interesting case he cites is one in which a neuroscientist stimulates a subject’s brain, causing the subject’s finger or wrist to move but also causing the subject, milliseconds later, to claim that the motion resulted from the subject’s conscious decision. The interpretation of these findings and their relevance to the free will problem are controversial, but they do suggest that conscious deliberation may not play as large a role in our decisions as we think it does.

Finally, Rosenberg argues that the existence of optical illusions shows that consciousness is unreliable. We interpret visual stimuli according to unconscious rules of thumb (mixed metaphor). These rules of thumb, which are probably the combined product of human evolution and our own experience, often mislead us. The circles in the diagram below look different but really aren’t, so here’s another case, according to Rosenberg, in which we shouldn’t let consciousness be our guide. (The book includes some interesting illustrations from the Purves Lab, which are available here.)

[Illustration: circles that appear to differ but are actually identical, from the Purves Lab]

Chapter 7 is relatively brief, because in this chapter Rosenberg is laying the groundwork for an especially counterintuitive idea he’s going to discuss in the next chapter (that we don’t actually think “about” anything at all). For now, here’s his conclusion:

We have seen that consciousness can’t be trusted to be right about the most basic things: the alleged need for visual experiences to see colors and shapes, the supposed role of conscious decisions in bringing about our actions, even the idea that we [see the world as it is]. If it can be wrong about these things, it can be wrong about almost everything it tells us about ourselves and our minds [162].

An important thing to note regarding Rosenberg’s argument is that he isn’t really claiming that conscious sense perception is completely unreliable (at least that’s not what I think he’s claiming). Although he denies that colors, for example, are mind-independent properties, he clearly believes that we do learn about the world using our eyes and ears. Otherwise, it would be odd to offer evidence that a blind person can perceive the “correct” color of an orange and that optical illusions are illusory (compared to what?). It would also be difficult to explain why most of us navigate the world better when our eyes are open and we’re not wearing headphones.

His principal thesis in this chapter is that certain conclusions we naturally draw from introspection (“the examination or observation of one’s own mental and emotional processes”) are mistaken. Specifically, it’s natural for us to assume that we need to be conscious in order to perceive certain features of the world, that our choices clearly determine our actions, and that (prior to being let in on the secret) we can always tell whether two lines are the same length or two circles are the same color just by looking.

I think Rosenberg is wrong, however, when he concludes that introspection can’t be trusted about “the most basic things”. What are the most basic conclusions we can draw from introspection? I’m not sure about that, but some natural conclusions seem more basic than the ones Rosenberg criticizes.

For example, we are better at perceiving features of the world when we’re relatively conscious (like when we’re awake) than when we’re relatively unconscious (like when we’re asleep). Some people see and hear better than others. Sight is usually reliable, even though there are occasional optical illusions. And when we feel angry or sad, we generally are angry or sad. It’s simply a mistake to think that introspection is always wrong about the most basic things.

I won’t offer a more basic conclusion about free will, except to say that conscious deliberation seems to help in making some decisions (whether to enroll at a college, get married or buy a house, for example) – whatever the underlying physiological processes are. Rosenberg may be right that conscious decisions are always the aftermath of unconscious decisions. We never really know what decision we’re going to make until it starts to “feel” like the right decision or we actually do something. Maybe our brains always do the necessary work unconsciously right before we discover what we’ve decided.

Coming up (sooner or later), part 13 of “A Guide to Reality”: Is it true that the brain does everything without thinking about anything at all?

Evolution Of A Sunset

According to Wikipedia, “Barnegat Bay is a small brackish arm of the Atlantic Ocean, approximately 30 miles long, along the coast of Ocean County, New Jersey”. That doesn’t sound very inviting, but a talented photographer can make of it something like this.

(Note: the images look a bit sharper on Denise Bush’s blog, so please click on the link below, where you can also see some other views of the Garden State — which I don’t find uninviting at all! It was the “small brackish arm” that got me.)

Denise Bush’s Photo Blog

As the sun lowered itself past the horizon a peaceful calm came over the bay. I could hear the sea birds and waterfowl calling to one another while settling in for the night. The sun painted the sky with a beautiful pastel gradient of color that deepened with every second. I used my 6-stop neutral density filter to lengthen my exposures and capture the passing of time. At the end of the light show the sky glowed with a brilliant warm red that made the scene seem surreal before passing into the night.

‘Sunset On The Bay I’ © Denise Bush

‘Sunset On The Bay II’ © Denise Bush

‘Sunset On The Bay III’ © Denise Bush

‘Sunset On The Bay IV’ © Denise Bush

View original post

Being Paid What You’re Worth

Robert Reich is an economist who was Secretary of Labor in the 90s and is now a Professor of Public Policy at UC Berkeley. He’s also a blogger who knows what he’s talking about (unlike some of us). I doubt he would mind this extended quote from RobertReich.org:

“Paid-what-you’re-worth” is a dangerous myth.

Fifty years ago, when General Motors was the largest employer in America, the typical GM worker got paid $35 an hour in today’s dollars. Today, America’s largest employer is Walmart, and the typical Walmart worker earns $8.80 an hour.

Does this mean the typical GM employee a half-century ago was worth four times what today’s typical Walmart employee is worth? Not at all. Yes, that GM worker helped produce cars rather than retail sales. But he wasn’t much better educated or even that much more productive. He often hadn’t graduated from high school. And he worked on a slow-moving assembly line. Today’s Walmart worker is surrounded by digital gadgets — mobile inventory controls, instant checkout devices, retail search engines — making him or her quite productive.

The real difference is the GM worker a half-century ago had a strong union behind him that summoned the collective bargaining power of all autoworkers to get a substantial share of company revenues for its members. And because more than a third of workers across America belonged to a labor union, the bargains those unions struck with employers raised the wages and benefits of non-unionized workers as well. Non-union firms knew they’d be unionized if they didn’t come close to matching the union contracts.

Today’s Walmart workers don’t have a union to negotiate a better deal. They’re on their own. And because fewer than 7 percent of today’s private-sector workers are unionized, non-union employers across America don’t have to match union contracts. This puts unionized firms at a competitive disadvantage. The result has been a race to the bottom.

By the same token, today’s CEOs don’t rake in 300 times the pay of average workers because they’re “worth” it. They get these humongous pay packages because they appoint the compensation committees on their boards that decide executive pay. Or their boards don’t want to be seen by investors as having hired a “second-string” CEO who’s paid less than the CEOs of their major competitors. Either way, the result has been a race to the top.

Professor Reich doesn’t say anything about the effects of globalization in this post, but it’s obviously a factor. Our economic bottom isn’t in West Virginia or Mississippi anymore; it’s in Guatemala and Bangladesh. Even so, a strong labor movement would help slow down the race to the bottom and to the top.

There’s a question worth asking, however: Would it be better from an ethical point of view if workers in places like Guatemala were paid more at the cost of American workers being paid less? In other words, are we in rich countries automatically entitled to a better standard of living than people in poor countries? After all, for a worker in Guatemala, our race to the bottom is his or her race to the middle. If work can be performed just as well but more cheaply in Guatemala, why should it be performed in California?

I don’t know the answer to that question. Although it’s clear we should slow down the race to the very top (it’s gotten completely ridiculous), I’m not sure what should be done for the rest of us. Maybe the answer is to provide a reasonable minimum income for those of us in the rich countries, while doing more to improve the lives of those at the bottom. 

Update:

For example, as suggested here: Considering a No-Strings-Attached Basic Income for All Americans

Libertarianism Again

While writing about libertarianism a few weeks ago, I came across a 2011 article at Slate by Stephen Metcalf called “The Liberty Scam”. Its subtitle is “Why even Robert Nozick, the philosophical father of libertarianism, gave up on the movement he inspired”. Having finally gotten around to reading it, I highly recommend the article if you’ve ever considered yourself an economic libertarian or tried to argue with one. Or if you have an interest in politics or the recent history of ideas.

Metcalf points out that modern, generally right-wing economic libertarianism relies on a very selective view of capitalism. In particular, Nozick’s famous Wilt Chamberlain argument equates all economic activity with the special case of an extremely talented basketball player who can negotiate a stratospheric salary. Nozick claimed that someone like Wilt Chamberlain should be able to negotiate whatever salary the market will bear, and that forcing Chamberlain to pay taxes in order to benefit other people is forced labor (“Need a gardener allocate his services to those lawns which need him most?”). The rest of us, presumably, are a lot like Wilt Chamberlain.

After demolishing the Chamberlain argument and briefly explaining why Nozick came to appreciate that society is more than a random collection of individuals, Metcalf tries to explain why someone as thoughtful as Robert Nozick would make the arguments he did. Metcalf’s theory is that in 1974, when Nozick published Anarchy, State, and Utopia, America and places like Harvard had benefited from decades of enormous government investment:

The GI Bill was on its way to investing more in education grants, business loans, and home loans than all previous New Deal programs combined. By 1954, with the Cold War in full swing, the U.S. government was spending 20 times what it had spent on research before the war.

As a result, members of the academic elite, including Harvard professors, were sharing in the general economic prosperity, even if their salaries hadn’t matched Wilt Chamberlain’s. Unfortunately for their bank accounts, however, tax rates were much higher than they are today. In 1969, when Nozick was writing his classic book, the highest federal tax rate was 77%, almost twice what it is now. It’s no wonder that Nozick saw virtue in a political ideology that considers taxation beyond the bare minimum a kind of theft:

By allowing for the enormous rise in (relative) income and prestige of the upper white collar professions, Keynesianism created the very blind spot by which professionals turned against Keynesianism…. Many upper-white-collar professionals convinced themselves their pre-eminence was not an accident of history or the product of negotiated protections from the marketplace but the result of their own unique mental talents fetching high prices in a free market for labor. Just this cocktail of vanity and delusion helped Nozick edge out [the liberal philosophy of John] Rawls in the marketplace of ideas, making Anarchy a surprise best-seller. It helped make Ronald Reagan president five years later. So it was the public good that killed off the public good.

One day the tide will turn (maybe). In the meantime, I was going to sum up with that well-known quote to the effect that we in the modern world are ignorantly walking in the footsteps of some obscure academic of the past, but couldn’t find the damn quotation (clearly, search engines haven’t got artificial intelligence quite yet). So I decided to go with a remark attributed, probably incorrectly, to Abraham Lincoln:

The philosophy of the schoolroom in one generation will be the philosophy of government in the next.

But while writing the previous paragraph, a key word popped into my head, namely, “scribbler”, which is the term John Maynard Keynes used when he wrote The General Theory of Employment, Interest and Money, 35 years before Robert Nozick wrote Anarchy, State, and Utopia:

The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back…. Sooner or later, it is ideas, not vested interests, which are dangerous for good or evil.