David Chalmers, the philosopher whose gravestone will probably say he came up with the phrase “the hard problem of consciousness”, has a new book out. It’s called Reality+: Virtual Worlds and the Problems of Philosophy. From the publisher’s blurb:
Virtual reality is genuine reality; that’s the central thesis of Reality+. In a highly original work of “technophilosophy,” David J. Chalmers gives a compelling analysis of our technological future. He argues that virtual worlds are not second-class worlds, and that we can live a meaningful life in virtual reality. We may even be in a virtual world already.
The site 3 Quarks Daily linked to an interview Prof. Chalmers gave to promote the book.
When discussing simulations (like the one we could already be living in), it’s helpful to keep in mind that there are at least two kinds. The first kind is what’s usually called “virtual reality”: something “not physically existing as such, but made by software to appear to do so”. Despite what Chalmers’s interviewer suggests, this type of virtual reality doesn’t raise many deep philosophical questions. The machines that created the Matrix in the movies did an amazing job, but from a philosophical perspective, so what? When he was plugged into the Matrix, fully immersed in what Chalmers calls “digital reality”, Neo was still an organism with a physical body. In the future Chalmers envisions, many of us might spend most of our time in a “place” like that. But lots of people already play video games. They make friends playing those games, they spend money, they laugh, they cry. So what?
The second kind of virtual reality would look like the Matrix, but it would be very different, so different that it would deserve to be called something other than “virtual reality” (maybe it already is). It’s the kind the philosopher Nick Bostrom referred to in his famous Simulation Argument (quoting from a 2003 article): “You exist in a virtual reality simulated in a computer built by some advanced civilization. Your brain, too, is merely a part of that simulation”.
Bostrom’s argument assumes that “what allows you to have conscious experiences is not the fact that your brain is made of squishy biological matter but rather that it implements a certain computational architecture . . . This assumption is quite widely (although not universally) accepted among cognitive scientists and philosophers of mind”.
Maybe I’m in the minority, but I don’t see any reason to think that consciousness is purely computational and could therefore be created on a computer. Presumably, a being could be made out of silicon or some other material and be conscious (feel pain, for example), but I believe it would still require a physical body. Chalmers thinks otherwise: that “algorithmic creatures” existing only as software running on a computer could be conscious. That assumes something about consciousness that isn’t necessarily true, and it’s very different from saying you could build something like a human out of non-standard materials.