Is Our Reality Just a Big Video Game?
By Jim Elvidge
Jim Elvidge holds a Master's Degree in Electrical Engineering from Cornell University. He has applied his training in the high-tech world as a leader in technology and enterprise management, including many years in executive roles for various companies and entrepreneurial ventures. He also holds 4 patents in digital signal processing and has written articles for publications as diverse as Monitoring Times and the IEEE Transactions on Geoscience and Remote Sensing. Beyond the high-tech realm, however, Elvidge has years of experience as a musician, writer, and truth seeker. He merged his technology skills with his love of music, developed one of the first PC-based digital music samplers, and co-founded RadioAMP, the first private-label online streaming-radio company. For many years, Elvidge has kept pace with the latest research, theories, and discoveries in the varied fields of subatomic physics, cosmology, artificial intelligence, nanotechnology, and the paranormal. This unique knowledge base has provided the foundation for his first full-length book, "The Universe-Solved!"
It is now 9 years since the release of the movies "The Matrix," "eXistenZ," and "The Thirteenth Floor," all of which explored the idea that we might be living in a computer-generated simulation. Although the fun and speculation about the premise of these movies have largely died down, interest in the concept has shifted from pop culture to academia. Nick Bostrom, Director of the Future of Humanity Institute at Oxford, wrote his oft-quoted "Are You Living In a Computer Simulation?"1 in 2001. More recently, in 2008, Brian Whitworth of Massey University in New Zealand released the white paper "The Physical World as a Virtual Reality"2, which created a nice buzz in the blogosphere (the Slashdot forum alone collected over 1,000 comments on it). Somewhat tangentially, there is also the whole Transhumanism/Singularity movement, which predicts our future merger with AI but does not really address the idea that we may have already done so.
Also, at the beginning of 2008, I released the book "The Universe - Solved!" My take on the topic is a little different from the common theme of a computer-generated simulation. I consider that scenario to be but one of several possible ways in which our reality may be under programmed control. The book introduces these scenarios but really focuses on presenting all of the categories of evidence that our reality may indeed be programmed. This evidence takes us from feasibility to some level of probability.
But feasibility is where it starts, and for you skeptics out there, this article is for you. If, after reading it, you are still convinced that the idea is not at all feasible, I respectfully acknowledge your position and we can go our separate ways. Where do I stand on it? Put simply, I believe that a programmed reality is not only feasible but highly probable, although, as with every other idea in the world, I remain less than 100% convinced.
We shall begin our feasibility study with a nod to the 30th anniversary of the release of the arcade video game Space Invaders. Running on an Intel 8080 microprocessor at 2 MHz, it featured 64-bit characters on a 224 x 240 pixel 2-color screen. There was, of course, no mistaking anything in that game for reality. One would never have nightmares about being abducted by a 64-bit Space Invader alien. Fast forward 30 years and take a stroll through your local electronics superstore and what do you see on the screen? Is that a football game or is it Madden NFL '08? Is that an Extreme Games telecast or are we looking at a PS3 or Wii version of the latest skateboarding or snowboarding game? Is that movie featuring real actors or are they CG? (After watching "Beowulf", I confess that I had to ask my son, who is much more knowledgeable about such things, which parts were CG.)
The source of our confusion is simply Moore's Law, the general trend that technology doubles every two years or so. Actually, to put a finer point on it, the doubling rate depends on the aspect of technology in question. Transistor density doubles every two years, processor speed doubles every 2.8 years, and screen resolutions double about every four years. What still remains fascinating about Moore's Law is that this exponential growth rate has been consistent for the past 40 years or so. As a result, "Madden NFL '08" utilizes a 1920x1080 screen resolution (at 16-bit color), at least 1GB of memory, and runs on a PS3 clocked at 2 TFLOPS. Compared to Space Invaders, that represents an increase in screen resolution of over 500x, an increase in processing speed of a factor of 2 million, and an increase in the resolution of gaming models of a factor of well over a thousand. And so, "Madden" looks like a real football game.
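As a rough sanity check, the comparison above can be reproduced with a few lines of arithmetic. The figures are those quoted in the text; the exact factors depend on how you count (here, total bits per screen, and clock cycles loosely equated with FLOPS):

```python
# Space Invaders (1978): 224 x 240 pixels, 2 colors (1 bit per pixel)
si_bits = 224 * 240 * 1

# Madden NFL '08 on a PS3-era HD display: 1920 x 1080 at 16-bit color
madden_bits = 1920 * 1080 * 16

# Screen "resolution" factor, counted in bits per frame
resolution_factor = madden_bits / si_bits
print(round(resolution_factor))  # 617 -- i.e., "over 500x"

# Processing: Intel 8080 at 2 MHz vs. PS3 at ~2 TFLOPS
# (a loose comparison; the 8080 did far less per clock cycle,
# which is why the text's "factor of 2 million" is plausible)
speed_factor = 2e12 / 2e6
print(f"{speed_factor:.0e}")  # 1e+06
```

Depending on whether you weight the 8080's multi-cycle instructions, the speed factor lands somewhere between one and several million, consistent with the estimate above.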
So, given this relentless technology trend, at what point will we be able to generate a simulation so real that it will be indistinguishable from reality? To some extent, we are already there. From an auditory standpoint, we are already capable of generating a soundscape that matches reality. But the visual experience is the long pole in the tent. Given the average human's visual acuity and ability to distinguish colors, it would require generating a full-speed simulation at 150 MB/screen to match reality. Considering Moore's Law on screen resolution, we should reach that point in 16 years. Then, of course, there are the other senses to fool; however, as we shall see, they should not be too difficult. So, 16 years is our timeframe to generate a virtual reality indistinguishable from "normal" reality. Of course, we also have to experience that reality in a fully immersive environment in order for it to seem authentic. This means doing away with VR goggles, gloves, and other clumsy haptic devices. Yes, we are talking about a direct interface to the brain. So what is the state of the art in that field?
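The 16-year projection can be sketched as a simple doubling calculation. The 150 MB/screen target and the four-year doubling period come from the text; the ~9 MB/frame baseline is my own assumption for a 2008-era HD display, chosen to show how the arithmetic works rather than as a definitive figure:

```python
import math

def years_to_reach(current_mb, target_mb, doubling_years):
    """Years for capacity to grow from current_mb to target_mb,
    assuming steady Moore's-Law-style exponential doubling."""
    doublings = math.log2(target_mb / current_mb)
    return doublings * doubling_years

# Target from the text: ~150 MB per screen to match human visual acuity.
# Baseline (assumption): ~9 MB per frame for a 2008-era HD display.
# Doubling period from the text: screen resolution doubles every 4 years.
print(round(years_to_reach(9, 150, 4)))  # 16
```

Shift the baseline and the answer moves by a few years either way, but the order of magnitude, a decade or two, is robust.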
In 2006, a team at MIT did an experiment in which they placed electrodes in the brains of macaque monkeys and showed them a variety of pictures. By reading the signals from the electrodes, the team was able to determine with an accuracy of 80% which picture a particular monkey was looking at.3 Although certainly a nascent technology, this experiment and others like it have demonstrated that it is possible to determine someone's sensory stimuli by simply monitoring the electrical signals in their brain. We can leave it to Moore's Law for the perfection of this technology. What about the other direction - writing information into the brain? Dozens of people in the US and Germany have already received retinal implants whereby miniature cameras generate signals that stimulate nerves in the visual cortex in order to provide rudimentary vision of grids of light. Whereas it took 16 years to develop a 16-pixel version, it has taken only 4 years to develop a 60-pixel one.4 That rate of advance is even higher than Moore's Law because it is at the early part of a technological innovation. Further advances are being made in this field by stimulating regions deeper in the brain. For example, at Harvard Medical School researchers have shown that visual signals can be generated by stimulating the lateral geniculate nucleus (LGN), an area in the brain that relays signals from the optic nerve to the visual cortex. Perhaps stimulating the visual cortex directly will allow further acceleration of advances in generating simulated realities. Other senses, like taste, smell, and touch, seem not to require the same level of data simulation as vision and can also be accomplished via the same deep-brain stimulation methods.
Given the state of these technologies today and the fact that there are about one million axons that carry visual signals in parallel through the optic nerve, Moore's Law might say that we could achieve the electrical-implant-based simulation in a little over 30 years. However, nanotech may actually speed up the process.