### The Computational Universe

##### October 25, 2002 by Seth Lloyd

The amount of information you could process if you were to use all the energy and matter of the universe is 10^90 bits, and the number of elementary operations it could have performed since the Big Bang is about 10^120 ops. Perhaps the universe is itself a computer, and what it’s doing is performing a computation. If so, that’s why the universe is so complex, and these numbers say how big that computation is. Also, that means Douglas Adams was right (the answer is “42”).

*Originally published on Edge, Oct. 24, 2002. Published on KurzweilAI.net Oct. 24, 2002.*

*On July 21, 2002, Edge brought together leading thinkers to speak about their "universe." Other participants:*

*The Emotion Universe by Marvin Minsky*
*The Intelligent Universe by Ray Kurzweil*
*The Inflationary Universe by Alan Harvey Guth*
*The Cyclic Universe by Paul Steinhardt*

I’m a professor of mechanical engineering at MIT. I build quantum computers that store information on individual atoms and then massage the normal interactions between the atoms to make them compute. Rather than having the atoms do what they normally do, you make them do elementary logical operations like bit flips, NOT operations, AND gates, and OR gates. This allows you to process information not only on a small scale, but in ways that are not possible using ordinary computers. In order to figure out how to make atoms compute, you have to learn how to speak their language and to understand how they process information under normal circumstances.

It’s been known for more than a hundred years, ever since Maxwell, that all physical systems register and process information. For instance, this little inchworm right here has something on the order of Avogadro’s number of atoms. And its entropy, divided by Boltzmann’s constant, is on the order of Avogadro’s number of bits. This means that it would take about Avogadro’s number of bits to describe that little guy and how every atom and molecule is jiggling around in his body in full detail. Every physical system registers information, and just by evolving in time, by doing its thing, it changes that information, transforms that information, or, if you like, processes that information. Since I’ve been building quantum computers I’ve come around to thinking about the world in terms of how it processes information.
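The entropy-to-bits conversion above can be sketched numerically. This is only an order-of-magnitude illustration: the assumption that the entropy is roughly N times Boltzmann's constant (one "unit" of entropy per atom) is a rough heuristic, not a real thermodynamic calculation.

```python
import math

# Rough sketch: an object with ~Avogadro's number of atoms has entropy
# S of order N * k_B (assumed heuristic, good only to orders of magnitude).
# The number of bits needed to specify its microstate is S / (k_B * ln 2).
N_AVOGADRO = 6.022e23            # number of atoms in the inchworm (rough)
entropy_over_kB = N_AVOGADRO     # S / k_B ~ N, by the heuristic above
bits = entropy_over_kB / math.log(2)

print(f"~{bits:.1e} bits")       # Avogadro's-order number of bits
```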

A few years ago I wrote a paper in Nature called "Ultimate Physical Limits to Computation," in which I showed that you could rate the information-processing power of physical systems. Say that you’re building a computer out of some collection of atoms. How many logical operations per second could you perform? Also, how much information could these systems register? Using relatively straightforward techniques you can show, for instance, that the number of elementary logical operations per second that you can perform with a given amount of energy, E, is just 2E divided by pi times h-bar. (h-bar is essentially 10^-34 joule-seconds.) If you have a kilogram of matter, which has mc^2 – around 10^17 joules – worth of energy, and you ask how many ops per second it could perform, the answer is about 10^17 joules divided by h-bar, or roughly 10^50 ops per second. It would be really spanking if you could have a kilogram of matter – about what a laptop computer weighs – that could process at this rate. Using all the conventional techniques that were developed by Maxwell, Boltzmann, and Gibbs, and then extended by von Neumann and others in the early part of the 20th century for counting numbers of states, you can count how many bits it could register. What you find is that if you were to turn the thing into a nuclear fireball – which is essentially turning it all into radiation, probably the best way of having as many bits as possible – then you could register about 10^30 bits. That’s many more bits than you could register if you just stored a bit on every atom, because Avogadro’s number of atoms stores about 10^24 bits.
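The "ultimate laptop" rate can be checked directly from the 2E/(pi h-bar) bound quoted above. A minimal sketch, using standard values for c and h-bar:

```python
import math

# The bound from the text: a system of average energy E can perform at
# most 2E / (pi * hbar) elementary logical operations per second.
HBAR = 1.0546e-34      # reduced Planck constant, J*s
C = 2.998e8            # speed of light, m/s

mass_kg = 1.0                        # the one-kilogram "ultimate laptop"
energy_J = mass_kg * C**2            # E = mc^2, ~10^17 J
ops_per_sec = 2 * energy_J / (math.pi * HBAR)

print(f"E = {energy_J:.1e} J, {ops_per_sec:.1e} ops/s")  # ~10^50 ops/s
```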

Having done this paper to calculate the capacity of the ultimate laptop, and also to raise some speculations about the role of information-processing in, for example, things like black holes, I thought that this was actually too modest a venture, and that it would be worthwhile to calculate how much information you could process if you were to use all the energy and matter of the universe. This came up because back when I was doing a Master’s in Philosophy of Science at Cambridge, I studied with Stephen Hawking and people like that, and I had an old cosmology text. I realized that I could estimate the amount of energy that’s available in the universe, and I knew that if I looked in this book it would tell me how to count the number of bits that could be registered, so I thought I would look and see. If you wanted to build the most powerful computer you could, you can’t do better than including everything in the universe that’s potentially available. In particular, if you want to know when Moore’s Law, this fantastic exponential doubling of the power of computers every couple of years, must end, it would have to be before every single piece of energy and matter in the universe is used to perform a computation. Actually, just to telegraph the answer, Moore’s Law has to end in about 600 years, without doubt. Sadly, by that time the whole universe will be running Windows 2540, or something like that. 99.99% of the energy of the universe will have been licensed by Microsoft by that point, and they’ll want more! They really will have to start writing efficient software, by gum. They can’t rely on Moore’s Law to save their butts any longer.
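The ~600-year deadline can be sanity-checked with a back-of-the-envelope doubling count. The starting point of ~10^12 bits for a present-day machine and the 2-year doubling period are assumptions for illustration; only the 10^90-bit ceiling comes from the text.

```python
import math

# Hedged sketch: how long until exponential growth would need every bit
# the universe can register? Assumed: today's machine holds ~10^12 bits
# and capacity doubles every 2 years; the 10^90-bit ceiling is from the text.
current_bits = 1e12
ceiling_bits = 1e90
doubling_years = 2.0

doublings = math.log2(ceiling_bits / current_bits)
years = doublings * doubling_years
print(f"{doublings:.0f} doublings, ~{years:.0f} years")  # a few centuries
```

Varying the assumed starting point by several orders of magnitude changes the answer by only decades, which is why the "about 600 years" claim is robust.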

I did this calculation, which was relatively simple. You take, first of all, the observed density of matter in the universe, which is roughly one hydrogen atom per cubic meter. The universe is about thirteen billion years old, and using the fact that there are pi times 10^7 seconds in a year, you can calculate the total energy that’s available in the whole universe. Knowing the total energy, you then divide by Planck’s constant – which tells you how many ops per second can be performed – and multiply by the age of the universe, and you get the total number of elementary logical operations that could have been performed since the universe began. You get a number that’s around 10^120. It’s a little bigger – 10^122 or something like that – but that’s within astrophysical accuracy, where if you’re within a factor of one hundred, you feel that you’re okay.
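The steps above can be redone numerically. This is a rough sketch: the horizon radius is taken as c times the age of the universe, and the density as exactly one hydrogen atom per cubic meter, both crude assumptions well within the "factor of one hundred" tolerance the text allows.

```python
import math

# Order-of-magnitude recomputation of the ~10^120 figure.
# Assumed: matter density of 1 hydrogen atom per m^3, horizon radius ~ c * age.
HBAR = 1.0546e-34                   # J*s
C = 2.998e8                         # m/s
M_HYDROGEN = 1.67e-27               # kg

age_s = 13e9 * math.pi * 1e7        # 13 billion years, pi*10^7 s per year
radius_m = C * age_s                # crude horizon radius
volume_m3 = (4 / 3) * math.pi * radius_m**3
energy_J = M_HYDROGEN * volume_m3 * C**2      # 1 atom per m^3, E = mc^2

ops_per_sec = 2 * energy_J / (math.pi * HBAR)  # rate bound from the text
total_ops = ops_per_sec * age_s
print(f"~10^{math.log10(total_ops):.0f} ops since the Big Bang")
```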

The other way you can calculate it is by calculating how it progresses as time goes on. The universe has evolved up to now, but how long could it go? One way to figure this out is to take the phenomenological observation of how much energy there is, but another is to assume, in a Guthian fashion, that the universe is at its critical density. Then there’s a simple formula for the critical density of the universe in terms of its age; G, the gravitational constant; and the speed of light. You plug that into this formula, assuming the universe is at critical density, and you find that the total number of ops that could have been performed in the universe over time (T) since the universe began is actually the age of the universe divided by the Planck time – the time at which quantum gravity becomes important – quantity squared. That is, it’s the age of the universe squared, divided by the Planck time squared. This is really just taking the energy divided by h-bar, and plugging in a formula for the critical density, and that’s the answer you get.
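The closed-form version of this argument is easy to evaluate: the total number of ops is the age of the universe divided by the Planck time, quantity squared. A minimal check, using the standard value of the Planck time:

```python
import math

# At critical density the total op count collapses to (age / Planck time)^2.
T_PLANCK = 5.39e-44                 # Planck time, s
age_s = 13e9 * math.pi * 1e7        # 13 billion years, pi*10^7 s per year

total_ops = (age_s / T_PLANCK) ** 2
print(f"~10^{math.log10(total_ops):.0f} ops")
```

This lands at about 10^122, the "a little bigger" figure mentioned above.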

This is just a big number. It’s reminiscent of other famous big numbers that are bandied about by numerologists. These large numbers are, of course, associated with all sorts of terrible crank science. For instance, there’s the famous Eddington-Dirac number, which is 10^40. It’s the ratio between the size of the universe and the classical size of the electron, and also the ratio between the electromagnetic force on, say, the hydrogen atom, and the gravitational force on the hydrogen atom. Dirac went down the garden path to try to make a theory in which this large number had to be what it was. The number that I’ve come up with is suspiciously reminiscent of 10^40, quantity cubed. This number, 10^120, is normally regarded as a coincidence, but in fact it’s not a coincidence that the number of ops that could have been performed since the universe began is this number cubed, because it actually turns out to be the first one squared times the other one. So whether these two numbers are the same could be a coincidence, but the fact that this one is equal to them cubed is not.
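Both ~10^40 ratios mentioned above can be checked with standard constants. A rough sketch; the horizon radius used for "the size of the universe" is an assumed round value:

```python
# Order-of-magnitude check of the two Eddington-Dirac ratios (~10^40 each).
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_E, M_P = 9.109e-31, 1.673e-27   # electron and proton masses, kg
COULOMB_E2 = 2.307e-28   # e^2 / (4 pi eps0), J*m
R_UNIVERSE = 1.2e26      # rough horizon radius, m (assumed round value)
R_ELECTRON = 2.82e-15    # classical electron radius, m

# Electromagnetic vs gravitational attraction in a hydrogen atom: both
# forces scale as 1/r^2, so the ratio is independent of the separation.
force_ratio = COULOMB_E2 / (G * M_E * M_P)
size_ratio = R_UNIVERSE / R_ELECTRON

print(f"force ratio ~{force_ratio:.1e}, size ratio ~{size_ratio:.1e}")
```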

Having calculated the number of elementary logical operations that could have been performed since the universe began, I went and calculated the number of bits, which is a similar, standard sort of calculation. Say that we took all of this beautiful matter around us on lovely Eastover Farm, and vaporized it into a fireball of radiation. This would be the maximum entropy state, and would enable it to store the largest possible amount of information. You can easily calculate how many bits could be stored by the amount of matter that we have in the universe right now, and the answer turns out to be 10^90. By standard cosmological calculations, this is 10^120, quantity to the 3/4 power. So we can store 10^90 bits in matter. And if one believes in somewhat speculative theories about quantum gravity such as holography – in which the amount of information that can be stored in a volume is bounded by the surface area of the volume divided by the Planck length squared – and if you assume that somehow information can be stored mysteriously on unknown gravitational degrees of freedom, then again you get 10^120. This is because, of course, the age of the universe divided by the Planck time, quantity squared, is equal to the size of the universe divided by the Planck length, quantity squared. So we can do 10^120 ops on 10^90 bits.
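Both consistency checks above are quick to verify: that 10^90 is 10^120 to the 3/4 power, and that the holographic bound, horizon area over Planck length squared, reproduces the op count. A sketch, with an assumed round value for the horizon radius:

```python
import math

# Check 1: the matter bit count is (10^120)^(3/4) = 10^90.
bits_matter = (1e120) ** 0.75

# Check 2: the holographic bound ~ (size of universe / Planck length)^2,
# which should match the (age / Planck time)^2 op count to within the
# text's factor-of-100 tolerance.
L_PLANCK = 1.616e-35                # Planck length, m
R_UNIVERSE = 1.2e26                 # rough horizon radius, m (assumed)
bits_holographic = (R_UNIVERSE / L_PLANCK) ** 2

print(f"matter: ~10^{math.log10(bits_matter):.0f} bits, "
      f"holographic: ~10^{math.log10(bits_holographic):.0f} bits")
```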

[Continued on Edge.org.]

*Copyright © 2002 by Edge Foundation, Inc. Published on KurzweilAI.net with permission.*