The Quest for the Quantum Computer

Foreword by David Deutsch

About The Book

Taking readers to the cutting edge of physics, mathematics, and computer science, Julian Brown tells the dramatic story of the groundbreaking efforts to create a fundamentally new kind of computer that would be astronomically more powerful than today's machines. In 1998, a team of researchers announced they had produced the world's first quantum computer in a cup of chloroform. In fascinating, fully accessible detail, Brown explains the ideas that led up to this accomplishment and explores the mind-stretching implications of this leap into the bizarre world of quantum physics. The Quest for the Quantum Computer is a riveting look at what promises to be one of the most important scientific and technological ideas of the twenty-first century.

Excerpt

Chapter One: Late-Night Quantum Thoughts

Any sufficiently advanced technology is indistinguishable from magic.

Arthur C. Clarke

Life in Other Universes

Gaunt, hair unkempt, and skin a ghostly shade, David Deutsch cuts a strange figure even by the standards of the eccentrics and oddballs that so often inhabit the upper reaches of science. Yet appearances can be deceptive. Talk to Deutsch and you won't find the withdrawn, head-in-the-clouds academic you might expect but someone who is articulate, personable, and exceedingly bright. You'll also quickly discover this is an individual whose ideas about life and the universe are bigger and more radical than those of anyone you are ever likely to meet.

As I drove from London to visit him at his home in Oxford, a fatal accident on the road alerted me to the bizarre implications of some of Deutsch's ideas, in particular his resolute belief in the existence of an infinite set of parallel universes -- the multiverse, as he calls it. Within the multiverse there are endless alternate realities in which, supposedly, a huge army of mutant copies of everyone on this and other planets plays out an almost limitless variety of dramas. According to this world view, each time we make the smallest decision the entire assemblage divides along different evolutionary paths like some vast ant colony splitting off in divergent directions. In the process, large numbers of our copies suddenly embark on different lives in different universes.

Presumably, I thought, in some of those fabled other universes the truck in front hadn't jackknifed, the accident hadn't happened, the crash victims were still alive, and I was now on my way instead of stuck in a jam. But did that mean the victims would therefore feel as though they had survived regardless of the accident because they would only be aware of the universes in which they had continued to live?

A foreshadowing of this all-possible-worlds idea was intriguingly captured by Jorge Luis Borges in a short story now often cited by quantum physicists. In "The Garden of Forking Paths," written in 1941, an illustrious Chinese governor is said to have written a strange novel constructed as a kind of labyrinth. "In all fictional works, each time a man is confronted with several alternatives, he chooses one and eliminates the others; in the fiction of Ts'ui Pen, he chooses -- simultaneously -- all of them," Borges wrote. By writing a novel that pursued all possible story lines simultaneously, Borges's fictitious author, Ts'ui Pen, could have been writing a script for the multiverse.

But why, I wondered, should we care about other universes? There's surely enough to worry about in this one without concerning ourselves over the fate of the poor lost souls living in others. With such thoughts running through my mind, I was ready to be briefed by this world's leading advocate for the existence of those other worlds. When I finally arrived and entered Deutsch's house, it took me a moment to persuade myself that I hadn't stepped right into another universe. His place was an unbelievable mess. Papers, books, boxes, computer equipment, mugs, magazines, videos, and numerous other items were strewn everywhere. Even a trip to the bathroom entailed climbing over a mound of cardboard boxes. How does this man function, I wondered?

Okay, so this wasn't exactly a parallel universe, more a parallel mode of living. Yet out of the chaos there were definite, albeit eclectic, signs of order. A whiteboard in the corner of his living room was covered with equations, a reference to Oliver Stone's Natural Born Killers here, and notes about time travel there. On a piano in another corner of the room lay open a score of Beethoven's Waldstein Sonata. On the floor, littered among the detritus, were strange and elaborate drawings of spidery-looking aliens, designs for an animated film that Deutsch was making with some friends.

But it wasn't long before our conversation leapt far beyond the confines of Deutsch's domicile. "Somewhere in the multiverse there are Earths that were not hit by a comet 60 million years ago and on which the dinosaurs evolved into intelligent beings," Deutsch informed me. "Now those dinosaurs might look completely different from us, and their houses might look different from ours, and their cars probably look different from ours -- but if you look at their airplanes and spaceships, they probably look much more similar to ours, and if you look at their microchips, they're going to look very similar to ours. The more knowledge that is embodied in a physical object, the more alike it will look in different universes because the more it will have been subjected to the same tests of efficiency, validity, truth, and so on."

"Do you really think there are such universes, in which dinosaurs have all those things?" I asked incredulously. "Undoubtedly," Deutsch replied without hesitation. "That's what the laws of physics tell us."

Deutsch is a physicist, winner of the 1998 Paul Dirac prize for theoretical physics and a researcher at the Centre for Quantum Computation at Oxford University's Clarendon Laboratory. Owing to his unusual lifestyle, though, he rarely sets foot in the university's buildings, preferring instead to inhabit his own private world. If you call Deutsch on the telephone before 2 p.m., you are unlikely to get an answer. He prefers to sleep during the day and work uninterrupted through the night.

During those long nocturnal hours Deutsch has deliberated on some extraordinary ideas. Among them is one of the biggest ideas in physics and mathematics since the development of quantum theory, a breakthrough that could also lead to the most significant invention since the digital computer. It is the concept of a quantum computer, a machine that has begun a revolution in computing -- and far beyond -- that looks set to be as momentous as the upheaval in Newtonian physics brought about in the early twentieth century. As Discover magazine once put it, a quantum computer "would in some sense be the ultimate computer, less a machine than a force of nature."

Why so? Because a quantum computer would make any existing computer -- even the fastest Cray supercomputer -- seem exceedingly puny. It would solve problems that will never be cracked by any conceivable nonquantum successors of current computers. It would make possible virtual-reality simulations of things we know can never be simulated with current technology, even in principle. But perhaps most important, it would throw open doors to a totally new kind of laboratory in which we could explore those alternate universes of which Deutsch is so fond. Imagine living in a small house for many years and then discovering in the basement a trapdoor that opened onto a colossal subterranean world of rooms that appeared to stretch on into infinity. For physicists and computer scientists that, in some sense, is what the arrival of the first quantum computer would be like.

Exciting though the idea of an extraordinary new kind of computing machine might be, the quest for the quantum computer touches upon issues that are much grander than simply a new technology. The study of the ideas involved has already led to remarkable and strange new insights into the nature of our universe.

From the early 1970s to the present day, Deutsch has seen his ideas grow from an apparently small ripple in the world of academic physics to what is now an intellectual maelstrom. By the nineties, the subject of quantum computing had become one of the most exciting and rapidly moving research disciplines in physics and computing. Chaos theory and superstring theory -- the scientific gold rushes of the 1980s -- suddenly seemed far less enticing in the light of the new Klondike: quantum computation. Theoretical and experimental physicists, mathematicians, and computer scientists realized that this subject offered one of the remaining big opportunities within physics to mine the secrets of nature.

Following the announcement of the first experimental demonstrations of simple quantum programs, the media began to catch on. The inevitable headline QUANTUM COMPUTING TAKES A QUANTUM LEAP, in the International Herald Tribune, was occasioned when researchers at MIT, IBM, Oxford University, and the University of California at Berkeley reported in 1998 that they had succeeded in building the first working computers based on quantum mechanics. The story was all the more compelling because these scientists had fashioned their first quantum processors from the unlikeliest of materials. The quantum calculations weren't performed, as you might imagine, by manipulating tiny specks of matter, nor were they the product of elaborate banks of electronic hardware or even huge contraptions the size of particle accelerators. There were no laser beams, no smoke and no mirrors.

No, the quantum logic was all executed within the modest confines of a small flask of liquid. No ordinary liquid, either, but a specially prepared version of the common anesthetic and solvent chloroform. Who would have predicted such a thing a few years before?

Of course, these early demonstrations of experimental quantum computing were very simple, and to be fair, other demonstrations have used the more familiar apparatus of the modern physics lab, such as lasers and mirrors. But none of these hinted at the truly extraordinary powers a fully fledged machine would possess. If you imagine the difference between an abacus and the world's fastest supercomputer, you would still not have the barest inkling of how much more powerful a quantum computer could be compared with the computers we have today. More remarkable yet, this amazing discovery originated not from some breakthrough in electronics or software design but in large measure from pure theory -- from Deutsch's highly individual take on the laws of physics.

The Quantum AI Experiment

The origins of this quantum information revolution are to be found in the pioneering work of Rolf Landauer, a physicist who became an IBM Fellow in 1969 and who, until his death in 1999, worked at IBM's Thomas J. Watson Research Center in Yorktown Heights, near New York City. "Back in the 1960s, when I was still in school," Deutsch said, "Landauer was telling everyone that computation is physics and that you can't understand the limits of computation without saying what the physical implementation [i.e., type of hardware] is. He was a lone voice in the wilderness. No one really understood what he was talking about -- and certainly not why."

The prevailing view at the time was that computation was ultimately an abstract process that had more to do with the world of mathematical ideals than with the physics of machines. But Landauer's view began to take hold when he, and subsequently his IBM colleague Charles Bennett, discovered a crucial link between physics and computation, which we'll explore in the next chapter.

"The next thing that happened was in 1977-78," Deutsch continued. "I proposed what today would be called a quantum computer, though I didn't think of it as that. I thought of it as a conventional computer operating by quantum means that had some additional quantum hardware that allowed it to do extra things. But I didn't think of the additional hardware as being part of the computer. The purpose of this was not to do computations, it was to test the many-universes theory. It had been thought that the theory was untestable. I realized that if you had a quantum computer, and the quantum computer could count as an observer, then it could observe things that would distinguish between the many-worlds interpretation of quantum mechanics and other so-called interpretations."

Many worlds, many universes -- or nowadays, "many minds" or "many histories" -- take your pick. All are variants of a particular interpretation of quantum theory, the theory of the subatomic realm. Ever since the development of quantum theory at the beginning of the twentieth century, scientists have known that on the atomic scale matter behaves very differently from what we see on everyday scales. Particles such as electrons and atoms do not behave like the billiard balls of Newton's classical physics but seem to exhibit characteristics more in keeping with fuzzy wavelike entities. These and other properties led to a great deal of puzzlement and angst over how quantum theory should be understood as a description of reality -- or realities. So much so that today there are a multitude of different interpretations of what goes on at the atomic level.

In the early 1980s, Deutsch's proposed experiment (described more fully in Chapter 3) sounded like the stuff of science fiction. To test the existence of multiple universes, he envisaged the construction of a thinking, conscious artificial intelligence whose memory worked "at the quantum level." Such a machine, he claimed, could be asked to conduct a crucial experiment inside its own brain and report back to us whether Deutsch was indeed right to believe in the existence of parallel universes. The idea of assembling a conscious machine took some swallowing, but what exactly did Deutsch have in mind when he talked of "quantum memory"?

Well, nearly 20 years later we have the answer because quantum computer memory is on the verge of becoming an experimental reality. But even if Deutsch's ideas looked far-fetched at first sight, they were compellingly presented and, more important, appeared to be scientifically testable. As the philosopher of science Karl Popper pointed out, only those theories that are open to the possibility of experimental refutation or falsifiability are worthy of being considered good science. The trouble was that Deutsch's test would have to be carried out on a new kind of machine that no one had yet built or knew how to build.

But he was following a well-established tradition in physics: dreaming up thought experiments to question fundamental assumptions about the world. Such experiments, even if they cannot be carried out in practice, have often acted as a powerful spur for creative thinking in physics. Albert Einstein, for example, devised several that were instrumental in the development of relativity and quantum theory. Sure enough, Deutsch's thought experiment, although nearly impossible to carry out then as now, convinced him that the many-universes interpretation had to be correct.

The "experiment" didn't make much of an impression on other scientists, though. This was largely because the work remained unpublished for many years. Deutsch originally described the idea in 1978 in a paper he sent to the journal Physical Review. The editors had a policy of not publishing papers on the interpretation of quantum theory, but Deutsch felt his idea was sufficiently exceptional to warrant breaking that rule. The journal's editors thought otherwise. Unfortunately, Deutsch did not submit his paper to any other journal and thereby lost his claim to being the first to publish something close to the concept of a quantum computer. Why didn't he publish elsewhere?

Deutsch explained that he was not especially interested in claiming priority for his ideas. "I usually lose interest in a paper when it's about three-quarters finished," he said. "Finishing papers is quite a chore. I've got a whole stack of unfinished papers. If I do finish a paper, I send it off and it's good-bye, I'm already working on other things. It was only when I happened to meet David Finkelstein at the Quantum Gravity II conference [in 1984] that I mentioned this paper to him -- we were all having a riotous argument, as often happens over dinner, about the many-universes interpretation. Somebody said that it couldn't be tested, and I said, 'Yes, it can, I have a proof of that!' and Finkelstein said, 'Where's it been published?' I said, 'It hasn't. Here's a preprint,' and he said, 'Well, can I publish it then?'"

Finkelstein was one of the editors of the International Journal of Theoretical Physics, which duly published Deutsch's paper in 1985, some seven years after it had been written. Moreover, Deutsch published another paper in the same year that, though it didn't prove an overnight sensation, is now widely regarded as a landmark in physics. Among other things, it highlighted a property of quantum computers that no one else had noticed: quantum parallelism. It is this crucial feature that now causes so much excitement in the study of quantum computation.

Exploring Hilbert Space

Interpretations aside, it's long been known that at the atomic level waves can behave like particles, and particles have waves associated with them. A single entity such as an electron, for example, can travel along many different routes simultaneously as if it were really a spread-out phenomenon like a wave. The essential idea of quantum parallelism advanced by Deutsch was this: If an electron can explore many different routes simultaneously, then a computer should be able to calculate along many different pathways simultaneously too.

Despite the enormous significance of that idea, Deutsch admits his 1985 quantum parallelism paper "didn't make a big splash" at first. Though he had shown quantum computers could do certain tasks more efficiently than classical computers, the tasks in question were hopelessly limited and rather contrived. One, for example, involved solving a rather simple mathematical problem in which you could get two answers in one shot rather than having to do the calculation twice. This meant you could do parallel calculations in a way that had never been realized before, but the irony was that the calculation worked only 50 percent of the time. So superficially there appeared to be no practical gain, on average. Not surprisingly perhaps, the subject maintained a low profile.

Nevertheless, a few other scientists took note. Charles Bennett at IBM and, a little later, Artur Ekert at Oxford University, immediately saw the theoretical importance and started working flat out on quantum computation and information theory, producing some of the first practical applications. According to Deutsch, it was this theoretical fuse that later ignited the current explosion of interest -- but only after a dramatic intervention from another quarter.

In 1994 Peter Shor, a computer scientist working at AT&T's Bell Labs in New Jersey, discovered how a quantum computer could solve a very important mathematical problem, one that had long been known to be beyond the reach of ordinary computers. He showed how a quantum computer could calculate the factors, or divisors, of very large numbers extremely rapidly. Solving this particular problem had implications that went far beyond mathematics.

Ever since the 1970s, the difficulty of finding the factors of large numbers had formed the basis of an extremely important and now widely used method of protecting information with secret codes. The code works a little like the secret numbers you find on a so-called one-time pad, only instead of having to carry around a special pad that might be stolen, all you need to send a message is a large number, n, that can safely be given to anybody. The key to unlocking the code depends on knowing which numbers when multiplied together produce n, and normally this would be known only by the intended recipient. By showing how a quantum computer could factor very large numbers quickly, Shor threatened to blow a devastating hole through the world's most sophisticated forms of secrecy.
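
The book does not name the scheme, but the method it alludes to is public-key cryptography of the RSA type, whose security rests on the difficulty of recovering the prime factors of n. The Python sketch below is not from the book; it uses deliberately tiny primes and an arbitrary message purely for illustration, to show how n can be published freely while decryption requires knowing the primes that produce it.

    # Toy sketch of an RSA-style, factoring-based code (illustration only;
    # real keys use primes hundreds of digits long).
    p, q = 149, 163            # the secret prime factors
    n = p * q                  # n = 24,287 -- safe to give to anybody
    phi = (p - 1) * (q - 1)    # known only to whoever knows p and q
    e = 5                      # public exponent, chosen coprime to phi
    d = pow(e, -1, phi)        # private exponent (modular inverse; Python 3.8+)

    message = 4242                     # any number smaller than n
    ciphertext = pow(message, e, n)    # anyone holding (n, e) can encrypt
    recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
    assert recovered == message

Breaking such a code is exactly the factoring problem: anyone who can split n back into p and q can reconstruct the private exponent d.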

The Shor program (or algorithm) not only offered powerful evidence for the claim that a quantum computer could far exceed the capabilities of a conventional computer, it revealed how that extra resourcefulness could be applied to an interesting mathematical problem, and as an added bonus, it offered a powerfully important real-world application. It was this third issue that did most to energize efforts to build a quantum computer: because of the huge repercussions such a machine would have for military communications, government secrecy and surveillance, data protection, e-commerce, and the privacy of ordinary citizens. If anyone could build a full-scale quantum computer, it's possible that he or she would be able to access everything from your bank account to the Pentagon's most secret files. It's no surprise, then, that significant funds backing this line of research have come from such organizations as the U.S. Department of Defense, the National Security Agency, NATO, and the European Union.

If anybody is going to build one of these machines, the intelligence agencies are making certain they will know about it first. It's not without reason, then, that Shor's factorization program was quickly recognized as the quantum computer's "killer app."

For Deutsch, though, the power of Shor's algorithm signified even more clearly the reality of parallel universes. Consider the problem of factoring the number 15. It's 3 times 5. Now try factoring a five-digit number, like 24,287. Not so easy. Even with a calculator it would probably take a while to find the factors, which are 149 and 163. Now notice two things. First, the opposite process, multiplying the two numbers, is straightforward. Second, it's only when the factors are quite large, and indivisible by smaller numbers (149 and 163 are prime numbers), that the problem is hard. If, on the other hand, I asked you to factor 24,288, you could easily get the answer by trial division because the factors -- 2, 3, 11, and 23 -- are small.
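
The asymmetry is easy to see in a few lines of code. This minimal trial-division sketch (mine, not the book's) disposes of 24,288 almost immediately because its small divisors fall out quickly, whereas for 24,287 it has to test every candidate up to 149.

    def trial_division(n):
        """Return the prime factors of n by trial division (fine for small n)."""
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:    # strip out each divisor as soon as it is found
                factors.append(d)
                n //= d
            d += 1
        if n > 1:                # whatever is left over is itself prime
            factors.append(n)
        return factors

    print(trial_division(24287))   # [149, 163] -- no divisor found until 149
    print(trial_division(24288))   # [2, 2, 2, 2, 2, 3, 11, 23] -- small factors fall out fast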

Now imagine how difficult it would be to factor a 250-digit number that was the product of two large primes. It turns out that even with today's most advanced supercomputers, it's very unlikely that we would ever be able to solve such a problem -- with the fastest known classical algorithm, it would take longer than the age of the universe. Yet a quantum computer using Shor's algorithm, if ever such a machine could be built, could crack the problem in seconds or minutes because it would be able to compute simultaneously along as many as 10^500 or more different pathways. As Deutsch argues in his book The Fabric of Reality, such an unimaginably large number presents believers in a single universe with a gargantuan problem:

There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?

For Deutsch the only answer that makes any sense is that different parts of the calculation are performed in different universes, all 10^500 of them. Furthermore, the existence of Shor's algorithm is powerful testimony even though it's never actually been run on any hardware. "That argument -- 'Where was it done?' -- is already valid today even before we have ever built a factorization engine," Deutsch said. "We can look in theory at the design of the machine, never mind whether we can actually build it. To me it's no more convincing for somebody to come and tell me, 'Well, look we've factored this number,' than to look at the equations that say the machine would factor the number if you could only build it."

To understand nature in its entirety, Deutsch believes, we must accept the existence of an almost limitless number of universes. We normally experience reality only within one of those universes. But in certain kinds of quantum events, parallel universes can differentiate themselves from one another and, under special circumstances, can later interfere with one another to produce effects that according to him are impossible to explain if you cling to the idea of a single universe.

Despite Deutsch's claims for the manifest truth of the many-universes interpretation, the majority of other physicists remain unpersuaded. They would claim that it is possible to accept the idea of quantum parallelism without buying into the concept of many universes. The more neutral language physicists prefer is to say that quantum states occupy a vast, abstract, multidimensional space known as Hilbert space, named after the German mathematician David Hilbert.

This space is not space in the conventional sense but a space of "states." The classical-state space of an object such as a ball is a kind of map that describes all of its possible states and motions, including its trajectory through the air if it is thrown and its rotation about its own axis if it is given some spin. Although classical-state space is unbounded spatially, it is severely curtailed by the fact that time flows in only a single direction -- along a single vector. As the physicist Roger Penrose has put it, "Hilbert space becomes that same universe, with time and every other possible vector flowing in all possible directions." That extra freedom pays big dividends and is the reason a quantum computer would be able to compute certain things very quickly: because it has access to an infinitely larger space -- Hilbert space -- than a classical machine. This raises the puzzling question of why classical computers and the rest of the classical world in which we apparently live are confined to such a minuscule aspect of the full quantum reality.

Let's put these philosophical worries to one side for the time being. Although Deutsch developed his ideas about quantum computers through his explorations of physics, we're going to take an excursion down into the depths of the quantum world via a less precarious route. On the way, we'll get our first glimpse of what a quantum computer might be like. Don't worry too much about understanding everything first time around, because we'll be exploring this strange and unfamiliar terrain in greater detail in subsequent chapters. For now, just sit back and enjoy the ride.

The End of Moore's Law?

Over the past four decades, ever since integrated circuits were invented, the number of switches or transistor devices crammed onto individual silicon chips has increased at a fiendish rate. In fact, it has more or less doubled every eighteen months, which explains why computers become obsolete at such an alarming speed. This doubling trend was first spotted back in 1964 by Gordon Moore, cofounder and chairman emeritus of Intel, and is widely known in the industry as Moore's law.

Moore's observation was based on just a few years' experience in producing the first integrated circuits, before microprocessors had even been invented. Yet his prediction has held up well. In the fall of 1997, to give an idea how far the industry had come, Moore informed an Intel developers' forum in San Francisco that Intel's entire production output for the year would amount to nearly 10^17 transistors. This figure, he pointed out, was comparable to an estimate by the famous Harvard biologist and ant expert, E. O. Wilson, of the number of ants living on Earth. "We're making one transistor for every ant," Moore said.

One of the reasons there's been this exponential increase in computing power is that with each device generation (which typically lasts about three and a half years), it's been possible to halve the feature size of individual components and hence to quadruple the component density. In addition, as circuits have become smaller, a lot of other things have improved at the same time: chips have become faster, increased in energy efficiency, and become more reliable. Costs too have plummeted, with prices per transistor dropping a millionfold in forty years. In the history of manufacturing there's been nothing else like it.

But it would be wrong to regard Moore's law as a sacrosanct law of nature written on tablets of silicon. There's no particular reason it has to remain true indefinitely, and in fact, growth has varied somewhat over the years: Moore originally stated his law as a doubling every year, but he changed it in the 1970s to a doubling every two years. (Now, when people refer to the law, they usually settle for something in between, i.e., a doubling every eighteen months.) Also, whenever we meet an exponential curve in real life, it invariably hits a wall at some point. It's like the old problem of how much money you would need to place on a chessboard if you put one penny on the first square, two on the second, four on the third, and so on, doubling each time until you reached the last square of the board. Initially, you might think you wouldn't need very much. But when you actually calculate the amount, it stacks up to 10^19 pennies, or $100 quadrillion ($100,000,000,000,000,000), more than the combined wealth of all the civilizations in the history of the world.
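
The chessboard total is quick to verify; the short check below (a sketch of mine, not from the book) simply sums the 64 doublings.

    # One penny on the first square, doubling across all 64 squares.
    total_pennies = sum(2 ** square for square in range(64))  # 1 + 2 + 4 + ... + 2**63
    print(total_pennies)         # 18446744073709551615, roughly 10**19 pennies
    print(total_pennies // 100)  # about 1.8 * 10**17 dollars -- on the order of $100 quadrillion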

When numbers are doubled at regular intervals, the results very quickly blow up beyond all reasonable bounds. So in the case of Moore's law it seems only likely that it will break down at some point. But prophesying the imminent demise of Moore's law has proved to be a loser's game because virtually since it was proposed people have said that there were obstacles just around the corner that would cause it to go off track. So far, all such predictions have turned out to be wrong.

In the late 1980s, for example, some industry commentators talked about the 1 micron barrier. They were referring to the potential difficulty of designing circuit elements smaller than a micron (1 micron is one-thousandth of a millimeter). The reason 1 micron was thought to be a barrier was that manufacturers of silicon chips were rapidly reaching the limits of resolution imposed by the wavelength of light. Silicon chip circuits are produced using a photographic technique in which a laser shines through a mask onto a silicon wafer covered with a photosensitive resist. For the same reason that you can't paint a picture through a pinhole with a brush larger than the pinhole, the smallest size of the components that can be produced in this way is limited by the wavelength of the laser light. Given that the wavelength of visible light is about 0.5 micron, a practical limit of 0.5-1 micron looked unavoidable.

However, chip manufacturers have been resourceful in squeezing that limit down, partly through improved focusing techniques and partly by using shorter-wavelength lasers. In 1999 manufacturers began production of 0.18 micron chips with the aid of lasers working in the ultraviolet region of the spectrum and contemplated further reductions in size down to 0.13 and 0.08 microns. Now people talk about the "point one" barrier.

In parallel with this reduction in size, the semiconductor business has encountered another trend, sometimes known as Moore's second law. This states that as the sophistication of chips goes up, the cost of the manufacturing plant doubles roughly every four years. That's why it now typically costs $2 billion to build a manufacturing plant. If you consider the two laws together, it doesn't take long to realize that something is going to have to give unless companies like Intel can grow exponentially, which seems unlikely. If costs continue rising exponentially, it seems inevitable that economics will cut into Moore's first law.

What do industry experts think will happen? Microsoft founder Bill Gates clearly didn't envisage economics curtailing the future rapid development of semiconductor chips when he predicted in 1995 in his book The Road Ahead that Moore's law would hold for another twenty years. On the other hand, in 1997 Stan Williams, who is in charge of fundamental physics research at Hewlett Packard, predicted trouble at about the year 2010 on both economic and physical grounds. Speaking at a conference on quantum computation held at the Royal Society in London, Williams said that by then individual transistors on chips will be turned on or off by as few as eight electrons, compared with about 500 today.

Sometime beyond 2010, Williams said, chip manufacturers will reach a point at which each transistor device will have shrunk to atomic proportions and will be switched on or off by a single electron. Clearly, it will not be possible to subdivide the components any further at this point, so this stage will represent a physical limit. Given this physical limit and the colossal cost of building chip manufacturing plants, Williams predicted that around the year 2010 progress will slow as we enter the quantum era.

However, there may still be ways in which the rule of Moore's law could be maintained. In 1997 Ed Fredkin, a former MIT physics professor and a leading thinker on the fundamental aspects of computing, gave a talk about Moore's law in which he argued that it would continue for 100 years because chip manufacturers will be able to extend their devices into three dimensions. At the moment, they make use only of the surface of silicon chips. If they could stack circuits on top of one another, then they would have a whole new dimension to move into. By this reasoning, Fredkin predicted that sometime in the twenty-first century we would see a CPU with an Avogadro's number of processing elements (approximately 10^24). Avogadro's number refers to the number of molecules in a "mole" of a substance, which in Avogadro's day meant some 22 liters of gas but today is defined by the number you get in a small chunk of carbon -- 12 grams, to be precise. It is a useful indication of the huge number of molecules you get in a macroscopic amount of matter. Fredkin, therefore, anticipated the time when nearly all the atoms or molecules in a lump of matter can be put to worthwhile computational use.

Such a device would be unimaginably powerful: As a memory device alone it would be able to store 100 trillion copies of the Encyclopaedia Britannica or a million copies of the entire contents of the Library of Congress. As a processor, apart from opening up new possibilities in artificial intelligence, such a device might be used, Fredkin suggested, for ultrahigh-resolution simulations of the weather, the Earth, the oceans, and even societies.

During Fredkin's lecture he cited an article that had appeared with perfect timing on the front page of that morning's International Herald Tribune. INTEL TRASHES AN AXIOM OF THE COMPUTER AGE, the headline exclaimed. The story was about the possible cessation of Moore's law -- but not because Intel had seen the end of the road. No, the company had found a way to beat Moore's law.

The story featured Intel's development of a new kind of flash memory chip that could store not one but two bits of information on each transistor, potentially doubling the storage capacity of Intel's chips at a stroke. Meanwhile, IBM announced that it had found a way of using copper instead of aluminum as the "wire" connecting transistors on silicon chips. Because of copper's superior properties as an electrical conductor, its use in chip manufacturing would help ameliorate the problem of delivering enough power to the transistors as they diminish in size.

In 1999 there was further excitement when researchers at Hewlett Packard and UCLA announced that they had discovered a way of configuring tiny circuits assembled from a single molecular layer of a chemical known as rotaxane. The rotaxane molecules can operate as switches and offer the possibility of a further significant leap in miniaturization.

Whether these developments will really enable chip manufacturers to overtake Moore's law remains to be seen, but they will certainly help keep the technology motoring forward for the next few years. Even so, by around 2010 chip manufacturers look set to enter the twilight zone of the atomic realm, whereupon they will find themselves increasingly contending with the laws of quantum physics.

From Bill Gates to Quantum Gates

What differences will arise as semiconductor manufacturers encroach upon the quantum realm? Researchers already have some clues from the results of experiments on so-called nanocircuits. Special fabrication techniques have made possible the construction of semiconductor devices in which electrons are confined to atomically thin layers, narrow wires, and tiny blobs known as quantum dots. One important finding is that it's proved possible to make transistors that switch on or off according to the presence or absence of a single electron. There's keen industrial interest in the idea of the single-electron transistor (SET) because it could open the way to ultra high-density memory chips.

Although the SET works at the quantum level by responding to an individual electron, its properties are still basically classical. The flow of electrons in an electric current passing through a conventional semiconductor can also be regarded as a classical phenomenon, rather like the diffusion of molecules in a gas. As the electrons travel through a crystal of silicon, for example, they are frequently scattered by the atoms within the lattice, causing them to behave like particles. However, as circuits are made smaller, electrons travel shorter distances and are subject to less disturbance. In these circumstances the wavelike aspects of electrons can begin to exert their influence.

These wavelike properties are directly exploited to switch on and off another nanoscale device, known as the quantum interference transistor, or QUIT. This too is the subject of intense research interest. At the moment, though, these devices are a long way from being practical -- not only are they tricky to make but they also have to be cooled to liquid helium temperatures. At room temperature the devices would be overwhelmed by "hot" electrons and would fail. It's possible, though, that the temperature problem could be overcome by making these devices smaller still. Despite their "nano" label, which refers to atomic scales of 1 nanometer -- a billionth of a meter -- these devices typically measure hundreds of atoms across.

Other problems will, doubtless, arise in building circuits on the atomic scale. One of the biggest will be supplying enough energy to all the switches without burning up the circuit. We'll explore that problem in the next chapter. But let's assume the obstacles can be overcome. Should we limit ourselves to imitating the switching behavior used in conventional computer chips? Are quantum transistors capable of doing anything other than simply switching on and off?

It turns out that quantum devices can indeed perform new tricks, which form the basis of quantum computation. The first novelty is known as superposition. This is connected with the idea that just as individual subatomic particles cannot necessarily be pinned down to one place, so individual logic states may not necessarily be tied to one value. To understand this, consider the digital nature of conventional computers.

Digital computers use binary logic switches, known as gates, to process information. Information is coded in strings of binary bits, each of which takes the value of 0 or 1. In a PC, a 0 is normally represented by a low voltage (usually 0 volts), and a 1 by a signal of typically a few volts. So information is carried by the pattern of high and low voltages pulsating across the circuit board. At any one time, all inputs and outputs will be in one or the other of these two states. No other values are allowed. As an example of a logic gate, consider one of the simplest, the NOT gate. This has one input and one output. If the input is 0, the output is 1, and if the input is 1, the output is 0. The gate simply reverses the value of the input, as its name implies.

Now consider an atom that has a single electron in its outermost orbit. This electron can be moved -- "excited" -- into a higher orbit by shining light of a particular frequency on it. The electron makes a quantum leap into a higher energy state. If this excited state is reasonably stable, we can use it and the atom's ground state to represent the numbers 1 and 0, respectively. If an excited atom is given a similar pulse, the electron drops back down into its ground state again, releasing its extra energy in the form of a photon of light. So the pulse of light flips the state whatever its value, performing, in effect, the action of a standard NOT gate.

But what happens if we shine light on an atom in its ground state for only half the time needed to excite it? Quantum mechanics dictates that in an atom the electron can occupy only one of a set of discrete energy levels that are spaced, albeit unevenly, like rungs on a ladder. So if there are no other energy levels between the ground state and the excited state, where can the electron go? As it turns out, the electron finds itself in both orbits simultaneously. The electron is said to be in a superposition of the ground and the excited states.

The term superposition comes from the study of wave phenomena. When waves of water, for example, come from different directions, their combined effect can be calculated by adding the waves together -- in other words, superposing one wave on the other. The superposition of electron states comes about because according to quantum mechanics all particles such as electrons have wavelike aspects. There are, in theory, an infinite number of different superpositions we can make, because when the light is shone for different lengths of time, the electron takes on a range of different superposition states.

Used in this way, the atom can store a single unit of quantum information, known as a quantum bit, or qubit. A qubit therefore differs from a conventional digital bit in that it can store values "intermediate" between 0 and 1. Superficially, a qubit might appear to be similar to a classical analog bit of information carried by an electrical signal that takes any value between the voltages representing 0 and 1. But there is a fundamental difference between a qubit and an analog bit: Whenever a measurement is made of a qubit, the answer can only be either 0 or 1 -- not some intermediate answer, as we would expect for an analog signal. This difference, as we will see, has profound consequences.

Classical digital bits are grouped together in computers to represent larger numbers. A two-bit register can represent any number between 0 and 3 because, together, the two bits can take the values 00, 01, 10, and 11, which are the base 2 representations for the numbers we know better as 0, 1, 2, and 3. Two qubits could similarly represent each of these values, just like ordinary bits. However, two qubits can also be put into a superposition of any or all of these states. Thus a quantum register of two qubits can represent the numbers 0, 1, 2, and 3 simultaneously. If we consider a register of eight qubits, we could obtain a superposition representing 2^8 = 256 numbers; a register of 1,000 qubits could represent 2^1,000 (approximately 10^300) numbers simultaneously. In contrast, a classical register of 1,000 bits could represent any of the integers between 0 and approximately 10^300, but only one at a time.
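
One way to make this counting concrete: the state of an n-qubit register is a list of 2^n complex amplitudes, one for each value from 0 to 2^n - 1, and the squared magnitude of each amplitude gives the probability of reading that value. The sketch below uses plain NumPy rather than any particular quantum-computing library, and the numbers are chosen only for illustration; it builds an equal superposition of the four values a two-qubit register can hold and shows that a measurement still returns just one of them.

    import numpy as np

    n_qubits = 2
    dim = 2 ** n_qubits                  # a 2-qubit register has 4 basis states: 0, 1, 2, 3

    # Equal superposition: every amplitude is 1/sqrt(4), so each value is equally likely.
    state = np.full(dim, 1 / np.sqrt(dim))

    probabilities = np.abs(state) ** 2   # Born rule: probability = |amplitude|^2
    print(probabilities)                 # [0.25 0.25 0.25 0.25]

    # "Measuring" the register collapses it to a single value drawn with those probabilities.
    outcome = np.random.choice(dim, p=probabilities)
    print(outcome)                       # one of 0, 1, 2, 3 -- never an in-between answer

Tracking the state of 1,000 qubits this way would require a list of 2^1,000 amplitudes, which is precisely why no classical machine can simulate a large quantum register directly.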

It is tempting to attribute the potentially enormous power of a quantum computer to this capacity to hold astronomically large numbers of values simultaneously in a relatively small number of qubits. Actually, this is only part of the story. Superposition is closely related to the quantum phenomenon of entanglement.

Entanglement is the ability of quantum systems to exhibit correlations between states within a superposition. If we have two qubits each in a superposition of 0 and 1, the qubits are said to be entangled if the measurement of one qubit is always correlated with the result of the measurement of the other qubit. According to some physicists it is specifically these entangled superpositions that open up extraordinary new possibilities in information processing. Entanglement, in this view, is seen as the secret to quantum computation rather than superposition per se.

To illustrate, imagine you were a mathematics student and were given a list of eight difficult mathematical problems to solve. You could do the task serially, by solving each problem in turn, but it would take you a long time. Alternatively, if you and seven fellow students agreed to share the problems you could solve one each. Such parallel processing would greatly speed up the task. Now suppose you asked each of your friends to write the answer to his or her problem on a small piece of paper, which would then be placed in a hat. After everyone had finished, the hat would contain all the answers. You might then pick a piece of paper at random out of the hat and examine the answer. You then realize the answers are meaningless because nobody has written the particular question he or she was solving. What use, after all, is an answer when you don't know the question that prompted it?

Similar reasoning applies to the quantum computer. If you had a superposition of numbers from 0 to 7 stored in a three-qubit register and performed a complicated series of operations on these numbers to do some mathematical calculation, you would be in a similar predicament. Reading off an answer would merely tell you one possible answer but not the number that generated it. Quantum entanglement, though, enables us to link quantum registers so that whenever an answer appears in one register, we can always look in the other to find out what number generated it. Without quantum entanglement, the quantum computer would be like the hat full of answers without the questions. With entanglement, it becomes more like a hat full of answers each carefully labeled with a note of the question.
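
The hat-with-labels picture can be mimicked numerically. The sketch below is a toy illustration, not Shor's algorithm; the function f and the register sizes are arbitrary choices of mine. It pairs an input register in superposition with an output register holding f(x), so that every sampled answer arrives together with the question that produced it.

    import numpy as np

    def f(x):
        return (x * x + 3) % 8           # an arbitrary toy function on a 3-qubit input

    n = 3
    dim = 2 ** n

    # Build the joint state sum over x of |x>|f(x)>: an 8 x 8 grid of amplitudes
    # with one nonzero entry per input value.
    joint = np.zeros((dim, dim))
    for x in range(dim):
        joint[x, f(x)] = 1 / np.sqrt(dim)

    probabilities = (np.abs(joint) ** 2).ravel()

    # Sampling the two registers never yields an unlabeled answer: whenever the
    # output register reads y, the input register reads an x with f(x) = y.
    index = np.random.choice(joint.size, p=probabilities)
    x, y = divmod(index, dim)
    print(x, y, f(x) == y)               # the last value is always True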

The implication of quantum entanglement is that we can perform mathematical operations on a potentially enormous superposition of numbers in parallel without requiring any extra circuitry for each part of the superposition. From a many-universes perspective, quantum computation affords the possibility of massive parallelism by taking advantage of parallel universes instead of parallel processors.

But there's something else we need in order to exploit the power of quantum computation, and it's known as interference. This phenomenon helps overcome a severe restriction imposed by quantum mechanics: We're not actually allowed to look at each and every answer individually. It is as if the quantum rules say that whenever we examine one piece of paper in the hat, we inevitably destroy the rest.

Interference arises from the fact that the wavelike aspects of quantum particles can overlap one another to cause unusual and distinctive patterns of behavior. In quantum computation, we use interference to read off a new result that depends mathematically on all those intermediate results without revealing what any of them are.

David Deutsch actually disagrees with those who ascribe the power of quantum computation to entanglement and instead sees interference as the crucial phenomenon. The reason is that in a many-universes perspective, entanglement is a natural by-product of having a system of parallel realities. The surprise is interference, because it allows these different realities to overlap and collaborate. It is, indeed, interference that makes the notion of multiple realities conjured up by such stories as Borges's "The Garden of Forking Paths" and, more recently, the movie Sliding Doors very different from the real thing.

We will see more precisely in Chapters 3 and 4 how entanglement and interference can be used together to solve problems far beyond the capabilities of any present-day computers.

The Hunter-Gatherers Take a Quantum Leap

To David Deutsch, the possibility of developing a new kind of superpowerful computer is certainly interesting, but it's very far from his overriding concern. What excites him is that quantum computing offers a totally new world view. "Building quantum computers is the least of it," he told me. "I'm not in this business in order to make better computers. Although I would still be fascinated by this work, I really wouldn't be interested in working on it if it weren't for the implications for physics. I want to understand physics fundamentally."

As we will see, there are formidable obstacles to the possibility of constructing a quantum computer. But there's nothing in the laws of physics that says it won't be possible, as far as we know. And as Deutsch stresses, you don't even have to build a quantum computer to be able to glimpse its darkest secrets. "What is really important about quantum computers is that they show us that there's a deep and unsuspected connection between physics and computation. Computation is connected to all sorts of human things like thought, knowledge, life, and so on, whereas physics is the most fundamental description of nature. So here we have an unsuspected, very deep connection between human-type things and fundamental-type things. I think philosophy is going to take a long time to assimilate this."

Deutsch made a substantial start in doing just that in his book The Fabric of Reality. In it he attempted to weave together four strands of scientific and philosophical thinking into an integrated whole. Quantum physics, evolution, computation, and knowledge are, he claims, all somehow inextricably linked. Much of his argument is predicated on the existence of multiple universes, but even people who reject that idea have been impressed by Deutsch's insights. It is an undeniably grandiose scheme, to which we will return.

There's another sense in which this story of the quest for the quantum computer is more than just a glimpse of a new technology. It is that, as Deutsch puts it in The Fabric of Reality, quantum computation is "nothing less than a distinctively new way of harnessing nature." At key points in human history, civilization took a leap forward because people discovered a new way of exploiting nature. Toolmaking, farming, the industrial revolution, and the information revolution were all triggered by the discoveries of new ways of manipulating nature. All of these advances transformed the way humans live. Quantum computation, Deutsch argues, could turn out to be as significant in its effects on human civilization.

On hearing Deutsch's grand vision for the future of quantum computation, I'm reminded of the scene in Stanley Kubrick's 2001: A Space Odyssey when those strange-looking apes discover for the first time the power of wielding a bone as a weapon. What a powerfully symbolic moment that was, with the future of humanity hanging in the balance between aggression and creativity. These hominids had taken their first step toward understanding and exploiting nature. Suddenly the picture cuts from a bone spinning in the air to a futuristic spinning space station orbiting the Earth. Modern humans had arrived and were now beginning their odyssey into space, all thanks to that first spark of imagination in those apes.

Should we attach portentous significance to the arrival of the first quantum computer -- should it ever happen? "People in the nineteenth century imagining the late twentieth century imagined a world of immensely sophisticated steam engines," Deutsch said. "They couldn't conceive of an information age and the Internet and all those kinds of things. Now, a computer is just a machine, but on the other hand it is more than just a machine because the essential part of a computer is not that it manipulates forces and energies -- although it does do that. The important thing is that it manipulates information. The quantum computer is a fundamental new step in that it doesn't just manipulate information, it allows different universes to cooperate. This is as fundamental a difference in principle as any of the previous steps. In fact, numerically it potentially gives much greater leverage and a much greater increase in power for human beings to do things than any of the previous steps. It must change the way we think about ourselves even if we never use it."

Arthur C. Clarke, who co-wrote the screenplay for 2001 and who also wrote The Sentinel, the short story from which the screenplay evolved, once commented that Kubrick had wanted a science fiction tale of "mythic grandeur." If Deutsch's late-night thoughts are right, Homo sapiens is in for another leap forward. Whether it will add up to something of mythic grandeur, though, is a question I hope you will find illuminated in this book.

Copyright © 2000 by Julian Brown

About The Author

Julian Brown is a science journalist specializing in physics and computing. His work has appeared prominently in New Scientist, and he has produced science specials for the BBC and the BBC World Service. With Paul Davies, he edited The Ghost in the Atom and Superstrings: A Theory of Everything.

Product Details

  • Publisher: Simon & Schuster (August 14, 2001)
  • Length: 400 pages
  • ISBN13: 9780684870045

Raves and Reviews

Ed Hinds, New Scientist: "A fascinating and entertaining read from start to finish."

Paul Davies, author of The Fifth Miracle: "A masterful piece of scientific exposition."

Lynn Yarris, San Jose Mercury News: "Exciting and educational...a road map for the future."
