How small is a bit?

Leaving behind the Y2K Bug and the problems of the last millennium, we look ahead to the future. Mark Harman introduces the subject of quantum computing, possibly the biggest revolution in computing since the invention of the transistor.

Gordon Moore, the co-founder of Intel, made the observation now known as Moore’s Law back in the 1960s. In its popular form, it states that the memory capacity of a silicon chip approximately doubles every 18 months. As a direct consequence, the amount of space occupied by a single bit is also shrinking, because the overall size of chips remains largely constant. In 1960 a single bit of information typically required about 10¹⁶ atoms to represent it. Today the figure is closer to 10⁷. Assuming this trend continues, then by 2020 we will have reached a stage where less than one atom is required to store a single bit of information.

What happens when we reach the one atom per bit barrier? When this happens we will be forced to enter the world of quantum mechanics: the often baffling but astonishingly successful world of sub-atomic physics. Based upon this new physics, we can implement computers that may execute exponentially faster than today’s classical computers and require exponentially less storage space in which to do so. Welcome to the world of quantum computing.

Quantum computing

Quantum computers are not like ordinary computers. They are built not on the nineteenth-century physics of electromagnetism, but on the twentieth-century physics of quantum mechanics. One of the most startling aspects of this new physics is that a particle can exist in many (apparently contradictory) states at the same time. In this ‘many states in one’ situation, the particle is said to exist in a ‘superposition’ of states. In computing terms, it is as if a boolean register could hold the values true and false simultaneously. It is this superposition of states that opens the possibility of superfast and supercompact computers based on the principles of quantum mechanics. These new quantum computers would so outperform today’s computers that the theoretical foundations of computation may need to be rewritten, and many existing algorithms may come to seem sluggish and inefficient.

When applied to the world of computing, quantum physics has bewildering but very exciting possibilities. Not only will we be able to read 2ⁿ different n-bit numbers into n bits, we shall be able to process all 2ⁿ bit combinations at once (this is called quantum parallelism). Obviously, if we can store such an enormous amount of information in such a small space, many of the rules of computing that we have grown comfortable with over the past fifty years will have to be abandoned, or at least revised.
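To get a feel for the scale involved, here is a minimal sketch in Python (my own illustration, and a purely classical simulation, not code for a quantum machine). Describing an n-bit quantum register classically takes 2ⁿ complex amplitudes, one for every bit combination, which is exactly why quantum parallelism is so hard to imitate on ordinary hardware.

```python
# A minimal sketch (my own illustration, not code for a real quantum machine):
# simulating an n-bit quantum register on a classical computer needs 2**n
# complex amplitudes, one for every possible bit combination.
import numpy as np

n = 10                              # number of quantum bits in the register
dim = 2 ** n                        # 2**n basis states: |00...0> up to |11...1>

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                      # start in the state |00...0>

# An equal superposition gives every one of the 2**n bit patterns the same
# amplitude, 1/sqrt(2**n).
state[:] = 1.0 / np.sqrt(dim)

probabilities = np.abs(state) ** 2  # Born rule: probability of each outcome
print(dim, probabilities.sum())     # 1024 amplitudes; probabilities sum to 1
```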

Of course, there is a catch. Although we can process all 2ⁿ bit combinations at once, we shall only be able to inspect a single answer from our computation. Worse, we cannot be exactly sure which answer we will get. There is an uncertainty at the heart of quantum physics that we shall have to tame or circumvent in order to exploit the wonderful potential of this new computing technology. There is also the problem of interaction between our computer and the outside world: quantum computers are very fragile. Even the exchange of a single particle with the environment could destroy the entire computation.

Despite all of these drawbacks, there is a real and growing hope that quantum computing may prove to be a realistic means of gaining dramatic improvements in the power of computers, improvements that would make today’s finest machines appear to be bloated and boring vacuum-tube beasts, executing with the speed of geological plate tectonics.

Quantum computing is currently in its infancy. The only quantum computational device built so far is a simple NOT gate. A project is underway to implement a computer capable of factoring the number 15, with an expected completion date some time in the year 2000. As with the development of classical computation, the theory of quantum computing is far in advance of the practical development. This article takes a brief look at what could be achieved when the engineering catches up with the science.

The shock of quantum cryptography

In 1994 the world was forced to wake up to the potential of quantum computing, when Peter Shor showed that it was theoretically possible for a quantum computational algorithm to crack the world’s most secure encryption systems. All that is required is for the physicists to develop the quantum-level hardware.

Shor’s algorithm is by no means the only application of quantum computing. Algorithms have been developed for more prosaic applications, such as fast, massively parallel database searches. The importance of Shor’s algorithm was its potential impact on secret and secure communication, because of its implications for public key encryption systems.

Public key encryption systems form the basis for most of the secure communications we rely upon today. These systems are vital for privacy and are obviously important as underpinning technology for the growing e-commerce industry. Public key encryption relies upon the fact that factoring a large number is extremely hard. For example, in a recent contest it was calculated that successfully decrypting a message encrypted with a key 129 digits long would take a machine executing a million instructions per second about 5,000 years. If we wait for a computer to crack an encrypted message by finding factors, then the information we get might be a little out of date!

Because factoring is believed to be so hard (even with the most powerful computer) many people felt completely comfortable sending encrypted messages. The factorisation of a number can be achieved (using a little number theory) by finding the period of a related periodic function. Of course, all classical attempts to solve the problem of finding the period of this function turn out to be as computationally demanding as the original problem of factoring. Shor’s algorithm proposes the use of Fourier transforms and quantum parallelism to find the function’s period, and it does so exponentially faster than any known classical algorithm. Secure communication is no longer so secure.
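To make this reduction a little more concrete, here is a toy Python sketch of my own, with the period found by brute force. If we can find the period r of f(x) = a^x mod N, then for many choices of a the greatest common divisor of a^(r/2) − 1 and N is a non-trivial factor. The quantum speed-up applies only to the period-finding step.

```python
# A toy sketch (mine, not Shor's algorithm itself): factoring N by finding the
# period of f(x) = a**x mod N.  Here the period is found by brute force; the
# point of Shor's algorithm is to replace only this step with a quantum-parallel
# Fourier transform.  Everything else is ordinary number theory.
from math import gcd
from random import randrange

def find_period(a, N):
    """Smallest r > 0 with a**r = 1 (mod N), found by brute force."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N):
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:                       # lucky guess: a already shares a factor
            return d, N // d
        r = find_period(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:              # avoid the useless case a**(r/2) = -1 (mod N)
                p = gcd(y - 1, N)
                if 1 < p < N:
                    return p, N // p    # a non-trivial factor and its cofactor

print(factor(15))                       # (3, 5) or (5, 3)
```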

Shor’s algorithm has been investigated by simulating its execution on a classical computer. Of course, this takes a lot longer than it would on a quantum computer, but it provides quantum computer scientists with a technique for analysing the behaviour of the algorithm. An implementation of Shor’s algorithm on a real physical quantum computer is still some way off. As mentioned, work is underway to try to implement a specialised quantum computer that will use Shor’s algorithm to factor the number 15. If successful, this will not cause an immediate headache for cryptographers (who typically rely on numbers with many more digits), but it could signal the beginning of the end for public key encryption.

Shor’s algorithm presents a worry for those who rely on existing approaches to encryption, but quantum communication offers a possible remedy. Because a quantum-level piece of information cannot be inspected without disturbing it, it is possible to imagine ‘quantum communication channels’ on which it is impossible to eavesdrop undetected. On these secure quantum channels, even an (unsuccessful) attempt to eavesdrop on a communication will be detectable by both the sender and the recipient of the message, making spying not only futile but also rather dangerous.

Quantum computers thus offer both a severe headache for secure private communication and the ultimate solution to the problem. The severe headache comes from Shor’s algorithm. The pain relief may be offered by the panacea of quantum communication.

An experiment

Here is a quantum experiment that you can try at home. Quantum computers are based on qubits. A qubit exists in a superposition of states between zero and one, but collapses to a single state, either zero or one, when measured. It is hard to imagine how a single bit can exist in a superposition of states. However, there’s a very simple experiment that you can try, which illustrates the key principle of state superposition. All you will need is a strong light source (say a powerful torch) and three polarisation filters (you can get these from a camera shop).

Let’s call the filters A, B, and C, with filter A polarised horizontally, B polarised at 45°, and C polarised vertically. You can achieve the different angles of polarisation by holding the filters at different angles. Photons of light also have a polarisation, so we can use the polarisation of the photons to represent a bit of information. Suppose we choose to represent bit value zero by horizontal polarisation and bit value one by vertical polarisation. A 45° polarisation represents a bit of information that has a 50:50 probability of being either value zero or one; it is somehow both one and zero at the same time and in equal measures.
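As an aside, here is a small Python sketch of my own (not part of the original experiment) that writes these three polarisations as two-component state vectors; the probability that a photon passes a filter is then the squared overlap between the photon’s state and the filter’s axis.

```python
# A sketch of my own (not part of the original experiment): the three
# polarisations as two-component state vectors.  The probability that a photon
# passes a filter is the squared overlap between its state and the filter's axis.
import numpy as np

horizontal = np.array([1.0, 0.0])                   # bit value zero
vertical   = np.array([0.0, 1.0])                   # bit value one
diagonal   = (horizontal + vertical) / np.sqrt(2)   # 45 degrees: equal superposition

def pass_probability(photon_state, filter_axis):
    """Probability that a photon in photon_state passes an ideal filter."""
    return float(np.dot(filter_axis, photon_state) ** 2)

print(pass_probability(diagonal, horizontal))   # 0.5 -> read as zero half the time
print(pass_probability(diagonal, vertical))     # 0.5 -> read as one half the time
print(pass_probability(horizontal, vertical))   # 0.0 -> a zero is never read as one
```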

The next step is to shine a beam of light onto a screen and put filter A between source and screen. You might expect the light beam to be reduced dramatically, because the incoming photons will be in a jumble of randomly selected polarisations, with only a very few being exactly horizontal. In fact the beam is not dramatically reduced, but simply halved in intensity. The filter acts as a measurement of the photons, each of which is in a superposition of polarisation states. In measuring the incoming photon states, the filter collapses these states to either horizontal polarisation (those which pass through and hit the screen) or vertical polarisation (those which are blocked by the filter and do not hit the screen). This situation is illustrated in Figure 1.

We can check that those photons that pass through the filter are indeed horizontally polarised by putting filter C between filter A and the screen (see Figure 2). Filter C is vertically polarised, so it blocks all horizontally polarised photons. As we would expect, the addition of filter C prevents any light from reaching the screen.

So far, it is possible for us to imagine that all the incoming photons are simply in one of two states: zero-valued (horizontal polarisation) and one-valued (vertical polarisation). In this view of the situation, the polarisation filters simply act like conventional filters. Filter A only lets zeros through and filter C only lets ones through. Can you predict the effect of adding filter B in-between A and C? You would expect it to have no effect. That is, because no light can pass through the sequence of filters A followed by C, none could possibly pass through the sequence A, B, C. How could the addition of another filter increase the amount of light that passes through? Try it. You will find that inserting filter B between A and C increases the amount of light that reaches the screen from nothing to about one eighth of the original light beam intensity. Clearly, this astounding result cannot be explained by assuming that the photons coming out of the light source are either zero-valued or one-valued. What is going on?

In order to understand the result of the experiment we have to view the filters as measuring devices that change the state of the photons they measure. We also have to think in terms of probabilities. Each filter measures the incoming light with respect to its own polarisation, which we can think of as the ‘basis’ with respect to which the measurement is taken.

Filter A measures photons with respect to the zero basis. It changes the state of the photons it measures, so that those that pass through have zero probability of being measured as one-valued by a one-basis filter (filter C, in our case). Therefore, the sequence of filters A followed by C cuts out all the light.

However, we have not said what effect filter A has on the probability that a photon will be measured as being 45° polarised (that is, being in the both-zero-and-one state) by filter B. In fact this probability is exactly 50:50, so half the light passing through filter A will also pass through filter B. Filter B, in turn, changes the state of the photons so that those that emerge have a 50:50 chance of being measured as one-valued by a one-basis filter (filter C in our case), so half the light passing through filter B now passes through filter C. The overall effect of the three filters is therefore that each halves the intensity of the incoming light. The situation is illustrated in Figure 3.
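Chaining these probabilities together reproduces the one-eighth figure from the experiment. The following continuation of the earlier sketch (again my own, with ideal filters assumed) treats each filter as a measurement that collapses the photon onto that filter’s axis.

```python
# Continuing the earlier sketch (ideal filters assumed): each filter measures the
# photon and collapses it onto the filter's own axis, so the transmitted fraction
# is a product of squared overlaps.
import numpy as np

def axis(angle_degrees):
    """Polarisation axis at the given angle (0 = horizontal, 90 = vertical)."""
    theta = np.radians(angle_degrees)
    return np.array([np.cos(theta), np.sin(theta)])

def transmitted_fraction(filter_angles):
    """Fraction of an unpolarised beam surviving a sequence of ideal filters."""
    fraction = 0.5                      # the first filter halves unpolarised light
    state = axis(filter_angles[0])      # survivors are polarised along that filter
    for angle in filter_angles[1:]:
        filter_axis = axis(angle)
        fraction *= np.dot(filter_axis, state) ** 2    # squared overlap (Malus's law)
        state = filter_axis             # measurement collapses the state again
    return fraction

print(transmitted_fraction([0, 90]))      # A then C: effectively 0 -- no light
print(transmitted_fraction([0, 45, 90]))  # A, B, then C: 0.125 -- one eighth
```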

It seems that there is no way of escaping the fact that the photons that emerge from filter B in this experiment are in a state of being both zero-valued and one-valued, with equal probability. They are in a superposition of states. And there seems to be no escape from the conclusion that the very act of measuring the photons forces them to alter their state, collapsing it in a way that affects future measurements.

We could think about this philosophically. Quantum theory throws up very deep questions about the nature of matter, cause and effect, and measurement. Many eminent scientists have worried about this. Einstein famously remarked that he did not believe that ‘God plays dice with the universe’ (referring to the probabilistic nature of quantum mechanics). Compton noticed that one might infer from the equations of quantum mechanics that an action could precede its cause. However, this philosophising has yet to produce an intuitive model of the predictions of quantum mechanics. In fact, a unification of quantum mechanics and Einstein’s equally successful and somewhat counter-intuitive theory of general relativity remains elusive.

Despite these uncertainties and theoretical puzzles, the equations of quantum mechanics have been found to be successful in predicting the outcome of many an experiment like the one we have just carried out with the polarisation filters. There is a prevailing feeling that we should believe in these equations even if we don’t really quite understand them.

True randomness

Let’s look at another area for quantum computation: true randomness. You would think that generating random numbers is easy: you just call the system function random(). Generating a random number on a computer seems to be a pretty simple problem and, perhaps, one which is not really that important. This is a misconception on both counts. Most random number generators simply do not work. They do not generate truly random numbers. Instead, they generate pseudo-random numbers which, for many purposes, are sufficient for our needs, but which in some cases may produce surprising and undesirable results. For example, the 1960s RANDU routine for generating pseudo-random numbers was found to exhibit a correlation which means that successive triples of numbers produced by RANDU fall on just 15 equally spaced planes in 3D space.
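RANDU’s defect is easy to demonstrate. The Python sketch below uses the standard RANDU parameters (the code itself is mine, not anything from the article): consecutive outputs obey an exact linear relation, and it is this relation that pins the triples onto a small number of planes.

```python
# A sketch using the standard RANDU parameters (the code is mine): RANDU is the
# linear congruential generator x[k+1] = 65539 * x[k] mod 2**31.  Because
# 65539**2 = 6*65539 - 9 (mod 2**31), consecutive outputs obey an exact linear
# relation, which is what pins the triples onto a few planes.
def randu(seed, count):
    x = seed
    for _ in range(count):
        x = (65539 * x) % 2 ** 31
        yield x

values = list(randu(seed=1, count=1000))
for a, b, c in zip(values, values[1:], values[2:]):
    assert (c - 6 * b + 9 * a) % 2 ** 31 == 0   # x[k+2] = 6*x[k+1] - 9*x[k]
print('every consecutive triple satisfies the linear relation')
```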

Generation of random numbers by computer has been a problem for some time. On 1 June 1957, ERNIE (the Electronic Random Number Indicator Equipment) selected the first one-pound premium bond to receive a large prize, in a forerunner of the National Lottery. Premium bonds were designed to encourage the public to save, thereby reducing domestic consumption and inhibiting inflation. For the scheme to work, it was crucial that the public believed ERNIE to be fair in picking random numbers. ERNIE uses the frequency instability of a free-running oscillator to select a winning premium bond number each month. This use of a random physical phenomenon avoids the problems associated with algorithmic pseudo-random number generation.

Generating good random numbers is important because it forms the basis for large-scale statistical simulation used to model complex, dynamic systems, like the economy and the weather. Generating bad random numbers could bias the results of these simulations causing inaccuracies.

Furthermore, random numbers are used in one-time pad encryption and in probabilistic algorithms for solving hard problems that cannot be answered with 100% accuracy in a short time. These probabilistic algorithms are known as Monte Carlo (guaranteed to be quick but only probably correct) and Las Vegas (guaranteed to be correct, but may sometimes take an unacceptably long period to execute). Both Monte Carlo and Las Vegas algorithms rely crucially on good random number generation.
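To illustrate the distinction, here are two toy Python examples of my own (not taken from the article): a Monte Carlo primality test that is always fast but only probably right, and a Las Vegas search that is always right but takes a random amount of time.

```python
# Toy examples of my own (not from the article) contrasting the two flavours.
import random

def is_probably_prime(n, trials=20):
    """Monte Carlo (a Fermat test): always fast, but only probably correct --
    the occasional composite can slip through undetected."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False                # definitely composite
    return True                         # probably prime

def index_of(value, items):
    """Las Vegas: probe random positions until we hit the value.  The answer is
    always correct, but the running time is a matter of luck."""
    while True:
        i = random.randrange(len(items))
        if items[i] == value:
            return i

print(is_probably_prime(101), is_probably_prime(100))   # True False
print(index_of('a', ['b', 'a'] * 10))                    # an index that holds 'a'
```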

On a quantum computer, generating random numbers is not a problem, because the underlying hardware relies upon fundamental physical processes that are (according to quantum theory) genuinely random.

How to build a quantum computer

Much of the work on quantum computing is theoretical. For example, Shor’s algorithm has not been implemented on a real quantum computer, and there is no guarantee that the physical problems of building a computer to factor numbers in this way can be overcome. However, there are grounds for optimism in work currently under investigation using ion-trap and nuclear magnetic resonance (NMR) based approaches to the realisation of quantum computing.

In an ion trap, radio waves are used to create a sequence of electromagnetic fields that trap ions, creating a linear array of trapped ions – a qubit register, in effect. Each ion can be in a ground state or an excited state, and changes in state are achieved by firing photons at the ion from a laser. Ion traps were successfully used in 1995 to implement a simple quantum computational NOT gate, raising the hope that ion traps might prove successful as implementation mechanisms for more advanced devices.
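To see what a quantum NOT gate does mathematically, here is a short textbook-style sketch in Python (a description of the gate itself, not of the ion-trap hardware): the gate is the 2×2 matrix that swaps the amplitudes of the zero and one states, and applied to a superposition it flips both branches at once.

```python
# A textbook description (not a model of the ion-trap hardware): the quantum NOT
# gate is the 2x2 matrix that swaps the amplitudes of the zero and one states.
import numpy as np

NOT = np.array([[0, 1],
                [1, 0]], dtype=complex)

zero = np.array([1, 0], dtype=complex)           # the state |0>
one = np.array([0, 1], dtype=complex)            # the state |1>
both = (zero + one) / np.sqrt(2)                 # an equal superposition

print(NOT @ zero)   # [0, 1]: the gate turns |0> into |1>
print(NOT @ one)    # [1, 0]: ...and |1> into |0>
print(NOT @ both)   # unchanged: both branches are flipped at once
```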

In the NMR-based approach, a property of atomic nuclei called ‘spin’ is used as the basis for storing a qubit. Almost every element has an isotope whose nucleus possesses spin. A spin has an orientation that can exist only in one of several discrete levels. These levels correspond to energy levels of the atom and exist in a superposition until they are measured. If energy is supplied to the atom (in the form of radio waves) then the atom may move from one energy level to another. This happens only if the input energy is at just the right level, and when it does, the radio wave is absorbed. Using a series of radio wave pulses, we can thus work out which spin state the atom is in, and this forms the basis for measurement in an NMR-based quantum computer.

With both techniques, many problems remain to be solved, and it is not immediately obvious how the quantum phenomena that they exploit can be harnessed in the form of effective algorithms. However, quantum computer scientists are optimistic that the next ten years may see a breakthrough. After all, even a relatively small quantum computer could revolutionise public key encryption.

An embryonic state

Quantum computing is currently in an embryonic state, but since the mid 1990s interest has dramatically increased because of the theoretical development of several algorithms for superfast database searches, cracking hitherto uncrackable codes, and supercompact storage of data.

The realisation of these dreams is still some way off, but we cannot rely upon Moore’s Law to keep doubling the power of our classical computers every 18 months for ever. We are fast approaching the one-atom-per-bit crunch. At that moment, we will find ourselves in the world of quantum computing whether we like it or not. If current trends continue, this moment will arrive in approximately 10 to 15 years’ time. There is time to prepare and much to be gained.

Mark Harman is a lecturer in computer science at Goldsmiths’ College, where he works on program manipulation and testing using slicing and transformation. Mark Harman can be contacted by email at m.harman@gold.ac.uk, or by post to Mark Harman, Department of Mathematical and Computing Sciences, Goldsmiths’ College, University of London, New Cross, London SE14 6NW.

A brief history of quantum computing

Quantum computation was born in 1982, when Richard Feynman, the Nobel Prize-winning physicist, published an article speculating on the implications of quantum mechanics for the world of classical computation. Feynman’s inspired observations remained mere conjecture for three years, until David Deutsch showed how a quantum computational algorithm could be realised to compute a result faster than could be achieved using any possible classical computer.

Interest in quantum computation increased steadily throughout the late 1980s and early 1990s until 1994. In November 1994 Peter Shor, at AT&T Bell Labs, published his paper innocuously titled ‘Algorithms for Quantum Computation: Discrete Logarithms and Factoring’. This paper shocked and astounded computer scientists around the world. It contained the algorithm for factoring, based on Fourier analysis and quantum parallelism, that promised to turn the world of public key encryption on its head.

As we approach the millennium, the challenge for quantum computer scientists is to build a real quantum computer and to investigate new specialised algorithms (like Shor’s) that exploit this new technology. An additional goal that I have not had time to discuss in this article is that of providing a more general framework for computing with quantum mechanics, so that new specialised algorithms do not have to be developed for each problem.

 

Further reading

The field of quantum computing is young, but growing fast. There are a number of places you could look for further information.

General introductions to quantum computing can be found in the articles stored on the web pages of the Centre for Quantum Computing, hosted by Oxford University at http://www.qubit.org.

A short and very readable introduction, which focuses on Deutsch’s algorithm, is provided by Professor Robin Whitty of South Bank University. The URL is http://www.sbu.ac.uk/~whittyr/quantum/quantum.html.

The photon polarisation experiment described in this article is taken from the article An introduction to quantum computing for non-physicists by Eleanor Rieffel and Wolfgang Polak, which is available at http://xxx.lanl.gov/abs/quant-ph/, paper number 9809016.

Bernhard Ömer has developed a programming language in which quantum computing algorithms can be simulated in a 3GL-style notation. The system is available for free on any Linux platform and comes with a manual (in LaTeX and postscript) and an interpreter for QCL (Quantum Computer Language). The URL is http://tph.tuwien.ac.at/~oemer/qc/index.html.

There are a growing number of textbooks on quantum computing. Explorations in Quantum Computing (ISBN 038794768X) by Colin Williams and Scott Clearwater, published by Springer in 1997, contains a technical but highly readable account of quantum computing, its applications, and its implications. The book comes with a CD-ROM containing simulations of quantum computational algorithms that you can try using the Mathematica package, which runs on several platforms including PC, Unix, and Macintosh. You do not need to have Mathematica to use the example simulations, because a reduced ‘viewer’ program is included.

The prize-winning book The Fabric of Reality (ISBN 014027541X) by David Deutsch is a worthwhile purchase. Deutsch invented one of the first quantum algorithms, helping to illustrate the potential of quantum computation. His book is an attempt to interweave quantum physics and quantum computation with Darwinian evolution and Karl Popper’s theory of knowledge.

There’s an Internet discussion group based on Deutsch’s book. You can subscribe by sending an email containing the word subscribe to the email address FOR-request@list.to.

 

(P)1999, Centaur Communications Ltd. EXE Magazine is a publication of Centaur Communications Ltd. No part of this work may be published, in whole or in part, by any means including electronic, without the express permission of Centaur Communications and the copyright holder where this is a different party.

EXE Magazine, St Giles House, 50 Poland Street, London W1V 4AX, email editorial@dotexe.demon.co.uk