Science offers the boldest metaphysics of the age. It is a thoroughly human construct, driven by the faith that if we dream, press to discover, explain, and dream again, thereby plunging repeatedly into new terrain, the world will somehow come clearer and we will grasp the true strangeness of the universe. And the strangeness will all prove to be connected, and make sense.
– Edward O. Wilson
Information is physical.
– Rolf Landauer
What are the fundamental concepts of quantum computation and quantum information? How did these concepts develop? To what uses may they be put? How will they be presented in this book? The purpose of this introductory chapter is to answer these questions by developing in broad brushstrokes a picture of the field of quantum computation and quantum information. The intent is to communicate a basic understanding of the central concepts of the field, perspective on how they have been developed, and to help you decide how to approach the rest of the book.
Our story begins in Section 1.1 with an account of the historical context in which quantum computation and quantum information has developed. Each remaining section in the chapter gives a brief introduction to one or more fundamental concepts from the field: quantum bits (Section 1.2), quantum computers, quantum gates and quantum circuits (Section 1.3), quantum algorithms (Section 1.4), experimental quantum information processing (Section 1.5), and quantum information and communication (Section 1.6).
Quantum mechanics has the curious distinction of being simultaneously the most successful and the most mysterious of our scientific theories. It was developed in fits and starts over a remarkable period from 1900 to the 1920s, maturing into its current form in the late 1920s. In the decades following the 1920s, physicists had great success applying quantum mechanics to understand the fundamental particles and forces of nature, culminating in the development of the standard model of particle physics. Over the same period, physicists had equally great success in applying quantum mechanics to understand an astonishing range of phenomena in our world, from polymers to semiconductors, from superfluids to superconductors. But, while these developments profoundly advanced our understanding of the natural world, they did only a little to improve our understanding of quantum mechanics.
This began to change in the 1970s and 1980s, when a few pioneers were inspired to ask whether some of the fundamental questions of computer science and information theory could be applied to the study of quantum systems. Instead of looking at quantum systems purely as phenomena to be explained as they are found in nature, they looked at them as systems that can be designed. This seems a small change in perspective, but the implications are profound. No longer is the quantum world taken merely as presented, but instead it can be created.
This book provides an introduction to the main ideas and techniques of the field of quantum computation and quantum information. The rapid rate of progress in this field and its cross-disciplinary nature have made it difficult for newcomers to obtain a broad overview of the most important techniques and results of the field.
Our purpose in this book is therefore twofold. First, we introduce the background material in computer science, mathematics and physics necessary to understand quantum computation and quantum information. This is done at a level comprehensible to readers with a background at least the equal of a beginning graduate student in one or more of these three disciplines; the most important requirements are a certain level of mathematical maturity, and the desire to learn about quantum computation and quantum information. The second purpose of the book is to develop in detail the central results of quantum computation and quantum information. With thorough study the reader should develop a working understanding of the fundamental tools and results of this exciting field, either as part of their general education, or as a prelude to independent research in quantum computation and quantum information.
Structure of the book
The basic structure of the book is depicted in Figure 1. The book is divided into three parts. The general strategy is to proceed from the concrete to the more abstract whenever possible.
In Chapter 4 we showed that an arbitrary unitary operation U may be implemented on a quantum computer using a circuit consisting of single qubit and controlled-not gates. Such universality results are important because they ensure the equivalence of apparently different models of quantum computation. For example, the universality results ensure that a quantum computer programmer may design quantum circuits containing gates which have four input and output qubits, confident that such gates can be simulated by a constant number of controlled-not and single qubit unitary gates.
An unsatisfactory aspect of the universality of controlled-not and single qubit unitary gates is that the single qubit gates form a continuum, while the methods for fault-tolerant quantum computation described in Chapter 10 work only for a discrete set of gates. Fortunately, also in Chapter 4 we saw that any single qubit gate may be approximated to arbitrary accuracy using a finite set of gates, such as the controlled-not gate, Hadamard gate H, phase gate S, and π/8 gate. We also gave a heuristic argument that approximating the chosen single qubit gate to an accuracy ε required only Θ(1/ε) gates chosen from the finite set. Furthermore, in Chapter 10 we showed that the controlled-not, Hadamard, phase and π/8 gates may be implemented in a fault-tolerant manner.
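The flavor of this discrete approximation can be illustrated with a brute-force search. The numpy sketch below is an illustration of the idea only, not the book's construction and not the efficient Solovay–Kitaev procedure: it enumerates all words in H and T up to length 8 and keeps the one closest, up to global phase, to a target rotation (here Rx(π/4), chosen because it happens to be reachable exactly). The target, the maximum word length, and the distance measure are choices made for this example.

```python
import itertools
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
T = np.diag([1.0, np.exp(1j * np.pi / 4)])     # pi/8 gate

def dist(U, V):
    # global-phase-invariant distance between 2x2 unitaries:
    # zero iff U = e^{i phi} V for some phase phi
    return np.sqrt(abs(1 - abs(np.trace(U.conj().T @ V)) / 2))

# target: the rotation Rx(pi/4); the word H T H equals it up to global phase
th = np.pi / 4
target = np.array([[np.cos(th / 2), -1j * np.sin(th / 2)],
                   [-1j * np.sin(th / 2), np.cos(th / 2)]])

best_dist, best_seq = np.inf, ()
for n in range(9):                              # all H/T words of length <= 8
    for seq in itertools.product("HT", repeat=n):
        U = np.eye(2)
        for g in seq:
            U = (H if g == "H" else T) @ U      # apply gates left to right
        d = dist(U, target)
        if d < best_dist:
            best_dist, best_seq = d, seq
```

For generic target rotations the best achievable distance shrinks only as the allowed word length grows, which is the tension with fault tolerance that the discrete-set results resolve.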
Recent work in quantum information science has produced a revolution in our understanding of quantum entanglement. Scientists now view entanglement as a physical resource with many important applications. These range from quantum computers, which would be able to compute exponentially faster than classical computers, to quantum cryptographic techniques, which could provide unbreakable codes for the transfer of secret information over public channels. These important advances in the study of quantum entanglement and information touch on deep foundational issues in both physics and philosophy. This interdisciplinary volume brings together fourteen of the world's leading physicists and philosophers of physics to address the most important developments and debates in this exciting area of research. It offers a broad spectrum of approaches to resolving deep foundational challenges - philosophical, mathematical, and physical - raised by quantum information, quantum processing, and entanglement. This book is ideal for historians, philosophers of science and physicists.
Quantum cryptography (or quantum key distribution) is a state-of-the-art technique that exploits properties of quantum mechanics to guarantee the secure exchange of secret keys. This 2006 text introduces the principles and techniques of quantum cryptography, setting it in the wider context of cryptography and security, with specific focus on secret-key distillation. The book starts with an overview chapter, progressing to classical cryptography, information theory (classical and quantum), and applications of quantum cryptography. The discussion moves to secret-key distillation, privacy amplification and reconciliation techniques, concluding with the security principles of quantum cryptography. The author explains the physical implementation and security of these systems, enabling engineers to gauge the suitability of quantum cryptography for securing transmission in their particular application. With its blend of fundamental theory, implementation techniques, and details of recent protocols, this book will be of interest to graduate students, researchers, and practitioners in electrical engineering, physics, and computer science.
In recent years there has been a huge increase in the research and development of nanoscale science and technology. Central to the understanding of the properties of nanoscale structures is the modeling of electronic conduction through these systems. This graduate textbook provides an in-depth description of the transport phenomena relevant to systems of nanoscale dimensions. In this textbook the different theoretical approaches are critically discussed, with emphasis on their basic assumptions and approximations. The book also covers information content in the measurement of currents, the role of initial conditions in establishing a steady state, and the modern use of density-functional theory. Topics are introduced by simple physical arguments, with particular attention to the non-equilibrium statistical nature of electrical conduction, and followed by a detailed formal derivation. This textbook is ideal for graduate students in physics, chemistry, and electrical engineering.
The first realization that the validity of the quantum superposition principle in the Hilbert space describing a composite quantum system may give rise to fundamentally new correlations between the constituent subsystems came in the landmark 1935 paper by Einstein, Podolsky, and Rosen (EPR). There it was shown that the measurement statistics of observables in certain quantum states cannot be reproduced by assigning definite wavefunctions to the individual subsystems. It was in response to the EPR paper that Schrödinger, in the same year, coined the term entanglement (Verschränkung) to acknowledge the failure of classical intuition in describing the relationship between the “parts” and the “whole” in the quantum world:
Whenever one has a complete expectation catalog – a maximum total knowledge – a ψ function – for two completely separated bodies, …then one obviously has it also for the two bodies together. But the converse is not true. The best possible knowledge of a total system does not necessarily include total knowledge of all its parts, not even when these are fully separated from each other and at the moment are not influencing each other at all.
While Bell's strengthening of the original EPR-paradox setting and the subsequent experimental verification of Bell inequalities irreversibly changed the perception of entanglement from a property of counterintuitive “spookiness” to (beyond reasonable doubt) an experimental reality, the concept and implications of entanglement continue to be associated with a host of physical, mathematical, and philosophical challenges. In particular, investigation of entanglement in both its qualitative and quantitative aspects has intensified under the impetus of quantum information science (QIS).
Discussions of quantum-computational algorithms in the literature refer to various features of quantum mechanics as the source of the exponential speed-up relative to classical algorithms: superposition and entanglement, the fact that the state space of n bits is a space of 2^n states while the state space of n qubits is a space of 2^n dimensions, the possibility of computing all values of a function in a single computational step by “quantum parallelism,” or the possibility of an efficient implementation of the discrete quantum Fourier transform. Here I propose a different answer to the question posed in the title, in terms of the difference between classical logic and quantum logic, i.e., the difference between the Boolean classical event structure and the non-Boolean quantum event structure. In a nutshell, the ultimate source of the speed-up is the difference between a classical disjunction, which is true (or false) in virtue of the truth values of the disjuncts, and a quantum disjunction, which can be true (or false) even if none of the disjuncts is either true or false.
In the following, I will discuss the information-processing in Deutsch's XOR algorithm (the first genuinely quantum algorithm) and related period-finding quantum algorithms (Simon's algorithm and Shor's factorization algorithm). It is well known that these algorithms can be formulated as solutions to a hidden-subgroup problem. Here the salient features of the information-processing are presented from the perspective of the way in which the algorithms exploit the non-Boolean logic represented by the projective geometry (the subspace structure) of Hilbert space.
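Deutsch's XOR algorithm is small enough to simulate directly. The numpy sketch below is my illustration, not code from the chapter: it decides with a single oracle query whether f: {0,1} → {0,1} is constant or balanced, by preparing |0⟩|1⟩, applying Hadamards to both qubits, querying the oracle U_f |x,y⟩ = |x, y⊕f(x)⟩, and reading the first qubit after a final Hadamard.

```python
import itertools
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def oracle(f):
    # U_f |x, y> = |x, y XOR f(x)>, in the basis order |00>, |01>, |10>, |11>
    U = np.zeros((4, 4))
    for x, y in itertools.product([0, 1], repeat=2):
        U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1]).astype(float)  # |0>|1>
    state = np.kron(H, H) @ state                  # Hadamard both qubits
    state = oracle(f) @ state                      # one oracle query
    state = np.kron(H, np.eye(2)) @ state          # Hadamard the first qubit
    p1 = np.sum(np.abs(state[2:]) ** 2)            # P(first qubit reads 1)
    return "balanced" if p1 > 0.5 else "constant"
```

The phase kickback leaves the first qubit in |0⟩ for a constant f and |1⟩ for a balanced f, so a single measurement settles a global property of f that classically requires two evaluations.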
Indefinite causal structure poses particular problems for theory formulation since many of the core ideas used in the usual approaches to theory construction depend on having definite causal structure. For example, the notion of a state across space evolving in time requires that we have some definite causal structure so we can define a state on a space-like hypersurface. We will see that many of these problems are mitigated if we are able to formulate the theory in a formalism-local (or F-local) fashion. A formulation of a physical theory is said to be F-local if, in making predictions for any given arbitrary space-time region, we need only refer to mathematical objects pertaining to that region. This is a desirable property both on the grounds of efficiency and since, if we have indefinite causal structure, it is not clear how to select some other space-time region on which our calculations may depend. The usual ways of formulating physical theories (the time-evolving state picture, the histories approach, and the local-equations approach) are not F-local.
We set up a framework for probabilistic theories with indefinite causal structure. This, the causaloid framework, is F-local. We describe how quantum theory can be formulated in the causaloid framework (in an F-local fashion). This provides yet another formulation of quantum theory. This formulation, however, may be particularly relevant to the problem of finding a theory of quantum gravity. The problem of quantum gravity is to find a theory that reduces in appropriate limits to general relativity and quantum theory (including, at least, those situations in which those two theories have been experimentally confirmed).
The use of parameters to describe an experimenter's control over the devices used in an experiment is familiar in quantum physics, for example in connection with Bell inequalities. Parameters are also interesting in a different but related context, as we noticed when we proved a formal separation in quantum mechanics between linear operators and the probabilities that these operators generate. In comparing an experiment against its description by a density operator and detection operators, one compares tallies of experimental outcomes against the probabilities generated by the operators but not directly against the operators. Recognizing that the accessibility of operators to experimental tests is only indirect, via probabilities, motivates us to ask what probabilities tell us about operators, or, put more precisely, “what combinations of a parameterized density operator and parameterized detection operators generate any given set of parameterized probabilities?”
Here, we review and augment recent proofs that any given parameterized probabilities can be generated in very diverse ways, so that a parameterized probability measure, detached from any of the (infinitely many) parameterized operators that generate it, becomes an interesting object in its own right. By detaching a parameterized probability measure from the operators that may have led us to it, we (1) strengthen Holevo's bound on a quantum communication channel and (2) clarify a role for multiple levels of modeling in an example based on quantum key distribution. We then inquire into some parameterized probability measures generated by entangled states and into the topology of the associated parameter spaces; in particular we display some previously overlooked topological features of level sets of these probability measures.
In many situations, learning from the results of measurements can be regarded as updating one's probability distributions over certain variables. According to Bayesians, this updating should be carried out according to the rule of conditionalization. In the theory of quantum mechanics, there is a rule that tells us how to update the state of a system, given observation of a measurement result. The state of a quantum system is closely related to probability distributions over potential measurements. Therefore we might expect there to be some relation between Bayesian conditionalization and the quantum state-update rule. There have been several suggestions that the state change just is Bayesian conditionalization, appropriately understood, or that it is closely analogous.
Bub was the first to make the connection between quantum measurement and Bayesian conditionalization in a 1977 paper, using an approach based on quantum logic. The connection is renewed in discussions by Fuchs and also Jacobs in 2002, where again the analogy between the quantum state update and Bayesian conditionalization is pointed out. At the same time, Fuchs draws attention to a disanalogy – namely that there is an “extra unitary” transformation as part of the measurement in the quantum case. In this chapter, I will first review the proposals of Bub, Jacobs, and Fuchs. I will then show that the presence of the extra unitaries in quantum measurement leads to a difference between classical and quantum measurement in terms of information gain, drawing on results by Nielsen and Fuchs and Jacobs.
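The analogy is easy to check in the special case where the density operator is diagonal in the measurement basis: there the Lüders update rule reduces exactly to Bayesian conditionalization, and the disanalogy enters only through the extra unitaries. A toy numpy check follows; it is my illustration, with the prior and the measured event chosen arbitrarily.

```python
import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.4])      # classical prior over four outcomes
rho = np.diag(p).astype(complex)        # density operator, diagonal in the measurement basis

# projector onto the measured event E = {outcome 0, outcome 1}
P = np.diag([1.0, 1.0, 0.0, 0.0]).astype(complex)

# quantum state-update (Luders) rule: rho -> P rho P / tr(P rho P)
rho_post = P @ rho @ P / np.trace(P @ rho @ P).real

# classical Bayesian conditionalization on E: zero out ~E, renormalize
p_post = np.array([0.1, 0.2, 0.0, 0.0]) / (0.1 + 0.2)

# the diagonal of the updated state equals the conditioned distribution
```

A measurement with a nontrivial unitary after the projection would change the diagonal of rho_post, which is precisely the "extra unitary" disanalogy Fuchs points to.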
For Abner Shimony. Your influence on me goes well beyond physics. Knowing you and being close to you is one of the greatest privileges and pleasures in my life.
Introduction
Quantum mechanics is, without any doubt, a tremendously successful theory: it started by explaining black-body radiation and the photoelectric effect, it explained the spectra of atoms, and then went on to explain chemical bonds, the structure of atoms and of the atomic nucleus, the properties of crystals and the elementary particles, and a myriad of other phenomena. Yet it is safe to say that we still lack a deep understanding of quantum mechanics – surprising and even puzzling new effects continue to be discovered with regularity. That we are surprised and puzzled is the best sign that we still don't understand; however, the veil over the mysteries of quantum mechanics is starting to lift a little.
One of the strangest things microscopic particles do is to follow non-local dynamics and to yield non-local correlations. That particles follow non-local equations of motion was discovered by Aharonov and Bohm, while non-local correlations – which are the subject of this chapter – were discovered by John Bell and first cast in a form that has physical meaning, i.e., that can be experimentally tested, by Clauser, Horne, Shimony, and Holt. When they were discovered, both phenomena seemed to be quite exotic and at the fringe of quantum mechanics. By now we understand that they are some of the most important aspects of quantum-mechanical behavior.
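For the CHSH form of Bell's inequality these correlations are concrete enough to compute. The numpy sketch below is an illustration, not taken from the chapter: it evaluates the CHSH combination for a singlet pair at the standard measurement angles and recovers the Tsirelson value 2√2, exceeding the bound of 2 that any local hidden-variable model must obey.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

def obs(theta):
    # spin observable at angle theta in the x-z plane
    return np.cos(theta) * Z + np.sin(theta) * X

# singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    # correlation <psi| A(a) (x) B(b) |psi>; equals -cos(a - b) for the singlet
    return (psi.conj() @ np.kron(obs(a), obs(b)) @ psi).real

# CHSH combination at the standard angles a=0, a'=pi/2, b=pi/4, b'=3pi/4
S = E(0, np.pi/4) - E(0, 3*np.pi/4) + E(np.pi/2, np.pi/4) + E(np.pi/2, 3*np.pi/4)
# |S| = 2*sqrt(2) > 2: the quantum correlations violate the classical CHSH bound
```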
Quantum information science is about processing information by exploiting distinguishing features of quantum systems such as electrons, photons, and ions. In recent years a great deal has been promised in the domain of quantum information. In quantum computing it was promised that NP problems would be solved in polynomial time. In quantum cryptography it was claimed that protocols would have practically 100% security. At the moment it is too early to say anything definitive about the final results of this great project.
In quantum computing, a few quantum algorithms were developed and devices were created, “quantum pre-computers” with a few quantum registers. However, the difficulties can no longer be ignored. For some reason it has proved impossible to create numerous quantum algorithms applicable to a variety of problems. Up to now the whole project has rested on two or three types of algorithm, and among them only one, the algorithm for prime factorization, might be interesting for real-world applications. There is a general tendency to regard this situation with quantum algorithms as a temporary difficulty. But, as the years pass, one might start to think that something is fundamentally wrong. Developments in quantum hardware induce the same feelings: it seems that the complexity of creating a device with a large number N of quantum registers increases extremely non-linearly with increasing N. In quantum cryptography the situation is the opposite of that in quantum computing: there have been tremendous successes in the development of technologies for the production and transmission of quantum information, especially pairs of entangled photons.