This chapter details several optimization algorithms. The variational quantum eigensolver is presented, which finds the minimum eigenvalue of a given Hamiltonian. The chapter also includes extensive notes on performing measurements in arbitrary bases. After a brief introduction to the quantum approximate optimization algorithm, the chapter discusses the quantum maximum cut algorithm and the quantum subset sum algorithm in detail.
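The core idea of the variational eigensolver can be sketched in a few lines. The single-qubit Hamiltonian H = Z, the Ry ansatz, and the simple grid search below are illustrative assumptions, not the chapter's implementation; a real VQE drives a parameterized circuit with a classical optimizer.

```python
import numpy as np

# Minimal VQE-style sketch (illustrative assumptions, not the chapter's code):
# find the minimum eigenvalue of the single-qubit Hamiltonian H = Z
# by scanning the parameter of an Ry(theta) ansatz applied to |0>.
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def expectation(theta, H=Z):
    """<psi(theta)| H |psi(theta)> with |psi(theta)> = Ry(theta)|0>."""
    psi = ry(theta) @ np.array([1, 0], dtype=complex)
    return float(np.real(psi.conj() @ H @ psi))

thetas = np.linspace(0, 2 * np.pi, 201)
best = min(thetas, key=expectation)
print(expectation(best))  # -1.0, the minimum eigenvalue of Z
```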
The algorithms presented in this chapter were the first to establish a query complexity advantage for quantum algorithms. The list of algorithms includes the Bernstein-Vazirani algorithm, Deutsch's algorithm, and the Deutsch-Jozsa algorithm. Quantum oracles and their construction are introduced.
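The query advantage is easiest to see in Deutsch's algorithm: one oracle call decides whether a function on one bit is constant or balanced, where a classical algorithm needs two. The state-vector simulation below is my own illustrative sketch, not the book's code.

```python
import numpy as np

# Illustrative sketch of Deutsch's algorithm (not the book's code):
# a single oracle query decides whether f: {0,1} -> {0,1} is
# constant (f(0) == f(1)) or balanced (f(0) != f(1)).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def oracle(f):
    """U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron(np.array([1, 0]), np.array([0, 1]))  # |0>|1>
    state = np.kron(H, H) @ state                        # superposition
    state = oracle(f) @ state                            # one oracle query
    state = np.kron(H, I2) @ state                       # interfere
    p1 = np.sum(np.abs(state[2:]) ** 2)  # Pr[first qubit measures |1>]
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))  # constant
print(deutsch(lambda x: x))  # balanced
```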
Machine-readable humanity is an evocative idea, and it is this idea which Hanley et al. spell out and critically discuss in their contribution. They are interested in exploring the technological as well as the moral side of the meaning of machine-readability. They start by differentiating between various ways to collect (and read) data and to develop classification schemes. They argue that traditional top-down data collection (first the pegs, then the collection according to the pegs) is less efficient than more recent, dynamic forms of machine readability made possible by successive advances in data and predictive analytics (“big data”), machine learning, deep learning, and AI. Discussing the advantages as well as the dangers of this new way of reading humans, they conclude that we should be especially cautious vis-à-vis the growing field of digital biomarkers, since in the end they could not only endanger privacy and entrench biases but also obliterate our autonomy. Seen in this light, apps (like AdNauseam) that restrict data collection as a form of protest against behavioral profiling also constitute resistance to the inexorable transformation of humanity into a standing reserve: humans on standby, to be immediately at hand for consumption by digital machines.
This chapter lays out a more complete software framework, including a high-performance simulator. It discusses transpilation, a powerful compiler-based technique that allows seamless porting of circuits to other frameworks. The methodology further enables the implementation of key features found in quantum programming languages, such as automatic uncomputation and conditional blocks. An elegant sparse representation is also introduced.
A quantum walk algorithm is the quantum analog to a classical random walk with potential applications in search problems, graph problems, quantum simulation, and even machine learning. In this section, we describe the basic principles of this class of algorithms on a simple one-dimensional topology.
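The one-dimensional case can be simulated directly: a coin toss followed by a coin-conditioned shift, repeated each step. The ring topology, walker count, and Hadamard coin below are my own illustrative assumptions, not the section's implementation.

```python
import numpy as np

# Illustrative sketch (assumptions, not the section's code) of a
# discrete-time coined quantum walk on a ring of N positions.
N, STEPS = 41, 15
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

# state[c, x]: amplitude of coin state c (0 = move left, 1 = move right)
# at position x; the walker starts in the middle with coin |0>.
state = np.zeros((2, N), dtype=complex)
state[0, N // 2] = 1.0

for _ in range(STEPS):
    state = H @ state                  # toss the coin on every position
    state[0] = np.roll(state[0], -1)   # coin 0: step left
    state[1] = np.roll(state[1], 1)    # coin 1: step right

prob = np.sum(np.abs(state) ** 2, axis=0)  # position distribution
print(round(float(prob.sum()), 6))         # 1.0: the evolution is unitary
```

Unlike a classical random walk, whose distribution peaks at the origin, the quantum walk's probability mass spreads ballistically toward the edges of the reachable interval.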
This appendix collects some important facts about the normal distribution. These results are used throughout this book, and in particular in Chapters 4, 6, and 7.
In Chapter 1, we estimated the correlations of linear approximations by finding a suitable linear trail and applying the piling-up lemma, but this approach relied on an unjustified independence assumption. This chapter puts the piling-up lemma and linear cryptanalysis in general on a more solid theoretical foundation. This is achieved by using the theory of correlation matrices. Daemen proposed these matrices in 1994 to simplify the description of linear cryptanalysis.
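Under the independence assumption mentioned above, the piling-up lemma states that the correlation of an XOR of independent bits is the product of their individual correlations. The exact enumeration below, with bias values chosen by me for illustration, checks this numerically.

```python
from itertools import product

# Numerical check of the piling-up lemma (illustrative, not the book's code):
# for independent bits X_i with correlations c_i = 2*Pr[X_i = 0] - 1,
# the XOR X_1 ^ ... ^ X_n has correlation c_1 * ... * c_n.
def xor_correlation(probs_zero):
    """Exact correlation of the XOR of independent bits,
    given each bit's probability of being 0."""
    corr = 0.0
    for bits in product((0, 1), repeat=len(probs_zero)):
        p = 1.0
        for b, p0 in zip(bits, probs_zero):
            p *= p0 if b == 0 else 1 - p0
        corr += p if sum(bits) % 2 == 0 else -p
    return corr

probs = [0.7, 0.6, 0.55]                 # example biases (my choice)
lhs = xor_correlation(probs)
rhs = 1.0
for p0 in probs:
    rhs *= 2 * p0 - 1                    # 0.4 * 0.2 * 0.1 = 0.008
print(round(lhs, 6), round(rhs, 6))      # 0.008 0.008
```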
This brief chapter discusses the minimum mathematical background required to fully understand the derivations in this text. Basic familiarity with matrices and vectors is assumed. The chapter reviews key properties of complex numbers, the Dirac notation with inner and outer products, the Kronecker product, unitary and Hermitian matrices, eigenvalues and eigenvectors, the matrix trace, and how to construct the Hermitian adjoint of matrix–vector expressions.
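Two of the facts the chapter reviews can be verified numerically in a few lines; the matrices and vectors below are my own example values, not the text's.

```python
import numpy as np

# Numerical illustration (my example, not the chapter's) of two reviewed facts:
# 1) the Kronecker product factors: (A (x) B)(u (x) v) = (A u) (x) (B v)
# 2) the Hermitian adjoint reverses products: (A B)^dagger = B^dagger A^dagger
A = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X (unitary, Hermitian)
B = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli Z (unitary, Hermitian)
u = np.array([1, 2], dtype=complex)
v = np.array([3, -1], dtype=complex)

lhs = np.kron(A, B) @ np.kron(u, v)
rhs = np.kron(A @ u, B @ v)
assert np.allclose(lhs, rhs)             # Kronecker product factors

M = A @ B
assert np.allclose(M.conj().T, B.conj().T @ A.conj().T)  # adjoint reverses order
print("both identities hold")
```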
Steeves revisits empirical data about young people’s experiences on social media to provide a snapshot of what happens to the interaction between self and others when community is organized algorithmically. She then uses Meadian notions of sociality to offer a theoretical framing that can explain the meaning of self, other, and community found in the data. She argues that young people interact with algorithms as if they were another social actor, and reflexively examine their own performances from the perspective of the algorithm as a specific form of generalized other. In doing so, they pay less attention to the other people they encounter in online spaces and instead orient themselves to action by emulating the values and goals of this algorithmic other. Their performances can accordingly be read as a concretization of these values and goals, making visible the agenda of those who mobilize the algorithm for their own purposes.
In her contribution, Roessler is interested in what digitalization means for the concept of human beings: is there a specific, identifiable concept of the human that defies digitalization? A conceptual clarification, she argues, shows that a rather uncontested definition of a human being includes their vulnerability, their finiteness, and their rational self-consciousness. In a next step, she discusses the difference between robots and humans and engages with novels by Ian McEwan and Kazuo Ishiguro which imagine this difference between humans and robots. Finally, she argues that a world in which the difference between robots and humans was no longer recognizable would be an uncanny world in which we would not want to live.
This chapter discusses Grover's fundamental algorithm, which enables searching over a domain of N elements with O(√N) complexity. Several derivative algorithms and applications are discussed, including amplitude amplification, amplitude estimation, quantum counting, Boolean satisfiability, graph coloring, and quantum mean, median, and minimum finding.
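The O(√N) behavior is visible even in a tiny state-vector sketch: roughly π/4·√N rounds of oracle phase flip plus inversion about the mean concentrate the amplitude on the marked item. The domain size and marked index below are my own illustrative choices, not the chapter's example.

```python
import numpy as np

# Illustrative numpy sketch of Grover's search (not the chapter's code):
# amplify the amplitude of one marked item among N = 16.
N, marked = 16, 11
state = np.full(N, 1 / np.sqrt(N))               # uniform superposition

iterations = int(round(np.pi / 4 * np.sqrt(N)))  # ~ pi/4 * sqrt(N) = 3
for _ in range(iterations):
    state[marked] *= -1                          # oracle: flip marked phase
    state = 2 * state.mean() - state             # diffusion: invert about mean

print(int(np.argmax(state ** 2)), float((state[marked] ** 2).round(3)))
# -> 11 0.961: three queries find the marked item with high probability
```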
Quantum algorithms operate on inputs encoded as quantum states, and preparing these input states can be quite complicated. This chapter discusses the trivial basis and amplitude encoding schemes, as well as Hamiltonian encoding. It also discusses smaller circuits for two- and three-qubit states. The chapter then presents two of the most complex algorithms in this book: the general state preparation algorithm of Möttönen et al. and the Solovay–Kitaev algorithm for gate approximation. Beginners may decide to skip these two algorithms on a first read.
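The target of amplitude encoding can be stated in a few lines: a length-2^n classical vector, once normalized, defines the amplitudes of an n-qubit state. The data values below are my own example; this shows only the target state, not the Möttönen circuit construction that prepares it.

```python
import numpy as np

# Target of amplitude encoding (my example, not the chapter's construction):
# a length-2^n data vector is normalized to unit length and interpreted
# as the amplitude vector of an n-qubit state.
data = np.array([3.0, 1.0, 2.0, 4.0])        # classical input, n = 2 qubits
amplitudes = data / np.linalg.norm(data)     # unit-norm state vector
probabilities = amplitudes ** 2              # Born-rule measurement statistics
print(amplitudes.round(3), round(float(probabilities.sum()), 6))
```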
In the previous chapters, and in Chapters 4 and 6 in particular, we already encountered methods for testing hypotheses. We used these statistical tests to determine if a given empirical correlation corresponds to the real key, or to an incorrect key. This chapter takes a more systematic look at statistical testing and derives methods that are—in some particular sense—best possible.