Basic concepts in quantum computation

Qubits, gates and networks

Consider the two binary strings,

011,  111.

The first one can represent, for example, the number 3 (in binary) and the second one the number 7. In general three physical bits can be prepared in 2^3 = 8 different configurations that can represent, for example, the integers from 0 to 7. However, a register composed of three classical bits can store only one number at a given moment of time. Enter qubits and quantum registers:

A qubit is a quantum system in which the Boolean states 0 and 1 are represented by a prescribed pair of normalised and mutually orthogonal quantum states labeled as {∣0⟩, ∣1⟩} Sch95. The two states form a `computational basis' and any other (pure) state of the qubit can be written as a superposition α∣0⟩ + β∣1⟩ for some α and β such that ∣α∣^2 + ∣β∣^2 = 1. A qubit is typically a microscopic system, such as an atom, a nuclear spin, or a polarised photon. A collection of n qubits is called a quantum register of size n.

We shall assume that information is stored in the registers in binary form. For example, the number 6 is represented by a register in state ∣1⟩ ⊗ ∣1⟩ ⊗ ∣0⟩. In more compact notation: ∣a⟩ stands for the tensor product ∣a_{n−1}⟩ ⊗ ∣a_{n−2}⟩ … ∣a_1⟩ ⊗ ∣a_0⟩, where a_i ∈ {0, 1}, and it represents a quantum register prepared with the value a = 2^0 a_0 + 2^1 a_1 + … + 2^{n−1} a_{n−1}. There are 2^n states of this kind, representing all binary strings of length n or numbers from 0 to 2^n − 1, and they form a convenient computational basis. In the following, a ∈ {0, 1}^n (a is a binary string of length n) implies that ∣a⟩ belongs to the computational basis.
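For readers who like to check such conventions concretely, here is a small numpy sketch (our own illustration, with the most significant qubit written first, as in the text) confirming that ∣1⟩ ⊗ ∣1⟩ ⊗ ∣0⟩ is the computational-basis vector with index 6:

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# |1> (x) |1> (x) |0>, most significant qubit first
state = np.kron(np.kron(ket1, ket1), ket0)

# The single nonzero entry sits at index a = 2^2*1 + 2^1*1 + 2^0*0 = 6
index = int(np.argmax(state))
print(index)  # -> 6
```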

Thus a quantum register of size three can store individual numbers such as 3 or 7,

∣0⟩ ⊗ ∣1⟩ ⊗ ∣1⟩ ≡ ∣011⟩ ≡ ∣3⟩, 

∣1⟩ ⊗ ∣1⟩ ⊗ ∣1⟩ ≡ ∣111⟩ ≡ ∣7⟩, 

but it can also store both of them simultaneously: if we take the first qubit and, instead of setting it to ∣0⟩ or ∣1⟩, prepare the superposition $1/\sqrt{2}\left( |0\rangle +|1\rangle \right)$, then we obtain

$\frac{1}{\sqrt{2}}\left( |0\rangle +|1\rangle \right) \otimes |1\rangle \otimes |1\rangle \equiv \frac{1}{\sqrt{2}}\left( |011\rangle +|111\rangle \right) ,$

$\equiv \frac{1}{\sqrt{2}}\left( |3\rangle +|7\rangle \right) .$

In fact we can prepare this register in a superposition of all eight numbers -- it is enough to put each qubit into the superposition $1/\sqrt{2} \left( |0\rangle +|1\rangle \right) .$ This gives

$\frac{1}{\sqrt{2}}\left( |0\rangle +|1\rangle \right) \otimes \frac{1}{\sqrt{ 2}}\left( |0\rangle +|1\rangle \right) \otimes \frac{1}{\sqrt{2}}\left( |0\rangle +|1\rangle \right) ,$

which can also be written in binary as (ignoring the normalisation constant $2^{-3/2}$),

∣000⟩ + ∣001⟩ + ∣010⟩ + ∣011⟩ + ∣100⟩ + ∣101⟩ + ∣110⟩ + ∣111⟩, 

or in decimal notation as

∣0⟩ + ∣1⟩ + ∣2⟩ + ∣3⟩ + ∣4⟩ + ∣5⟩ + ∣6⟩ + ∣7⟩, 

or simply as

$\sum_{x=0}^{7}|x\rangle .$

These preparations, and any other manipulations on qubits, have to be performed by unitary operations. A quantum logic gate is a device which performs a fixed unitary operation on selected qubits in a fixed period of time and a quantum network is a device consisting of quantum logic gates whose computational steps are synchronised in time Deu89. The outputs of some of the gates are connected by wires to the inputs of others. The size of the network is the number of gates it contains.

The most common quantum gate is the Hadamard gate, a single qubit gate H performing the unitary transformation known as the Hadamard transform. It is defined as

$H=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix},\qquad H|x\rangle =\frac{1}{\sqrt{2}}\left( |0\rangle +(-1)^{x}|1\rangle \right) .$

The matrix is written in the computational basis {∣0⟩, ∣1⟩}; in circuit diagrams the gate is drawn as a box labelled H acting on a qubit in state ∣x⟩, with x = 0, 1.

And here is a network, of size three, which effects the Hadamard transform on three qubits. If they are initially in state ∣000⟩ then the output is the superposition of all eight numbers from 0 to 7.


If the three qubits are initially in some other state from the computational basis then the result is a superposition of all numbers from 0 to 7, but exactly half of them will appear in the superposition with the minus sign; for example, starting from ∣001⟩,

$|001\rangle \mapsto 2^{-3/2}\left( |0\rangle -|1\rangle +|2\rangle -|3\rangle +|4\rangle -|5\rangle +|6\rangle -|7\rangle \right) ,$

since the sign of ∣x⟩ is determined by the last bit of x.
In general, if we start with a register of size n in some state ∣y⟩ with y ∈ {0, 1}^n, then

$|y\rangle \mapsto 2^{-n/2}\sum_{x\in \{0,1\}^{n}}(-1)^{y\cdot x}|x\rangle ,$

where the product of y = (y_{n−1}, …, y_0) and x = (x_{n−1}, …, x_0) is taken bit by bit:

$y\cdot x = y_{n-1}x_{n-1}+\dots +y_{1}x_{1}+y_{0}x_{0}.$
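The sign rule can be verified directly by multiplying out the matrices; a short numpy check for n = 3 (our own illustration, with basis-state indices standing in for bit strings):

```python
import numpy as np

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H3 = np.kron(np.kron(H, H), H)       # Hadamard applied to each of the 3 qubits

for y in range(2 ** n):
    e_y = np.zeros(2 ** n)
    e_y[y] = 1.0
    out = H3 @ e_y
    for x in range(2 ** n):
        dot = bin(y & x).count("1")  # bitwise product y.x
        assert np.isclose(out[x], (-1) ** dot / 2 ** (n / 2))
print("sign rule (-1)^(y.x) verified for all y")
```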

We will need another single qubit gate -- the phase shift gate ϕ, defined as ∣0⟩ ↦ ∣0⟩ and ∣1⟩ ↦ e^{iϕ}∣1⟩, or, in matrix notation,

$\phi =\begin{pmatrix} 1 & 0\\ 0 & e^{i\phi } \end{pmatrix}.$
The Hadamard gate and the phase gate can be combined to construct the following network (of size four), which generates the most general pure state of a single qubit (up to a global phase),


Consequently, the Hadamard and phase gates are sufficient to construct any unitary operation on a single qubit.
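One way to see this is to multiply the four gates out numerically. The sketch below assumes the standard arrangement H, ϕ(2θ), H, ϕ(π/2 + φ), which produces cos θ∣0⟩ + e^{iφ} sin θ∣1⟩ up to a global phase; the exact size-four network in the original figure may differ in inessential details:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def phase(phi):
    """Single-qubit phase gate: |0> -> |0>, |1> -> e^{i phi}|1>."""
    return np.array([[1, 0], [0, np.exp(1j * phi)]])

theta, varphi = 0.7, 1.9                  # arbitrary target angles
ket0 = np.array([1, 0], dtype=complex)

# Network of size four: H, phase(2 theta), H, phase(pi/2 + varphi)
state = phase(np.pi / 2 + varphi) @ H @ phase(2 * theta) @ H @ ket0

target = np.array([np.cos(theta), np.exp(1j * varphi) * np.sin(theta)])
assert np.allclose(state, np.exp(1j * theta) * target)  # equal up to e^{i theta}
print("general single-qubit state prepared up to a global phase")
```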

Thus the Hadamard gates and the phase gates can be used to transform the input state ∣0⟩∣0⟩...∣0⟩ of the n qubit register into any state of the type ∣Ψ_1⟩∣Ψ_2⟩...∣Ψ_n⟩, where ∣Ψ_i⟩ is an arbitrary superposition of ∣0⟩ and ∣1⟩. These are rather special n-qubit states, called the product states or the separable states. In general, a quantum register of size $n>1$ can be prepared in states which are not separable -- they are known as entangled states. For example, for two qubits (n = 2), the state

α ∣00⟩ + β ∣01⟩ = ∣0⟩ ⊗ (α ∣0⟩ + β ∣1⟩)

is separable, ∣Ψ1⟩ = ∣0⟩ and ∣Ψ2⟩ = α∣0⟩ + β∣1⟩, whilst the state

α ∣00⟩ + β ∣11⟩ ≠ ∣Ψ_1⟩ ⊗ ∣Ψ_2⟩

is entangled (α, β ≠ 0), because it cannot be written as a tensor product.

In order to entangle two (or more) qubits we have to extend our repertoire of quantum gates to two-qubit gates. The most popular two-qubit gate is the controlled-NOT (C-NOT), also known as the XOR or the measurement gate. It flips the second (target) qubit if the first (control) qubit is ∣1⟩ and does nothing if the control qubit is ∣0⟩. The gate is represented by the unitary matrix

$C=\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 \end{pmatrix},\qquad C\,|x\rangle |y\rangle =|x\rangle |x\oplus y\rangle ,$


where x, y = 0 or 1 and  ⊕  denotes XOR or addition modulo 2. If we apply the C-NOT to Boolean data in which the target qubit is ∣0⟩ and the control is either ∣0⟩ or ∣1⟩ then the effect is to leave the control unchanged while the target becomes a copy of the control, i.e.

x⟩∣0⟩ ↦ ∣x⟩∣x⟩  x = 0, 1.

One might suppose that this gate could also be used to copy superpositions such as ∣Ψ⟩ = α ∣0⟩ + β ∣1⟩,  so that

∣Ψ⟩∣0⟩ ↦ ∣Ψ⟩∣Ψ⟩

for any ∣Ψ⟩. This is not so! The unitarity of the C-NOT requires that the gate turns superpositions in the control qubit into entanglement of the control and the target. If the control qubit is in a superposition state ∣Ψ⟩ = α∣0⟩ + β∣1⟩ (α, β ≠ 0), and the target in ∣0⟩, then the C-NOT generates the entangled state

(α∣0⟩ + β∣1⟩)∣0⟩ ↦ α∣00⟩ + β∣11⟩.
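A short numpy check (our own illustration) confirms both the map and the fact that the output is not a product state:

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

alpha, beta = 0.6, 0.8                  # real amplitudes with a^2 + b^2 = 1
control = np.array([alpha, beta])       # alpha|0> + beta|1>
target = np.array([1.0, 0.0])           # |0>

out = CNOT @ np.kron(control, target)
assert np.allclose(out, [alpha, 0, 0, beta])   # alpha|00> + beta|11>

# A product state reshapes to a rank-1 amplitude matrix; this one is rank 2
assert np.linalg.matrix_rank(out.reshape(2, 2)) == 2
print("superposed control -> entangled output")
```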

Let us notice in passing that it is impossible to construct a universal quantum cloning machine effecting the transformation above, or even the more general

∣Ψ⟩∣0⟩∣W⟩ ↦ ∣Ψ⟩∣Ψ⟩∣W′⟩, 

where W refers to the state of the rest of the world and ∣Ψ⟩ is any quantum state WZ82. To see this take any two normalised states ∣Ψ⟩ and ∣Φ⟩ which are non-identical (∣⟨Φ∣Ψ⟩∣ ≠ 1) and non-orthogonal (⟨Φ∣Ψ⟩ ≠ 0 ), and run the cloning machine,

∣Ψ⟩∣0⟩∣W⟩ ↦ ∣Ψ⟩∣Ψ⟩∣W′⟩, 

∣Φ⟩∣0⟩∣W⟩ ↦ ∣Φ⟩∣Φ⟩∣W′′⟩.

As this must be a unitary transformation which preserves the inner product, we must require

$\langle \Phi |\Psi \rangle =\langle \Phi |\Psi \rangle ^{2}\,\langle W^{\prime }|W^{\prime \prime }\rangle ,$

and this can only be satisfied when ∣⟨Φ∣Ψ⟩∣ = 0 or 1, which contradicts our assumptions. Thus states of qubits, unlike states of classical bits, cannot be faithfully cloned. This leads to interesting applications, quantum cryptography being one of them.

Another common two-qubit gate is the controlled phase shift gate B(ϕ) defined as

$B(\phi )=\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & e^{i\phi } \end{pmatrix}.$


Again, the matrix is written in the computational basis {∣00⟩, ∣01⟩, ∣10⟩, ∣11⟩}.

More generally, these various 2-qubit controlled gates are all of the form controlled-U, for some single-qubit unitary transformation U. The controlled-U gate applies the identity transformation to the auxiliary (lower) qubit when the control qubit is in state ∣0⟩ and applies an arbitrary prescribed U when the control qubit is in state ∣1⟩. The gate maps ∣0⟩∣y⟩ to ∣0⟩∣y⟩ and ∣1⟩∣y⟩ to ∣1⟩(U∣y⟩).


The Hadamard gate, all phase gates, and the C-NOT form an infinite universal set of gates, i.e. if the C-NOT gate as well as the Hadamard and all phase gates are available then any n-qubit unitary operation can be simulated exactly with O(4^n n) such gates BBC95. (Here and in the following we use the asymptotic notation -- O(T(n)) means bounded above by cT(n) for some constant $c>0$ for sufficiently large n.) This is not the only universal set of gates. In fact, almost any gate which can entangle two qubits can be used as a universal gate BDEJ95,Llo95. Mathematically, an elegant choice is the pair of the Hadamard and the controlled-V (C-V), where V is described by the unitary matrix

$V=\begin{pmatrix} 1 & 0\\ 0 & i \end{pmatrix},$

which acts on the basis states as


$$V|0\rangle=\begin{pmatrix} 1& 0 \\ 0 &i\\ \end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ \end{pmatrix}=\begin{pmatrix} 1 \\ 0 \\ \end{pmatrix}=|0\rangle$$

$$V|1\rangle=\begin{pmatrix} 1 &0 \\ 0 & i\\ \end{pmatrix}\begin{pmatrix} 0 \\ 1 \\ \end{pmatrix}=\begin{pmatrix} 0 \\ i \\ \end{pmatrix}=i|1\rangle$$
The two gates form a finite universal set of gates -- networks containing only a finite number of these gates can approximate any unitary transformation on two (and more) qubits. More precisely, if U is any two-qubit gate and $\varepsilon >0$, then there exists a quantum network of size $O(\log ^{d}(1/\varepsilon ))$ (where d is a constant) consisting of only H and C-V gates which computes a unitary operation U′ that is within distance ε from U Sol99. The metric is induced by the Euclidean norm -- we say that U′ is within distance ε from U if there exists a unit complex number λ (phase factor) such that $||U-\lambda U^{\prime }||\leq \varepsilon$. Thus if U′ is substituted for U in a quantum network then the final state $\sum_{x}\alpha _{x}^{\prime }|x\rangle$ approximates the final state of the original network $\sum_{x}\alpha _{x}|x\rangle$ as follows: $\sqrt{ \sum_{x}|\lambda \alpha _{x}^{\prime }-\alpha _{x}|^{2}}\leq \varepsilon$. The probability of any specified measurement outcome on the final state is affected by at most ε.

A quantum computer will be viewed here as a quantum network (or a family of quantum networks) and quantum computation is defined as a unitary evolution of the network which takes its initial state "input" into some final state "output". We have chosen the network model of computation, rather than Turing machines, because it is relatively simple and easy to work with and because it is much more relevant when it comes to physical implementation of quantum computation.

Quantum arithmetic and function evaluations

Let us now describe how quantum computers actually compute, how they add and multiply numbers, and how they evaluate Boolean functions by means of unitary operations. Here and in the following we will often use modular arithmetic HW79. Recall that

$a\bmod b$

denotes the remainder obtained on dividing the integer a by the integer b, which is always a number less than b. Basically, $a=b\bmod n$ if a = b + kn for some integer k. This is expressed by saying that a is congruent to b modulo n, or that b is the residue of a modulo n. For example, $1\bmod 7=8\bmod 7=15\bmod 7=50\bmod 7=1$. Modular arithmetic is commutative, associative, and distributive.

$(a\pm b)\bmod n =((a\bmod n)\pm (b\bmod n))\bmod n$

$(a\times b)\bmod n =((a\bmod n)\times (b\bmod n))\bmod n$

$(a\times (b+c))\bmod n =((a\,b)\bmod n+(a\,c)\bmod n)\bmod n$

Thus, if you need to calculate, say, $3^8\bmod 7$ do not use the naive approach and perform seven multiplications and one huge modular reduction. Instead, perform three smaller multiplications and three smaller reductions,

$((3^2\bmod 7)^2\bmod 7)^2\bmod 7 = (2^2\bmod 7)^2\bmod 7=16 \bmod 7= 2.$
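In code, this is the standard square-and-multiply routine (Python's built-in pow(base, exp, mod) does the same):

```python
def mod_exp(base, exponent, modulus):
    """Modular exponentiation with a reduction after every step."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                    # current binary digit of the exponent
            result = (result * base) % modulus
        base = (base * base) % modulus      # square ...
        exponent >>= 1                      # ... and move to the next digit
    return result

print(mod_exp(3, 8, 7))  # -> 2, matching ((3^2 mod 7)^2 mod 7)^2 mod 7
```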

This kind of arithmetic is ideal for computers as it restricts the range of all intermediate results. For an l-bit modulus n, the intermediate results of any addition, subtraction or multiplication will not be more than 2l bits long. In quantum registers of size n, addition modulo 2^n is one of the most common operations; for all x ∈ {0, 1}^n and for any a ∈ {0, 1}^n,

$|x\rangle\mapsto|(x+a)\bmod 2^n\rangle$

is a well defined unitary transformation.

The tricky bit in modular arithmetic is the inverse operation, and here we need some basic number theory. An integer a ≥ 2 is said to be prime if it is divisible only by 1 and a (we consider only positive divisors). Otherwise, a is called composite. The greatest common divisor of two integers a and b is the greatest positive integer d, denoted d = gcd(a, b), that divides both a and b. Two integers a and b are said to be coprime or relatively prime if gcd(a, b) = 1. Given two coprime integers a and n, it can be shown that there exists a unique integer d ∈ {0, …, n − 1} such that $a d=1\bmod n$ HW79. The integer d is called the inverse of a modulo n, and denoted a^{−1}. For example, modulo 7 we find that $3^{-1}=5 \bmod 7$, since $3 \times 5 = 15 = 2\times 7 +1=1 \bmod 7$. This bizarre arithmetic and the notation are due to Carl Friedrich Gauss (1777-1855), who first introduced them in his Disquisitiones Arithmeticae in 1801.
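In practice the inverse is found with the extended Euclidean algorithm, which produces s and t with as + nt = gcd(a, n); when the gcd is 1, s mod n is the inverse (a standard routine, shown here for illustration):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with a*s + b*t = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def mod_inverse(a, n):
    g, s, _ = extended_gcd(a, n)
    if g != 1:
        raise ValueError("a and n must be coprime")
    return s % n

print(mod_inverse(3, 7))  # -> 5, since 3 * 5 = 15 = 1 mod 7
```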

In quantum computers addition, multiplication, and any other arithmetic operation have to be embedded in unitary evolution. We will stick to the Hadamard and the controlled-V (C-V), and use them as building blocks for all other gates and eventually for quantum adders and multipliers.

If we apply C-V four times we get the identity, so any three subsequent applications of C-V give the inverse of C-V, which will be called C-V†. Now, if we have a couple of the C-V gates and a couple of the Hadamard gates we can build the C-NOT as follows
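The algebra behind the construction is easy to check numerically: two C-V gates make the controlled-Z gate diag(1, 1, 1, −1), and Hadamards on the target convert controlled-Z into C-NOT (a sketch assuming this arrangement, since the original diagram is not reproduced here):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H_target = np.kron(np.eye(2), H)        # Hadamard on the target (second) qubit

CV = np.diag([1, 1, 1, 1j])             # controlled-V with V = diag(1, i)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

network = H_target @ CV @ CV @ H_target
assert np.allclose(network, CNOT)
print("C-NOT = H * C-V * C-V * H on the target qubit")
```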





A single qubit operation NOT can be performed via a C-NOT gate if the control qubit is set to ∣1⟩ and viewed as an auxiliary qubit. This is not to say that we want to do it in practice. The C-NOT gate is much more difficult to build than a single qubit NOT. Right now we are looking into the mathematical structure of quantum Boolean networks and do not care about practicalities. Our two elementary gates also allow us to construct a very useful gate called the controlled-controlled-NOT gate (c2-NOT) or the Toffoli gate Tof81. The construction is given by the following network,



$|110\rangle\to|11\rangle{1\over \sqrt{2}}(|0\rangle+|1\rangle)\to|11\rangle{1\over \sqrt{2}}(|0\rangle+i|1\rangle)\to|10\rangle{1\over \sqrt{2}}(|0\rangle+i|1\rangle)\to|10\rangle{1\over \sqrt{2}}(|0\rangle+i|1\rangle)\to$ $\to|11\rangle{1\over \sqrt{2}}(|0\rangle+i|1\rangle)\to|11\rangle{1\over \sqrt{2}}(|0\rangle+i^2|1\rangle)=|11\rangle{1\over \sqrt{2}}(|0\rangle-|1\rangle)\to|111\rangle.$

$|111\rangle\to|11\rangle{1\over \sqrt{2}}(|0\rangle-|1\rangle)\to|11\rangle{1\over \sqrt{2}}(|0\rangle-i|1\rangle)\to|10\rangle{1\over \sqrt{2}}(|0\rangle-i|1\rangle)\to|10\rangle{1\over \sqrt{2}}(|0\rangle-i|1\rangle)\to$ $\to|11\rangle{1\over \sqrt{2}}(|0\rangle-i|1\rangle)\to|11\rangle{1\over \sqrt{2}}(|0\rangle-i^2|1\rangle)=|11\rangle{1\over \sqrt{2}}(|0\rangle+|1\rangle)\to|110\rangle.$
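Reading the gate sequence off the walkthrough above (H on the third qubit, C-V controlled by the second, C-NOT, C-V†, C-NOT, C-V controlled by the first, H), one can confirm numerically that it reproduces the Toffoli gate; the helper below and its qubit ordering (most significant first) are our own:

```python
import numpy as np

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
V = np.diag([1, 1j])                      # C-V applies V = diag(1, i)
X = np.array([[0, 1], [1, 0]])

def controlled(u, control, target):
    """Controlled-u on 3 qubits; qubit 0 is the most significant."""
    dim = 2 ** n
    m = np.eye(dim, dtype=complex)
    for idx in range(dim):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control] == 1:
            col = np.zeros(dim, dtype=complex)
            out_bits = list(bits)
            for t in (0, 1):              # apply u to the target bit
                out_bits[target] = t
                j = int("".join(map(str, out_bits)), 2)
                col[j] = u[t, bits[target]]
            m[:, idx] = col
    return m

H3 = np.kron(np.eye(4), H)                # H on the third qubit
seq = (H3 @ controlled(V, 0, 2) @ controlled(X, 0, 1)
          @ controlled(V.conj().T, 1, 2) @ controlled(X, 0, 1)
          @ controlled(V, 1, 2) @ H3)

toffoli = np.eye(8, dtype=complex)
toffoli[[6, 7]] = toffoli[[7, 6]]         # swap |110> and |111>
assert np.allclose(seq, toffoli)
print("H and C-V gates reproduce the Toffoli gate")
```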

This gate has two control qubits (the top two wires on the diagram) and one target qubit which is negated only when the two controls are in the state ∣1⟩∣1⟩. The c2-NOT gate gives us the logical connectives we need for arithmetic. If the target is initially set to ∣0⟩ the gate acts as a reversible AND gate - after the gate operation the target becomes the logical AND of the two control qubits.

$|x_{1},x_{2}\rangle |0\rangle \mapsto |x_{1},x_{2}\rangle |x_{1}\wedge x_{2}\rangle .$

Once we have in our repertoire operations such as NOT, AND, and C-NOT, all of them implemented as unitary operations, we can, at least in principle, evaluate any Boolean function {0, 1}^n → {0, 1}^m which maps n bits of input into m bits of output. A simple concatenation of the Toffoli gate and the C-NOT gives a simplified quantum adder, shown below, which is a good starting point for constructing full adders, multipliers and more elaborate networks.


We can view the Toffoli gate and the evolution above as a quantum implementation of a Boolean function f : {0, 1}^2 → {0, 1} defined by f(x_1, x_2) = x_1 ∧ x_2. The operation AND is not reversible, so we had to embed it in the reversible operation c2-NOT. If the third bit is initially set to 1 rather than 0 then the value of x_1 ∧ x_2 is negated. In general we write the action of the Toffoli gate as the function evaluation,

$|x_1,x_2\rangle|y\rangle\mapsto|x_1,x_2\rangle|(y+(x_1\wedge x_2))\bmod 2\rangle.$

This is how we compute any Boolean function {0, 1}^n → {0, 1}^m on a quantum computer. We require at least two quantum registers: the first one, of size n, to store the arguments of f, and the second one, of size m, to store the values of f. The function evaluation is then a unitary evolution of the two registers,

$|x,y\rangle \mapsto |x,(y+f(x))\bmod 2^{m}\rangle .$

for any y ∈ {0, 1}^m. (In the following, if there is no danger of confusion, we may simplify the notation and omit the $\bmod{}$ suffix.)

For example, a network computing f : {0, 1}^2 → {0, 1}^3 such that f(x) = x^2 acts as follows

∣00⟩∣000⟩ ↦ ∣00⟩∣000⟩,   ∣10⟩∣000⟩ ↦ ∣10⟩∣100⟩

∣01⟩∣000⟩ ↦ ∣01⟩∣001⟩,   ∣11⟩∣000⟩ ↦ ∣11⟩∣001⟩

which can be written as

$|x,0\rangle \mapsto |x,x^{2}\bmod 8\rangle ,$

e.g. $3^2\bmod 2^3=1$ which explains why ∣11⟩∣000⟩ ↦ ∣11⟩∣001⟩.
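The same mapping can be written down as an explicit permutation matrix on the 2^{n+m} = 32 basis states, which makes its unitarity obvious (a small sketch with our own index conventions):

```python
import numpy as np

n, m = 2, 3                              # argument and value register sizes
dim = 2 ** (n + m)
f = lambda x: (x * x) % (2 ** m)         # f(x) = x^2 mod 8

# |x,y> -> |x, (y + f(x)) mod 2^m> as a permutation of basis states
U = np.zeros((dim, dim))
for x in range(2 ** n):
    for y in range(2 ** m):
        U[(x << m) | ((y + f(x)) % (2 ** m)), (x << m) | y] = 1

assert np.allclose(U @ U.T, np.eye(dim))      # a permutation, hence unitary

state = np.zeros(dim)
state[0b11_000] = 1                           # |11>|000>
assert (U @ state)[0b11_001] == 1             # |11>|001>, since 3^2 mod 8 = 1
print("squaring network acts as |x,0> -> |x, x^2 mod 8>")
```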

In fact, for these kind of operations we also need a third register with the so-called working bits which are set to zero at the input and return to zero at the output but which can take non-zero values during the computation.

What makes quantum function evaluation really interesting is its action on a superposition of different inputs x; for example, $\sum_{x}|x,0\rangle \mapsto \sum_{x}|x,f(x)\rangle$ produces f(x) for all x in a single run. The snag is that we cannot get them all from the entangled state $\sum_{x}|x,f(x)\rangle$, because any bit by bit measurement on the first register will yield one particular value x ∈ {0, 1}^n and the second register will then be found with the value f(x) ∈ {0, 1}^m.

Algorithms and their complexity

In order to solve a particular problem, computers, be they classical or quantum, follow a precise set of instructions that can be mechanically applied to yield the solution to any given instance of the problem. A specification of this set of instructions is called an algorithm. Examples of algorithms are the procedures taught in elementary schools for adding and multiplying whole numbers; when these procedures are mechanically applied, they always yield the correct result for any pair of whole numbers. Any algorithm can be represented by a family of Boolean networks (N_1, N_2, N_3, ...), where the network N_n acts on all possible input instances of size n bits. Any useful algorithm should have such a family specified by an example network N_n and a simple rule explaining how to construct the network N_{n+1} from the network N_n. These are called uniform families of networks Pap94.\footnote{This means that the network model is not a self-contained model of computation. We need an algorithm, a Turing machine, which maps each n into an explicit description of N_n.}

The quantum Hadamard transform defined earlier has a uniform family of networks whose size grows linearly with the number n of input qubits. Another good example of a uniform family of networks is the quantum Fourier transform (QFT) Cop94, defined in the computational basis as the unitary operation

$|y\rangle\mapsto 2^{-n/2} \sum_x e^{i \frac{2\pi}{2^n} yx}|x\rangle .$

Suppose we want to construct such a unitary evolution of n qubits using our repertoire of quantum logic gates. We can start with a single qubit and notice that in this case the QFT reduces to applying a Hadamard gate. Then we can take two qubits and notice that the QFT can be implemented with two Hadamard gates and the controlled phase shift B(π/2) in between. Progressing this way we can construct the three-qubit QFT and the four-qubit QFT, whose network looks like this:


(N.B. there are three different types of the B(ϕ) gate in the network above: B(π/2), B(π/4) and B(π/8).)

The general case of n qubits requires a trivial extension of the network following the same sequence pattern of gates H and B. The QFT network operating on n qubits contains n Hadamard gates H and n(n − 1)/2 phase shifts B, in total n(n + 1)/2 elementary gates.
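The pattern can be checked against the Fourier matrix itself. The numpy sketch below assumes the convention B(ϕ) = diag(1, 1, 1, e^{iϕ}) and the usual circuit in which the output qubits emerge in bit-reversed order (hence the final reordering); the loops emit exactly n Hadamard gates and n(n − 1)/2 controlled phase shifts:

```python
import numpy as np

def qft_network(n):
    """QFT built from Hadamard gates and controlled phase shifts B(phi)."""
    dim = 2 ** n
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    U = np.eye(dim, dtype=complex)

    def hadamard_on(q):
        ops = [H if k == q else np.eye(2) for k in range(n)]
        out = ops[0]
        for op in ops[1:]:
            out = np.kron(out, op)
        return out

    def b_gate(phi, q1, q2):
        # B(phi): phase e^{i phi} iff both qubits are 1 (diagonal, symmetric)
        d = np.ones(dim, dtype=complex)
        for idx in range(dim):
            if (idx >> (n - 1 - q1)) & 1 and (idx >> (n - 1 - q2)) & 1:
                d[idx] = np.exp(1j * phi)
        return np.diag(d)

    for j in range(n):                     # n Hadamards ...
        U = hadamard_on(j) @ U
        for k in range(j + 1, n):          # ... and n(n-1)/2 phase shifts
            U = b_gate(np.pi / 2 ** (k - j), j, k) @ U

    perm = [int(format(i, f"0{n}b")[::-1], 2) for i in range(dim)]
    return U[perm, :]                      # undo the bit reversal

n = 3
dft = np.array([[np.exp(2j * np.pi * x * y / 2 ** n)
                 for y in range(2 ** n)] for x in range(2 ** n)]) / 2 ** (n / 2)
assert np.allclose(qft_network(n), dft)
print("QFT network matches the Fourier matrix for n =", n)
```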

The big issue in designing algorithms or their corresponding families of networks is the optimal use of physical resources required to solve a problem. Complexity theory is concerned with the inherent cost of computation in terms of some designated elementary operations, memory usage, or network size. An algorithm is said to be fast or efficient if the number of elementary operations taken to execute it increases no faster than a polynomial function of the size of the input. We generally take the input size to be the total number of bits needed to specify the input (for example, a number N requires log_2 N bits of binary storage in a computer). In the language of network complexity, an algorithm is said to be efficient if it has a uniform and polynomial-size network family (O(n^d) for some constant d) Pap94. For example, the quantum Fourier transform can be performed in an efficient way because it has a uniform family of networks whose size grows only as a quadratic function of the size of the input, i.e. O(n^2). Changing from one set of gates to another, e.g. constructing the QFT out of the Hadamard and the controlled-V gates with a prescribed precision ε, can only affect the network size by a multiplicative constant which does not affect the quadratic scaling with n. Thus the complexity of the QFT is O(n^2) no matter which set of adequate gates we use. Problems which do not have efficient algorithms are known as hard problems.

Elementary arithmetic operations taught at schools, such as long addition, multiplication or division of n-bit numbers, require O(n^2) operations. For example, to multiply x = (x_{n−1}...x_1x_0) and y = (y_{n−1}...y_1y_0) we successively multiply y by x_0, x_1 and so on, shift, and then add the results. Each multiplication of y by x_k takes about n single bit operations, and the addition of the n products takes of the order of n^2 bit operations, which gives a total of O(n^2) operations. Knowing the complexity of elementary arithmetic one can often assess the complexity of other algorithms. For example, the greatest common divisor of two integers x and y can be found using Euclid's algorithm, the oldest nontrivial algorithm, which has been known and used since 300 BC.\footnote{This truly `classical' algorithm is described in Euclid's ''Elements'', the oldest Greek treatise in mathematics to reach us in its entirety. Knuth (1981) provides an extensive discussion of various versions of Euclid's algorithm.} First divide x by y, obtaining remainder r_1. Then divide y by r_1, obtaining remainder r_2; then divide r_1 by r_2, obtaining remainder r_3; etc., until the remainder is zero. The last non-zero remainder is gcd(x, y), because it divides all previous remainders and hence also x and y (it is obvious from the construction that it is the greatest common divisor). For example, here is the sequence of pairs (r_j, r_{j+1}) when we apply Euclid's algorithm to compute gcd(12378, 3054) = 6: (12378, 3054), (3054, 162), (162, 138), (138, 24), (24, 18), (18, 6), (6, 0). What is the complexity of this algorithm? It is easy to see that the larger of the two numbers is at least halved every two steps, so every two steps we need one bit less to represent it, and so the number of steps is at most 2n, where n is the number of bits in the two integers. Each division can be done with at most O(n^2) operations, hence the total number of operations is O(n^3).
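Euclid's algorithm is a few lines of code; the remainder sequence it produces on the example above is exactly the list of pairs in the text:

```python
def gcd(x, y):
    """Euclid's algorithm: repeatedly replace (x, y) by (y, x mod y)."""
    pairs = [(x, y)]
    while y != 0:
        x, y = y, x % y
        pairs.append((x, y))
    return x, pairs

g, pairs = gcd(12378, 3054)
print(g)           # -> 6
print(pairs[-3:])  # -> [(24, 18), (18, 6), (6, 0)]
```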

There are basically three different types of Boolean networks: classical deterministic, classical probabilistic, and quantum. They correspond to, respectively, deterministic, randomised, and quantum algorithms.

Classical deterministic networks are based on logical connectives such as AND, OR, and NOT and are required to always deliver correct answers. If a problem admits a deterministic uniform network family of polynomial size, we say that the problem is in the class P Pap94.

Probabilistic networks have additional ``coin flip" gates which do not have any inputs and emit one uniformly-distributed random bit when executed during a computation. Despite the fact that probabilistic networks may generate erroneous answers, they may be more powerful than deterministic ones. A good example to highlight the difference between probabilistic and deterministic algorithms used to be primality testing: given an n-bit number x, decide whether or not x is prime. The Solovay-Strassen SS77 probabilistic algorithm for primality testing runs in time O(n^3 log(1/ε)), where ε is the desired probability of error.

The log(1/ε) part can be explained as follows. Imagine a probabilistic network that solves a decision problem\footnote{A decision problem is a problem that admits only two answers: YES or NO.} and that gives the correct answer with probability larger than $\frac{1}{2}+\delta$ for fixed $\delta>0$. If you run r of these networks in parallel (so that the size of the overall network is increased by a factor r) and then use majority voting for the final YES or NO answer, your overall probability of error will be bounded by ε = exp(−δ^2 r) (this follows directly from the Chernoff bound; see, for instance, MR95). Hence r is of the order log(1/ε). If a problem admits such a family of networks then we say the problem is in the class BPP (which stands for ``bounded-error probabilistic polynomial") Pap94.

For a long time there was no deterministic polynomial time algorithm for primality. But Agrawal, Kayal, and Saxena in 2002 AKS02 gave an $\tilde{O}(n^{12})$-time deterministic algorithm, which was later improved by Lenstra and Pomerance in 2005 LP05 to $\tilde{O}(n^{6})$. Still there is a significant gap between the performance of the best deterministic and probabilistic algorithms for this problem.

A more dramatic example is polynomial identity testing: given an arithmetic circuit, decide whether it computes the zero polynomial. It is easy to decide using randomness: choose each input at random from an appropriately chosen set S and evaluate the circuit. If the result is nonzero, you have found a witness that the polynomial is nonzero. If it evaluates to zero, then either it is the zero polynomial, or it is not and you were unlucky enough to choose a root, the probability of which can be bounded by the Schwartz-Zippel lemma, which states that if the polynomial has degree d, the probability of choosing a root is  ≤ d/∣S∣. By choosing ∣S∣ = 2d and iterating log(1/ε) times, we can decrease the error to ε.
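The recipe translates into a few lines of code; the sketch below (our own illustration) treats the polynomials as black-box functions and uses ∣S∣ = 2d, so each trial errs with probability at most 1/2:

```python
import random

random.seed(0)

def probably_zero(poly, degree, num_vars, trials=40):
    """Randomised identity test: error probability at most 2^-trials."""
    S = range(2 * degree)                      # |S| = 2d
    for _ in range(trials):
        point = [random.choice(S) for _ in range(num_vars)]
        if poly(*point) != 0:
            return False                       # witness: certainly nonzero
    return True

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero; x*y - x is not
assert probably_zero(lambda x, y: (x + y) ** 2 - (x * x + 2 * x * y + y * y), 2, 2)
assert not probably_zero(lambda x, y: x * y - x, 2, 2)
print("identity test behaved as expected")
```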

By contrast, it was shown by Kabanets and Impagliazzo in 2004 KI04 that polynomial identity testing is so hard to derandomize that exhibiting even a nondeterministic subexponential time algorithm for it (i.e. taking time $2^{\varepsilon n}$ for arbitrarily small ε) would yield a superpolynomial arithmetic circuit lower bound (i.e. demonstrate a problem that could not be solved by any arithmetic circuit of polynomial size). Such circuit lower bounds are a holy grail of complexity theory and have not been found despite enormous effort. Thus we may interpret the result as saying that derandomizing polynomial identity testing would be a breakthrough not to be expected soon.

Last but not least we have quantum algorithms, or families of quantum networks, which are more powerful than their probabilistic counterparts. The example here is the factoring problem: given an n-bit number x, find a list of prime factors of x. The smallest known uniform probabilistic network family which solves the problem is of size $O(2^{d\sqrt{n\log n}})$. One reason why quantum computation is such a fashionable field today is the discovery, by Peter Shor, of a uniform family of quantum networks of size $O(n^{2}\log \log n\,\log (1/\varepsilon ))$ that solve the factoring problem Sho94. If a problem admits a uniform quantum network family of polynomial size that for any input gives the right answer with probability larger than $\frac{1}{2}+\delta$ for fixed $\delta>0$, then we say the problem is in the class BQP (which stands for ``bounded-error quantum probabilistic polynomial").

We have

$P\subseteq BPP\subseteq BQP.$
Quantum networks are potentially more powerful because of multiparticle quantum interference, an inherently quantum phenomenon which makes the quantum theory radically different from any classical statistical theory.

Richard Feynman Fey82 was the first to anticipate the unusual power of quantum computers. He observed that it appears to be impossible to simulate a general quantum evolution on a classical probabilistic computer in an {\em efficient} way, i.e. any classical simulation of quantum evolution appears to involve an exponential slowdown in time as compared to the natural evolution, since the amount of information required to describe the evolving quantum state in classical terms generally grows exponentially with time. However, instead of viewing this fact as an obstacle, Feynman regarded it as an opportunity. Let us then follow his lead and try to construct a computing device using inherently quantum mechanical effects.

From interferometers to computers

Single particle interference in the Mach-Zehnder interferometer works as follows. A particle, in this case a photon, impinges on a beam-splitter (BS1) and, with some probability amplitudes, propagates via two different paths to another beam-splitter (BS2), which directs the particle to one of the two detectors. Along each path between the two beam-splitters is a phase shifter (PS).


If the lower path is labeled as state ∣ 0⟩ and the upper one as state ∣ 1⟩ then the particle, initially in path ∣ 0⟩ undergoes the following sequence of transformations

$\left| \,0\right\rangle \mapsto \frac{ 1}{\sqrt{2}}\left( \left| \,0\right\rangle +\left| \,1\right\rangle \right) \mapsto \frac{1}{\sqrt{2}}(e^{i\phi _{0}}\left| \,0\right\rangle +e^{i\phi _{1}}\left| \,1\right\rangle )$

$= e^{i\frac{\phi _{0}+\phi _{1}}{2}}\frac{1}{\sqrt{2}}(e^{i\frac{\phi _{0}-\phi _{1}}{2} }\left| \,0\right\rangle +e^{i\frac{-\phi _{0}+\phi _{1}}{2}}\left| \,1\right\rangle )$

$\mapsto e^{i\frac{\phi _{0}+\phi _{1} }{2}}(\cos \frac{1}{2}(\phi _{0}-\phi _{1})\left| \,0\right\rangle +i\sin \frac{1}{2}(\phi _{0}-\phi _{1})\left| \,1\right\rangle ),$

where ϕ0 and ϕ1 are the settings of the two phase shifters and the action of the beam-splitters is defined as

$\left| \,0\right\rangle {\mapsto }\frac{1}{\sqrt{2}} (\left| \,0\right\rangle +\left| \,1\right\rangle ),\quad \left| \,1\right\rangle {\mapsto }\frac{1}{\sqrt{2}} (\left| \,0\right\rangle -\left| \,1\right\rangle ).$

(We have ignored the phase shift in the reflected beam.) The global phase factor $e^{i\frac{\phi _{0}+\phi _{1}}{2}}$ is irrelevant as the interference pattern depends on the difference between the phase shifts in different arms of the interferometer. The phase shifters in the two paths can be tuned to effect any prescribed relative phase shift ϕ = ϕ₀ − ϕ₁ and to direct the particle with probabilities

$$\begin{array}{lcl} P_{0} &=& \cos ^{2}\left( \frac{\phi }{2}\right) =\frac{1}{2}\left( 1+\cos\phi \right) \\ P_{1} &=& \sin ^{2}\left( \frac{\phi }{2}\right) =\frac{1}{2}\left( 1-\cos\phi \right) \end{array}$$

respectively to detectors ``0'' and ``1''.
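These probabilities follow from multiplying the three matrices together, as in this small numpy sketch of the interferometer (beam-splitters as Hadamards, our own illustration):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # beam-splitter

def detection_probs(phi0, phi1):
    shifters = np.diag([np.exp(1j * phi0), np.exp(1j * phi1)])
    ket0 = np.array([1, 0], dtype=complex)        # particle enters in path |0>
    out = H @ shifters @ H @ ket0                 # BS1, phase shifters, BS2
    return np.abs(out) ** 2

phi = 1.2                                         # relative phase phi0 - phi1
p0, p1 = detection_probs(phi, 0.0)
assert np.isclose(p0, 0.5 * (1 + np.cos(phi)))
assert np.isclose(p1, 0.5 * (1 - np.cos(phi)))
print("P0, P1 =", p0, p1)
```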

The roles of the three key ingredients in this experiment are clear. The first beam splitter prepares a superposition of possible paths, the phase shifters modify quantum phases in different paths and the second beam-splitter combines all the paths together erasing all information about which path was actually taken by the particle between the two beam-splitters. This erasure is very important as we shall see in a moment.

Needless to say, single particle interference experiments are not restricted to photons. One can go for a different ``hardware'' and repeat the experiment with electrons, neutrons, atoms or even molecules. When it comes to atoms and molecules both external and internal degrees of freedom can be used.

Although single particle interference experiments are worth discussing in their own right, here we are only interested in their generic features, simply because they are all ``isomorphic'': once you know and understand one of them you, at least for our purposes, understand them all (modulo experimental details, of course). Let us now describe any single particle interference experiment in more general terms. It is very convenient to view this experiment in a diagrammatic way, as a quantum network with three quantum logic gates CEMM98. The beam-splitters will now be called Hadamard gates and the phase shifters phase-shift gates. In particular, any single particle quantum interference can be represented by the following simple network,


In order to make a connection with quantum function evaluation let us now describe an alternative construction which simulates the action of the phase shift gate. This construction introduces a phase factor ϕ using a controlled-U gate. The phase shift ϕ is ``computed'' with the help of an auxiliary qubit in a prescribed state ∣ u⟩ such that U∣ u⟩ = eiϕ∣ u⟩.


In our example, shown above, we obtain the following sequence of transformations on the two qubits

$\left | \, 0 \right \rangle\left | \, u \right \rangle \mapsto \frac{1}{\sqrt 2}(\left | \, 0 \right \rangle + \left | \, 1 \right \rangle)\left | \, u \right \rangle \mapsto \frac{1}{\sqrt 2}(\left | \, 0 \right \rangle + e^{i\phi}\left | \, 1 \right \rangle) \left | \, u \right \rangle$

$\mapsto (\cos\frac{\phi }{2}\left | \, 0 \right \rangle + i \sin \frac{\phi }{2}\left | \, 1 \right \rangle) \left | \, u \right \rangle.$

We note that the state of the auxiliary qubit ∣ u⟩, being an eigenstate of U, is not altered along this network, but its eigenvalue eiϕ is ``kicked back'' in front of the ∣ 1⟩ component in the first qubit. The sequence above is an exact simulation of the Mach-Zehnder interferometer and, as we shall see later on, the kernel of quantum algorithms.
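The phase kick-back can be verified numerically. In this sketch `U` is an arbitrary single-qubit unitary (an assumption for illustration) with eigenstate `u` and eigenvalue e^{iϕ}; the controlled-U acts on the control qubit tensored with `u`:

```python
import numpy as np

phi = 0.7                                     # illustrative phase
U = np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
u = np.array([1.0, 0.0])                      # eigenstate: U|u> = e^{i phi}|u>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
# controlled-U on |control>|u>: apply U only when the control is |1>
cU = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), U]])

state = np.kron(np.array([1, 0]), u)          # |0>|u>
state = np.kron(H, I2) @ state                # first Hadamard
state = cU @ state                            # eigenvalue kicked back onto |1>
state = np.kron(H, I2) @ state                # second Hadamard

p1 = np.abs(state[2]) ** 2 + np.abs(state[3]) ** 2   # prob. control reads 1
# p1 = sin^2(phi/2), exactly as in the Mach-Zehnder analysis
```

The auxiliary register is untouched throughout; only the control qubit carries the interference pattern.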

Some of the controlled-U operations are special - they represent quantum function evaluations! Indeed, a unitary evolution which computes f :  {0, 1}^n ↦ {0, 1}^m,

$|x\rangle|y\rangle\mapsto|x\rangle|(y+f(x))\bmod 2^m\rangle,$

is of the controlled-U type. The unitary transformation of the second register, specified by

$|y\rangle\mapsto|(y+f(x))\bmod 2^m\rangle,$

depends on x -- the state of the first register. If the initial state of the second register is set to

$\left| \,u\right\rangle = \frac{1}{2^{m/2}} \sum_{y=0}^{2^{m}-1}\exp \left( -\frac{2\pi i}{2^{m}}y\right) |y\rangle,$

which can be obtained by applying the QFT to the state ∣111...1⟩, then the function evaluation generates

\begin{array}{lcl} |x\rangle \left|\,u\right\rangle &=& \frac{1}{2^{m/2}}|x\rangle \sum_{y=0}^{2^{m}-1}\exp \left( -\frac{2\pi i}{2^{m}}y\right) |y\rangle \\ &\mapsto& \frac{1}{2^{m/2}}|x\rangle \sum_{y=0}^{2^{m}-1}\exp \left( -\frac{2\pi i}{2^{m}}y\right) \left|f(x)+y\right\rangle\\ &=& \frac{e^{ \frac{2\pi i}{2^{m}}f(x) }}{2^{m/2}}|x\rangle \sum_{y=0}^{2^{m}-1}\exp \left( -\frac{2\pi i}{2^{m}}(f(x)+y)\right)\left|f(x)+y\right\rangle \\ &=& \frac{e^{ \frac{2\pi i}{2^{m}}f(x) }}{2^{m/2}}|x\rangle \sum_{y=0}^{2^{m}-1}\exp \left( -\frac{2\pi i}{2^{m}}y\right)|y\rangle \\ &=& e^{\frac{2\pi i}{2^{m}}f(x)} |x\rangle \left|\,u\right\rangle , \end{array}

where we have relabelled the summation index in the sum containing 2^m terms

$\sum_{y=0}^{2^{m}-1}\exp \left( -\frac{2\pi i}{2^{m}}(f(x)+y)\right) |f(x)+y\rangle =\sum_{y=0}^{2^{m}-1}\exp \left( -\frac{2\pi i}{2^{m}} y\right) |y\rangle .$

Again, the function evaluation effectively introduces the phase factors in front of the ∣x⟩ terms in the first register,

$|x\rangle \left| \,u\right\rangle \mapsto \exp \left( \frac{2\pi i}{2^{m} }f(x)\right) |x\rangle \left| \,u\right\rangle .$

Please notice that the resolution in $\phi (x)=\frac{2\pi }{2^{m}}f(x)$ is determined by the size m of the second register. For m = 1 we obtain ϕ(x) = πf(x), i.e. the phase factors are ( − 1)^{f(x)}. Let us see how this approach explains the internal working of quantum algorithms.
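For m = 1 the eigenstate structure is easy to verify directly: the state (∣0⟩ − ∣1⟩)/√2 is an eigenstate of "add f(x) modulo 2" with eigenvalue ( − 1)^{f(x)}. A minimal sketch:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])        # |y> -> |y + 1 mod 2>
u = np.array([1, -1]) / np.sqrt(2)    # |u> = (|0> - |1>)/sqrt(2)

for fx in (0, 1):
    # |y> -> |y + f(x) mod 2> is X^f(x)
    add_fx = np.linalg.matrix_power(X, fx)
    # the eigenvalue kicked back is (-1)^f(x)
    assert np.allclose(add_fx @ u, (-1) ** fx * u)
```

This is the m = 1 special case used by the algorithms in the next section.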

The first quantum algorithms

The first quantum algorithms showed advantages of quantum computation without referring to computational complexity measured by the scaling properties of network sizes. The computational power of quantum interference was discovered by counting how many times certain Boolean functions have to be evaluated in order to find the answer to a given problem. Imagine a ``black box'' (also called an oracle) computing a Boolean function, and a scenario in which one wants to learn about a given property of the Boolean function but has to pay for each use of the ``black box'' (often referred to as a query). The objective is to minimise the number of queries.

Consider, for example, a ``black box'' computing a Boolean function f :  {0, 1} ↦ {0, 1}. There are exactly four such functions: two constant functions (f(0) = f(1) = 0 and f(0) = f(1) = 1) and two ``balanced'' functions (f(0) = 0, f(1) = 1 and f(0) = 1, f(1) = 0). The task is to deduce, by queries to the ``black box'', whether f is constant or balanced (in other words, whether f(0) and f(1) are the same or different).

Classical intuition tells us that we have to evaluate both f(0) and f(1) which involves evaluating f twice (two queries). We shall see that this is not so in the setting of quantum information, where we can solve this problem with a single function evaluation (one query), by employing an algorithm that has the same mathematical structure as the Mach-Zehnder interferometer. The quantum algorithm that accomplishes this is best represented as the quantum network shown below, where the middle operation is the ``black box" representing the function evaluation CEMM98.


The initial state of the qubits in the quantum network is ∣ 0⟩(∣ 0⟩ − ∣ 1⟩) (apart from a normalization factor, which will be omitted in the following). After the first Hadamard transform, the state of the two qubits has the form (∣ 0⟩ + ∣ 1⟩)(∣ 0⟩ − ∣ 1⟩). To determine the effect of the function evaluation on this state, first recall that, for each x ∈ {0, 1},

∣ x⟩(∣ 0⟩ − ∣ 1⟩) ↦  ( − 1)f(x)∣ x⟩(∣ 0⟩ − ∣ 1⟩).

Therefore, the state after the function evaluation is

[( − 1)f(0)∣ 0⟩ + ( − 1)f(1)∣ 1⟩](∣ 0⟩ − ∣ 1⟩) .

That is, for each x, the ∣ x⟩ term acquires a phase factor of ( − 1)^{f(x)}, which corresponds to the eigenvalue of the state of the auxiliary qubit under the action of the operator that sends ∣ y⟩ to ∣ y + f(x)⟩. The second qubit is of no interest to us any more, but the state of the first qubit

( − 1)f(0)∣ 0⟩ + ( − 1)f(1)∣ 1⟩

is equal either to

 ± (∣ 0⟩ + ∣ 1⟩), 

when f(0) = f(1),  or

 ± (∣ 0⟩ − ∣ 1⟩), 

when f(0) ≠ f(1). Hence, after applying the second Hadamard gate the state of the first qubit becomes ∣ 0⟩ if the function f is constant and ∣ 1⟩ if the function is balanced! A bit-value measurement on this qubit distinguishes these cases with certainty.
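The whole H-f-H network can be simulated in a few lines. This sketch builds the oracle ∣x⟩∣y⟩ ↦ ∣x⟩∣y + f(x) mod 2⟩ as a 4 × 4 permutation matrix and distinguishes constant from balanced with one oracle application:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def deutsch(f):
    # oracle U_f: |x>|y> -> |x>|y + f(x) mod 2>, basis index 2x + y
    Uf = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            Uf[2 * x + (y ^ f(x)), 2 * x + y] = 1
    psi = np.zeros(4); psi[1] = 1.0          # |0>|1>
    psi = np.kron(H, H) @ psi                # (|0>+|1>)(|0>-|1>)/2
    psi = Uf @ psi                           # single query (phase kick-back)
    psi = np.kron(H, np.eye(2)) @ psi        # final Hadamard on qubit 1
    p1 = psi[2] ** 2 + psi[3] ** 2           # prob. first qubit reads 1
    return "balanced" if p1 > 0.5 else "constant"

assert deutsch(lambda x: 0) == "constant"
assert deutsch(lambda x: x) == "balanced"
```

Note that the auxiliary qubit is prepared as ∣0⟩ − ∣1⟩ here by applying H to ∣1⟩, which matches the network in the text up to normalisation.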

This example CEMM98 is an improved version of the first quantum algorithm proposed by Deutsch Deu85. (The original Deutsch algorithm provides the correct answer with probability 50%.) Deutsch's result laid the foundation for the new field of quantum computation, and was followed by several other quantum algorithms.

Deutsch's original problem was subsequently generalised to cover ``black boxes'' computing Boolean functions f : {0, 1}^n ↦ {0, 1}. Assume that, for one of these functions, it is ``promised'' that it is either constant or balanced (i.e. it outputs 0 and 1 equally often), and the goal is to determine which of the two properties the function actually has. How many queries to f are required to do this? Any classical algorithm for this problem would, in the worst case, require 2^{n − 1} + 1 queries before determining the answer with certainty. There is a quantum algorithm that solves this problem with a single evaluation of f.

The algorithm is illustrated by a simple extension of the network which solves Deutsch's problem.


The control register, now composed of n qubits (n = 3 in the diagram above), is initially in the state ∣00⋯0⟩, and the auxiliary qubit in the second register starts and remains in the state ∣0⟩ − ∣1⟩.

Stepping through the execution of the network, the state after the first n-qubit Hadamard transform is applied is

xx⟩(∣0⟩ − ∣1⟩) , 

which, after the function evaluation, is

x( − 1)f(x)x⟩(∣0⟩ − ∣1⟩).

Finally, after the last Hadamard transform, the state is

x, y( − 1)f(x) + (x ⋅ y)y⟩(∣0⟩ − ∣1⟩).

Note that the amplitude of ∣00⋯0⟩ is $\sum_{x} \frac{(-1)^{f(x)}}{2^n}$ which is ( − 1)f(0) when f is constant and 0 when f is balanced. Therefore, by measuring the first n qubits, it can be determined with certainty whether f is constant or balanced. The algorithm follows the same pattern as Deutsch's algorithm: the Hadamard transform, a function evaluation, the Hadamard transform (the H-f-H sequence). We recognize it as a generic interference pattern.
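The n-qubit generalisation is just as easy to simulate. Since the second register only contributes the phase ( − 1)^{f(x)}, this sketch applies the oracle in its equivalent phase form and checks the amplitude of ∣00⋯0⟩:

```python
import numpy as np

def deutsch_jozsa(f, n):
    N = 2 ** n
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)                  # n-qubit Hadamard transform
    psi = np.zeros(N); psi[0] = 1.0          # |00...0>
    psi = Hn @ psi
    # phase-oracle form of the function evaluation: |x> -> (-1)^f(x) |x>
    psi = np.array([(-1) ** f(x) for x in range(N)]) * psi
    psi = Hn @ psi
    p_zero = psi[0] ** 2                     # 1 if constant, 0 if balanced
    return "constant" if p_zero > 0.5 else "balanced"

n = 3
assert deutsch_jozsa(lambda x: 0, n) == "constant"
assert deutsch_jozsa(lambda x: x & 1, n) == "balanced"   # lowest bit: balanced
```

The measurement outcome ∣00⋯0⟩ occurs with probability exactly 1 or exactly 0, reflecting the amplitude formula in the text.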

The generic H-f-H sequence may be repeated several times. This can be illustrated, for example, with Grover's database search algorithm Gro96. Suppose we are given, as an oracle, a Boolean function f_k which maps {0, 1}^n to {0, 1} such that f_k(x) = δ_{xk} for some k. Our task is to find k. Thus in a set of numbers from 0 to 2^n − 1 one element has been ``tagged'' and by evaluating f_k we have to find which one. In order to find k with probability of 50% any classical algorithm, be it deterministic or randomised, will need to evaluate f_k a minimum of 2^{n − 1} times. In contrast, a quantum algorithm needs only O(2^{n/2}) evaluations.

Unlike the algorithms studied so far, Grover's algorithm consists of repeated applications of the same unitary transformation many (O(2^{n/2})) times. The initial state is chosen to be the one that has equal overlap with each of the computational basis states: $|S\rangle = 2^{-n/2}\sum_{i=0}^{2^n-1}|i\rangle$. The operation applied at each individual iteration, referred to as the Grover iterate, can be best represented by the following network:


The components of the network are by now familiar: Hadamard transforms (H) and controlled-f gates. It is important to notice that in drawing the network we have used a shorthand notation: the first register (with the ∣ψ⟩ input) actually consists of n qubits. The Hadamard transform is applied to each of those qubits and the controlled-f gates act on all of them simultaneously. Also, the input to the second register is always ∣0⟩ − ∣1⟩, but the input to the first register, denoted ∣ψ⟩, changes from iteration to iteration as the calculation proceeds. As usual, the second register will be ignored since it remains constant throughout the computation.

To begin, consider only the controlled-f_k gate. This is just the phase-kickback construction that was introduced in Section 4, but for the specific function f_k. In particular, the transformation does nothing to any basis element except ∣k⟩, which goes to −∣k⟩. Geometrically, this is simply a reflection in the hyperplane perpendicular to ∣k⟩, so let us call it R_k.

Similarly, with respect to the first register only, the controlled-f_0 operation sends ∣0⟩ to  − ∣0⟩ and fixes all other basis elements, so it can be written R_0. Now consider the sequence of operations HR_0H. Since H^2 = I, we can rewrite the triple as HR_0H^{−1}, which is simply R_0 performed in a different basis. More specifically, it is a reflection about the hyperplane perpendicular to

$H |0\rangle = \frac{1}{2^{n/2}} \sum_{x=0}^{2^n-1} |x\rangle = |S\rangle$

so we will simply write the triple as RS.

We can therefore rewrite the Grover iterate in the simple form G = R_S R_k. Now, since each reflection is an orthogonal transformation with negative determinant, their composition must be an orthogonal transformation with unit determinant, in other words, a rotation. The question, of course, is which rotation. To find the answer it suffices to consider rotations in the plane spanned by ∣k⟩ and ∣S⟩, since all other vectors are fixed by the Grover iterate. The generic geometrical situation is then illustrated in the following diagram.


If the vector ∣a⟩ is reflected through the line L1 to produce the vector ∣aʹ⟩ and then reflected a second time through line L2 to produce the vector ∣aʺ⟩, then the net effect is a rotation by the total subtended angle between ∣a⟩ and ∣aʺ⟩, which is 2x + 2y = 2(x + y) = 2θ.

Therefore, writing ∣k^⊥⟩ and ∣S^⊥⟩ for plane vectors perpendicular to ∣k⟩ and ∣S⟩ respectively, the Grover iterate performs a rotation of twice the angle from ∣k^⊥⟩ to ∣S^⊥⟩. Setting $\sin \phi = \frac{1}{2^{n/2}}$, this is easily seen to be a rotation by

$2( 3 \frac{\pi}{2} - \phi) = \pi - 2 \phi \bmod 2 \pi.$

Thus, up to phases, the Grover iterate rotates the state vector by an angle 2ϕ towards the desired solution ∣k⟩. Normally, the initial state for the first register is chosen to be ∣S⟩. Since this initial state ∣S⟩ already makes an angle ϕ with ∣k^⊥⟩, the iterate should be repeated m times, where

$(2 m + 1) \phi \approx \frac{\pi}{2},$


$m \approx \frac{\pi}{4 \phi} - \frac{1}{4}$

to get a probability of success bounded below by cos^2(2ϕ), which goes to 1 as n → ∞. For large n, $\frac{1}{2^{n/2}} = \sin \phi \approx \phi$, so

$m \approx \frac{\pi}{4}\, 2^{n/2}.$
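The two-reflection picture translates directly into a simulation. In this sketch the database size and the tagged element k are illustrative choices; each iteration applies R_k (a sign flip on ∣k⟩) followed by R_S (a reflection about ∣S⟩):

```python
import numpy as np

n = 6
N = 2 ** n                          # 64 "database" entries
k = 23                              # the tagged element (arbitrary choice)
S = np.full(N, 1 / np.sqrt(N))      # uniform superposition |S>

psi = S.copy()
m = int(np.pi / 4 * np.sqrt(N))     # ~ (pi/4) 2^{n/2} iterations
for _ in range(m):
    psi[k] = -psi[k]                # R_k: phase kick-back on |k>
    psi = 2 * S * (S @ psi) - psi   # R_S: reflect about |S>

p_success = psi[k] ** 2             # probability of measuring k
```

After m ≈ 6 iterations of this 64-element search the success probability exceeds 0.99, in line with the rotation analysis above.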

This is an astounding result: any search of an unstructured database can be performed in time proportional to the square root of the number of entries in the database. Subsequent work extended the result to searches for multiple items Boy96, searches of structured databases Hog98, and many other situations. Also, Zalka Zal99, Boyer et al. Boy96 and others have demonstrated that Grover's algorithm is optimal, in the sense that any other quantum algorithm for searching an unstructured database must take time at least O(2^{n/2}).

Optimal phase estimation

Query models of quantum computation provided a natural setting for subsequent discoveries of ``real quantum algorithms''. The most notable example is Shor's quantum factoring algorithm Sho94, which evolved from the order-finding problem, originally formulated in the language of quantum queries. Following our ``interferometric approach'' we will describe this algorithm in terms of multiparticle quantum interferometry. We start with a simple eigenvalue or phase estimation problem.

Suppose that U is any unitary transformation on m qubits and ∣ u⟩ is an eigenvector of U with eigenvalue e^{iϕ}, and consider the following scenario. We do not explicitly know U or ∣ u⟩ or e^{iϕ}, but instead we are given devices that perform controlled-U, controlled-U^{2^1}, controlled-U^{2^2}, and so on until we reach controlled-U^{2^{n-1}}. Also, assume that we are given a single preparation of the state ∣ u⟩. Our goal is to obtain an n-bit estimator of ϕ. We start by constructing the following network,


The second register of m qubits is initially prepared in the state ∣ u⟩ and remains in this state after the computation, whereas the first register of n qubits evolves into the state,

$(\left | \, 0 \right \rangle + e^{i 2^{n-1} \phi}\left | \, 1 \right \rangle) (\left | \, 0 \right \rangle + e^{i 2^{n-2} \phi}\left | \, 1 \right \rangle) \cdots (\left | \, 0 \right \rangle + e^{i \phi}\left | \, 1 \right \rangle)=\sum_{y=0}^{2^n-1} e^{ i \phi y} |y\rangle.$

Consider the special case where ϕ = 2πx/2^n for x = ∑_{i = 0}^{n − 1}2^i x_i, and recall the quantum Fourier transform (QFT) introduced in Section 2. The state giving the binary representation of x, namely ∣ x_{n − 1}…x_0⟩ (and hence ϕ), can be obtained by applying the inverse of the QFT, that is, by running the network for the QFT in the backwards direction (consult the diagram of the QFT). If x is an n-bit number this will produce the exact value of ϕ.

However, ϕ does not have to be a fraction of a power of two (and may not even be a rational number). For such a ϕ, it turns out that applying the inverse of the QFT produces the best n-bit approximation of ϕ with probability at least 4/π2 ≈ 0.405.

To see why this is so, let us write ϕ = 2π(a/2^n + δ), where a = (a_{n − 1}…a_0) is the best n-bit estimate of $\frac{\phi}{2\pi}$ and $0 < |\delta| \le 1/2^{n+1}$. Applying the inverse QFT to the state above now yields the state

${1\over 2^n} \sum_{x=0}^{2^n-1} \sum_{y=0}^{2^n-1} e^{\frac{2\pi i}{2^n} (a-x) y} e^{2\pi i \delta y}|x\rangle$

and the coefficient in front of x = a in the above is the geometric series

${1 \over 2^n} \sum_{y=0}^{2^n-1} (e^{2\pi i \delta})^y = {1 \over 2^n} \left({1 - (e^{2\pi i \delta})^{2^n} \over 1 - e^{2\pi i \delta}}\right)\;.$

Since $|\delta| \le {1 \over 2^{n+1}}$, it follows that ∣2^nδ∣ ≤ 1/2, and using the inequality 2z ≤ sin πz ≤ πz, which holds for any z ∈ [0, 1/2], we get ∣1 − e^{2πiδ2^n}∣ = 2∣sin (πδ2^n)∣ ≥ 4∣δ∣2^n. Also, ∣1 − e^{2πiδ}∣ = 2∣sin πδ∣ ≤ 2π∣δ∣. Therefore, the probability of observing a_{n − 1}…a_0 when measuring the state is

$\left|{1 \over 2^n} \left({1 - (e^{2\pi i \delta})^{2^n} \over 1 - e^{2\pi i \delta}}\right)\right|^2 \ge \left({1 \over 2^n} \left({4 \delta 2^n \over 2 \pi \delta}\right)\right)^2 = {4 \over \pi^2},$

which proves our assertion. In fact, the probability of obtaining the best estimate can be made 1 − ε for any $0<\varepsilon<1$, by creating the state above but with n + O(log(1/ε)) qubits and rounding the answer off to the nearest n bits CEMM98.
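The 4/π² bound is easy to probe numerically. This sketch builds the state ∑_y e^{iϕy}∣y⟩ for an arbitrarily chosen ϕ, applies the inverse QFT as an explicit matrix, and checks the probability of the best n-bit estimate:

```python
import numpy as np

n = 5
N = 2 ** n
phi = 2 * np.pi * 0.30714          # phi/(2 pi) not a multiple of 1/2^n
y = np.arange(N)
state = np.exp(1j * phi * y) / np.sqrt(N)

# inverse QFT: <x|QFT^{-1}|y> = e^{-2 pi i x y / N} / sqrt(N)
iqft = np.exp(-2j * np.pi * np.outer(y, y) / N) / np.sqrt(N)
probs = np.abs(iqft @ state) ** 2

a = int(round(phi / (2 * np.pi) * N)) % N   # best n-bit estimate of phi/(2 pi)
# the best estimate appears with probability at least 4/pi^2 ~ 0.405
assert probs[a] >= 4 / np.pi ** 2
```

For most values of ϕ the observed probability is far above the worst-case bound; the bound is tight only when δ approaches 1/2^{n+1}.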

Periodicity and quantum factoring

Amazingly, the application of optimal phase estimation to a very particular unitary operator will allow us to factor integers efficiently. In fact, it will allow us to solve a more general class of problems related to the periodicity of certain integer functions.

Let N be an m-bit integer, and let a be an integer smaller than N and coprime to N. Define a unitary operator U_a acting on m qubits such that for all y < N,

$\quad |y\rangle \mapsto U_a |y\rangle = | a y \bmod N\rangle.$

This unitary operation can be called multiplication by a modulo N. Since a is coprime to N, as discussed in Section 2, there exists a least strictly positive r such that $a^r =1 \bmod N$. This r is called the order of a modulo N. Equivalently, r is the period of the function $f(x)=a^x \bmod N$, i.e. the least r > 0 such that f(x) = f(x + r) for all x. We are after the optimal n-bit estimate of this period, given some specified precision n.

Now let the vectors ∣u_k⟩ (k ∈ {1, …, r}) be defined by

$| u_{k}\rangle = r^{-1/2} \sum_{j=0}^{r-1} e^{-\frac{2\pi i k j}{r}}|a^j \bmod N\rangle.$

It is easy to check Kit95 that for each k ∈ {1, …, r}, ∣u_k⟩ is an eigenvector, with eigenvalue $e^{2 \pi i {\frac{k}{r}} }$, of the modular multiplication operator U_a defined above.
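This eigenvector property can be checked with small toy numbers (N = 15 and a = 7 below are illustrative assumptions):

```python
import numpy as np

N, a = 15, 7
# order of a modulo N by brute force
r = 1
while pow(a, r, N) != 1:
    r += 1                                    # here r = 4 (7^4 = 2401 = 1 mod 15)

k = 1
# |u_k> = r^{-1/2} sum_j e^{-2 pi i k j / r} |a^j mod N>
u = np.zeros(N, dtype=complex)
for j in range(r):
    u[pow(a, j, N)] += np.exp(-2j * np.pi * k * j / r) / np.sqrt(r)

# U_a as a permutation: |y> -> |a y mod N>  (a permutation since gcd(a, N) = 1)
Ua = np.zeros((N, N))
for y in range(N):
    Ua[(a * y) % N, y] = 1

assert np.allclose(Ua @ u, np.exp(2j * np.pi * k / r) * u)
```

The cyclic relabelling a^j ↦ a^{j+1} under U_a is exactly what produces the eigenvalue e^{2πik/r}.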

It is important to observe that one can efficiently construct a quantum network for controlled multiplication modulo some number N. Moreover, for any j, it is possible to efficiently implement a controlled-$U_a^{2^j}$ gate VBE96,BCDP96. Therefore, we can apply the techniques for optimal phase estimation discussed in Section 7. For any k ∈ {1, …, r}, given the state ∣u_k⟩ we can obtain the best n-bit approximation to $\frac{k}{r}$. This is tantamount to determining r itself. Unfortunately, there is a complication.

Our task is: given an m-bit number N and a randomly chosen a coprime with N, find the order of a modulo N. The problem with the above method is that we are aware of no straightforward efficient way to prepare any of the states ∣ u_k⟩. However, the state

$\left| \,1\right\rangle = r^{-1/2}\sum_{k=1}^{r}\left| \,u_{k}\right\rangle$

is most definitely an easy state to prepare.

If we start with ∣1⟩ in place of the eigenvector ∣u_k⟩, apply the phase estimation network and measure the first register bit by bit, we will obtain n binary digits of x such that, with probability exceeding 4/π², $\frac{x}{2^n}$ is the best n-bit estimate of $\frac{k}{r}$ for a randomly chosen k from {1, …, r}. The question is: given x, how do we compute r? Let us make a few observations:

1. k/r is unique, given x. The value x/2^n, being the n-bit estimate, differs by at most 1/2^n from k/r. Hence, as long as n > 2m, the n-bit estimate x determines a unique value of k/r, since r is an m-bit number.

2. Candidate values for k/r are all convergents to x/2^n. For any real number θ there is a unique sequence of special rationals $(\frac{p_n}{q_n})_{n\in N}$ (gcd(p_n, q_n) = 1), called the convergents to θ, that tend to θ as n grows. A theorem HW79 states that if p and q are integers with $\left|\theta-\frac{p}{q}\right| < \frac{1}{2q^2}$ then p/q is a convergent to θ. Since $\frac{1}{2^n} \leq \frac{1}{2 (2^m)^2} \leq \frac{1}{2 r^2}$, we have $\left|\frac{x}{2^n}-\frac{k}{r}\right| < \frac{1}{2 r^2}$, and so k/r is a convergent to x/2^n.

3. Only one convergent is eligible. It is easy to show that there is at most one fraction a/b satisfying both b ≤ r and $\left|\frac{x}{2^n}-\frac{a}{b}\right| < \frac{1}{2 r^2}$.

Convergents can be found efficiently using the well-known continued fraction method HW79. Thus we employ continued fractions and the observations above to find a fraction a/b such that b ≤ 2^m and $\left|\frac{x}{2^n}-\frac{a}{b}\right| < \frac{1}{2^n}$. We get the rational k/r, with k = a and r = b, provided k and r are coprime. For randomly chosen k, this happens with probability greater than or equal to 1/ln r EJ96.
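The continued fraction step can be sketched with Python's `fractions.Fraction.limit_denominator`, which finds the closest fraction with a bounded denominator (the numbers r, k below are illustrative assumptions):

```python
from fractions import Fraction

m = 4                      # N is an m-bit number, so r < 2^m
n = 2 * m + 1              # precision n > 2m, as required above
r, k = 12, 5               # the (unknown) order and a random k with gcd(k, r) = 1
x = round(k / r * 2 ** n)  # the n-bit estimate the network would output

# recover k/r as the best rational approximation with denominator < 2^m
candidate = Fraction(x, 2 ** n).limit_denominator(2 ** m - 1)
assert candidate == Fraction(k, r)   # k/r recovered exactly
```

When gcd(k, r) > 1 the recovered fraction is k/r in lowest terms and only a divisor of r is found, which is why the procedure succeeds with probability about 1/ln r per run.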

Finally, we show how order-finding can be used to factor a composite number N. Let a be a randomly chosen positive integer smaller than N such that gcd(a, N) = 1. Then the order of a modulo N is defined, and we can find it efficiently using the above algorithm. If r is even, then we have:

$a^r = 1\bmod N$

$\Leftrightarrow \quad (a^{r/2})^2 -1^2 = 0\bmod N$

$\Leftrightarrow \quad (a^{r/2}-1)(a^{r/2}+1) = 0\bmod N.$

The product (a^{r/2} − 1)(a^{r/2} + 1) must be some multiple of N, so unless $a^{r/2}=\pm 1\bmod N$, at least one of the terms must have a nontrivial factor in common with N. By computing the greatest common divisor of this term and N, one gets a non-trivial factor of N.
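The classical post-processing step is elementary. This sketch uses the toy values N = 15, a = 7 (illustrative assumptions), whose order r = 4 is even with a^{r/2} ≠ ±1 mod N:

```python
from math import gcd

N, a = 15, 7
r = 4                         # order of 7 modulo 15 (7^4 = 2401 = 1 mod 15)
assert pow(a, r, N) == 1 and r % 2 == 0

half = pow(a, r // 2, N)      # 7^2 = 49 = 4 mod 15, neither 1 nor N-1
assert half not in (1, N - 1)

factor = gcd(half - 1, N)     # gcd(3, 15) = 3, a nontrivial factor of N
assert 1 < factor < N and N % factor == 0
```

Either gcd(a^{r/2} − 1, N) or gcd(a^{r/2} + 1, N) yields a factor; here both do (3 and 5 respectively).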

Furthermore, if N is odd with prime factorisation

N = p1α1p2α2psαs, 

then it can be shown EJ96 that if a is chosen at random such that gcd(a, N) = 1, then the probability that its order modulo N is even and that $a^{r/2}\neq \pm 1\bmod N$ is:

$\Pr(r \mbox{ is even AND } a^{r/2}\neq \pm 1 \bmod N) \ge 1-\frac{1}{2^{s-1}}.$

Thus, combining our estimates of success at each step, with probability greater than or equal to

$\frac{4}{\pi^2} \frac{1}{\ln r} \left( 1 - \frac{1}{2^{s-1}} \right) \ge \frac{2}{\pi^2} \frac{1}{\ln N}$

we find a factor of N. (N.B. by the bound above, the method fails if N is a prime power, N = p^α, but prime powers can be efficiently recognised and factored by classical means.) (Here we have used that N is composite and r < N.) If N is log N = n bits long then by repeating the whole process O(n) times, or by running O(n) computations in parallel via a suitable extension of the quantum factoring network, we can guarantee that we will find a factor of N with a fixed probability greater than $\frac{1}{2}$. This, and the fact that the quantum network family for controlled multiplication modulo some number is uniform and of size O(n^2), tells us that factoring is in the complexity class BQP.

But why should anybody care about efficient factorisation?


Human desire to communicate secretly is at least as old as writing itself and goes back to the beginnings of our civilisation. Methods of secret communication were developed by many ancient societies, including those of Mesopotamia, Egypt, India, and China, but details regarding the origins of cryptology remain unknown Kah67. (The science of secure communication is called cryptology, from the Greek kryptos, hidden, and logos, word. Cryptology embodies cryptography, the art of code-making, and cryptanalysis, the art of code-breaking.)

Originally the security of a cryptosystem or a cipher depended on the secrecy of the entire encrypting and decrypting procedures; however, today we use ciphers for which the algorithm for encrypting and decrypting could be revealed to anybody without compromising their security. In such ciphers a set of specific parameters, called a key, is supplied together with the plaintext as an input to the encrypting algorithm, and together with the cryptogram as an input to the decrypting algorithm Sti95. This can be written as

$\hat E_{k}(P) = C, \; \mathrm{and\; conversely,}\; \hat D_{k}(C)=P,$

where P stands for plaintext, C for cryptotext or cryptogram, k for cryptographic key, and $\hat E$ and $\hat D$ denote an encryption and a decryption operation respectively.

The encrypting and decrypting algorithms are publicly known; the security of the cryptosystem depends entirely on the secrecy of the key, and this key must consist of a randomly chosen, sufficiently long string of bits. Probably the best way to explain this procedure is to have a quick look at the Vernam cipher, also known as the one-time pad Ver26.

If we choose a very simple digital alphabet in which we use only capital letters and some punctuation marks such as

A B C D E ...   ... X Y Z   ? , .

00 01 02 03 04 ...   ... 23 24 25 26 27 28 29

we can illustrate the secret-key encrypting procedure by the following simple example (we refer to the dietary requirements of 007):


In order to obtain the cryptogram (sequence of digits in the bottom row) we add the plaintext numbers (the top row of digits) to the key numbers (the middle row), which are randomly selected from between 0 and 29, and take the remainder after division of the sum by 30, that is, we perform addition modulo 30. For example, the first letter of the message ``S'' becomes the number ``18'' in the plaintext; then we add 18 + 15 = 33, and 33 = 1 × 30 + 3, therefore we get 03 in the cryptogram. The encryption and decryption can be written as $P_i+k_i \pmod{30}=C_i$ and $C_i-k_i \pmod{30}=P_i$ respectively for the symbol at position i.
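The mod-30 arithmetic can be sketched in a few lines of Python. Only the first key digit (15, which maps ``S'' = 18 to 03) is taken from the text; the remaining key digits here are illustrative:

```python
# One-time pad over the 30-symbol digital alphabet
def encrypt(plain, key):
    return [(p + k) % 30 for p, k in zip(plain, key)]

def decrypt(cipher, key):
    return [(c - k) % 30 for c, k in zip(cipher, key)]

plain = [18, 7, 0, 10, 4, 13]       # "SHAKEN" in the digital alphabet
key   = [15, 4, 28, 13, 14, 6]      # random symbols from 0..29 (illustrative)
cipher = encrypt(plain, key)
assert cipher[0] == 3               # 18 + 15 = 33 -> 03, as in the text
assert decrypt(cipher, key) == plain
```

Reusing the key, or using a key shorter than the message, destroys the perfect secrecy discussed next.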

The cipher was invented in 1917 by the American AT&T engineer Gilbert Vernam. It was later shown by Claude Shannon Sha49 that as long as the key is truly random, has the same length as the message, and is never reused, the one-time pad is perfectly secure. So, if we have a truly unbreakable system, what is wrong with classical cryptography?

There is a snag. It is called key distribution. Once the key is established, subsequent communication involves sending cryptograms over a channel, even one which is vulnerable to total passive eavesdropping (e.g. public announcement in mass media). This stage is indeed secure. However, in order to establish the key, two users, who share no secret information initially, must at a certain stage of communication use a reliable and very secure channel. Since interception is a set of measurements performed by an eavesdropper on this channel, however difficult this might be from a technological point of view, in principle any classical key distribution can always be passively monitored, without the legitimate users being aware that any eavesdropping has taken place.

In 1976 Whitfield Diffie and Martin Hellman DH76b proposed an interesting solution to the key distribution problem. It involved two keys: a public key π for encryption and a private key κ for decryption:

$\hat E_\pi(P) = C, \; \mathrm{and\; }\; \hat D_\kappa(C)=P.$

In these systems users do not need to share any private key before they start sending messages to each other. Every user has his own two keys; the public key is publicly announced and the private key is kept secret. Several public-key cryptosystems have been proposed since 1976; here we concentrate our attention on the most popular one, namely RSA RSA78. In fact the techniques were first discovered at CESG in the early 1970s by James Ellis, who called them ``Non-Secret Encryption'' Ell70. In 1973, building on Ellis' idea, C. Cocks designed what we now call RSA Coc73, and in 1974 M. Williamson proposed what is essentially known today as the Diffie-Hellman key exchange protocol.

Suppose that Alice wants to send an RSA encrypted message to Bob. The RSA encryption scheme works as follows:

\begin{description} \item[Key generation] Bob picks randomly two distinct and large prime numbers p and q. We denote n = pq and ϕ = (p − 1)(q − 1). Bob then picks a random integer 1 < e < ϕ that is coprime with ϕ (gcd(e, ϕ) = 1) and computes the inverse d of e modulo ϕ. This inversion can be achieved efficiently using, for instance, the extended Euclidean algorithm for the greatest common divisor HW79. Bob's private key is κ = d and his public key is π = (e, n).

\item[Encryption] Alice obtains Bob's public key π = (e, n) from some sort of yellow pages or an RSA public key directory. Alice then writes her message as a sequence of numbers using, for example, our digital alphabet. This string of numbers is subsequently divided into blocks such that each block, when viewed as a number P, satisfies P < n. Alice encrypts each P as

$C = \hat E_\pi(P) = P^e \bmod n$

and sends the resulting cryptogram to Bob.

\item[Decryption] Receiving the cryptogram C Bob decrypts it by calculating

$\hat D_\kappa(C) = C^d \bmod n = P$

where the last equality will be proved shortly. \end{description}

The mathematics behind RSA is a lovely piece of number theory which goes back to the XVII century, when the French lawyer Pierre de Fermat discovered that if a prime p and a positive integer a are coprime, then

$a^{p-1} = 1 \bmod p.$

The cryptogram $C=P^e \bmod n$ is decrypted by $C^d \bmod n = P^{ed} \bmod n$ because $ed = 1 \bmod \phi$, implying the existence of an integer k such that ed = kϕ + 1 = k(p − 1)(q − 1) + 1. If $P \neq 0 \bmod p$, Fermat's little theorem above implies

$P^{ed} \bmod p = \left(P^{(p-1)}\right)^{k(q-1)}P \bmod p = P \bmod p.$

The above equality holds trivially in the case $P=0\bmod p$. By identical arguments, $P^{ed} \bmod q = P \bmod q$. Since p and q are distinct primes, it follows that

$P^{ed} \bmod n = P.$

For example, let us suppose that Bob's public key is π = (e, n) = (179, 571247). (Needless to say, the number n in this example is far too small to guarantee security; do not try this public key with Bob.) He generated it following the prescription above, choosing p = 773, q = 739 and e = 179. The private key d was obtained by solving $179\, d = 1\bmod 772\times 738$ using the extended Euclidean algorithm, which yields d = 515627. Now if we want to send Bob the encrypted message ``SHAKEN NOT STIRRED'' we first use our digital alphabet to obtain the plaintext, which can be written as the following sequence of six-digit numbers

180700 100413 261314 192618 190817 170403

Then we encipher each block P_i by computing $C_i=P_i^e \bmod n$; e.g. the first block P_1 = 180700 will be enciphered as

$P_1^e \bmod n = 180700^{179} \bmod 571247 = 141072 = C_1,$

and the whole message is enciphered as:

141072 253510 459477 266170 286377 087175

The cryptogram C composed of blocks C_i can be sent over to Bob. He can then decrypt each block using his private key d = 515627; e.g. the first block is decrypted as

$141072^{515627} \bmod 571247 = 180700 = P_1.$
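The arithmetic of this example can be reproduced with Python's built-in three-argument `pow` (modular exponentiation) and its modular inverse form `pow(e, -1, phi)`, available from Python 3.8:

```python
# RSA arithmetic for the worked example in the text
p, q, e = 773, 739, 179
n, phi = p * q, (p - 1) * (q - 1)
assert n == 571247

d = pow(e, -1, phi)                 # extended-Euclid inverse of e modulo phi
assert d == 515627                  # Bob's private key from the text
assert (e * d) % phi == 1

P1 = 180700                         # first plaintext block
C1 = pow(P1, e, n)                  # encryption (the text gives C1 = 141072)
assert pow(C1, d, n) == P1          # decryption recovers the block
```

The round trip P ↦ P^e ↦ P^{ed} = P mod n works for every block P < n, by the argument based on Fermat's little theorem given above.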

In order to recover plaintext P from cryptogram C, an outsider, who knows C, n and e, would have to solve the congruence

$P^e \bmod n = C,$

for example, in our case,

$P_1^{179} \bmod 571247 = 141072.$

Solving such an equation is believed to be a hard computational task for classical computers. So far, no classical algorithm has been found that computes the solution efficiently when n is a large integer (say 200 decimal digits long or more). However, if we know the prime decomposition of n, it is a piece of cake to figure out the private key d: we simply follow the key generation procedure and solve the congruence $ed = 1\bmod (p-1)(q-1)$. This can be done efficiently even when p and q are very large. Thus, in principle, anybody who knows n can find d by factoring n. The security of RSA therefore relies, among other things, on the assumption that factoring large numbers is computationally difficult. In the context of classical computation, such difficulty has never been proved. Worse still, we have seen in Section 8 that there is a quantum algorithm that factors large numbers efficiently. This means that the security of the RSA cryptosystem will be completely compromised if large-scale quantum computation one day becomes practical. In this way, the advent of quantum computation rules out public-key cryptographic schemes commonly used today that are based on the ``difficulty'' of factoring or the ``difficulty'' of another mathematical operation called the discrete logarithm HW79.

On the other hand, quantum computation provides novel techniques to generate a shared private key with perfect confidentiality, regardless of the computational power (classical or quantum) of the adversaries. Such techniques are referred to as quantum key distribution protocols and were proposed independently in the United States (S. Wiesner Wie83, C. H. Bennett and G. Brassard BB84) and in Europe (A. Ekert Eke91). A discussion of quantum key distribution is outside the scope of this lecture.

Conditional quantum dynamics

Quantum gates and quantum networks provide a very convenient language for building any quantum computer or (which is basically the same) quantum multiparticle interferometer. But can we build quantum logic gates?

Single qubit quantum gates are regarded as relatively easy to implement. For example, a typical quantum optical realisation uses atoms as qubits and controls their states with laser light pulses of carefully selected frequency, intensity and duration; any prescribed superposition of two selected atomic states can be prepared this way.

Two-qubit gates are much more difficult to build.

In order to implement two-qubit quantum logic gates it is sufficient, from the experimental point of view, to induce a conditional dynamics of physical bits, i.e. to perform a unitary transformation on one physical subsystem conditioned upon the quantum state of another subsystem,

U = ∣ 0⟩⟨0 ∣ ⊗ U0 + ∣ 1⟩⟨1 ∣ ⊗ U1 + ⋯ + ∣ k⟩⟨k ∣ ⊗ Uk, 

where the projectors refer to quantum states of the control subsystem and the unitary operations Ui are performed on the target subsystem BDEJ95. The simplest non-trivial operation of this sort is probably a conditional phase shift such as B(ϕ), which we used to implement the quantum Fourier transform, and the quantum controlled-NOT (or XOR) gate.
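For instance, the controlled-NOT gate is the conditional unitary with U0 = I and U1 = NOT; a small numpy sketch (not tied to any particular physical realisation) makes the block structure explicit:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])          # NOT gate
P0 = np.array([[1, 0], [0, 0]])         # projector |0><0|
P1 = np.array([[0, 0], [0, 1]])         # projector |1><1|

# U = |0><0| (x) U0 + |1><1| (x) U1, with U0 = I and U1 = X
cnot = np.kron(P0, I) + np.kron(P1, X)

# |10> -> |11>: the target flips only when the control is |1>
ket10 = np.array([0, 0, 1, 0])
print(cnot @ ket10)                     # -> |11> = [0, 0, 0, 1]
```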

Let us illustrate the notion of the conditional quantum dynamics with a simple example. Consider two qubits, e.g. two spins, atoms, single-electron quantum dots, which are coupled via a σz(1)σz(2) interaction (e.g. a dipole-dipole interaction):


The first qubit, with resonant frequency ω1, will act as the control qubit and the second one, with resonant frequency ω2, as the target qubit. Due to the coupling, the resonant frequency for transitions between the states ∣ 0⟩ and ∣ 1⟩ of one qubit depends on the neighbour's state. The resonant frequency for the first qubit becomes ω1 ± Ω depending on whether the second qubit is in state ∣ 0⟩ or ∣ 1⟩. Similarly, the second qubit's resonant frequency becomes ω2 ± Ω depending on the state of the first qubit. Thus a π-pulse at frequency ω2 + Ω causes the transition ∣ 0⟩ ↔ ∣ 1⟩ in the second qubit only if the first qubit is in state ∣ 1⟩. In this way we can implement the quantum controlled-NOT gate.

Decoherence and recoherence

Thus in principle we know how to build a quantum computer; we can start with simple quantum logic gates and try to integrate them together into quantum networks. However, if we keep on putting quantum gates together into networks we will quickly run into some serious practical problems. The more interacting qubits are involved the harder it tends to be to engineer the interaction that would display the quantum interference. Apart from the technical difficulties of working at single-atom and single-photon scales, one of the most important problems is that of preventing the surrounding environment from learning about which computational path was taken in the multi-particle interferometer. This ``welcher Weg'' information can destroy the interference and the power of quantum computing.

Consider the following qubit-environment interaction, known as decoherence Zur91,

∣0, m⟩ ↦ ∣0, m0⟩,   ∣1, m⟩ ↦ ∣1, m1⟩, 

where ∣m⟩ is the initial state and ∣m0⟩, ∣m1⟩ are the two final states of the environment. This is basically a measurement performed by the environment on a qubit. Suppose that in our single qubit interference experiment (see Eqs.~(\ref{eqinterfere})) a qubit in between the two Hadamard transformations is ``watched'' by the environment, which learns whether the qubit is in state ∣0⟩ or ∣1⟩. The evolution of the qubit and the environment after the first Hadamard and the phase gate is described by the following transformation,

$\left| \,0\right\rangle \left| \,m\right\rangle \mapsto \frac{1}{\sqrt{2}}\left( \left| \,0\right\rangle +\left| \,1\right\rangle \right) \left| \,m\right\rangle \mapsto \frac{1}{ \sqrt{2}}(e^{i\phi /2}\left| \,0\right\rangle +e^{-i\phi /2}\left| \,1\right\rangle )\left| \,m\right\rangle.$

We write the decoherence action as

$\frac{1}{\sqrt{2}}(e^{i\frac{\phi }{2}}\left| \,0\right\rangle +e^{-i\frac{ \phi }{2}}\left| \,1\right\rangle )\left| \,m\right\rangle \mapsto \frac{1}{\sqrt{2}}(e^{i\frac{\phi }{2}}\left| \,0\right\rangle \left| \,m_{0}\right\rangle +e^{-i\frac{\phi }{2}}\left| \,1\right\rangle \left| \,m_{1}\right\rangle ).$

The final Hadamard gate generates the output state

$\frac{1}{\sqrt{2}}(e^{i\frac{\phi }{2}}\left| \,0\right\rangle \left| \,m_{0}\right\rangle +e^{-i\frac{\phi }{2}}\left| \,1\right\rangle \left| \,m_{1}\right\rangle )$

$\mapsto \frac{1}{2}|0\rangle\left( e^{i \frac{\phi }{2}\,}\left| \,m_{0}\right\rangle +e^{-i\frac{\phi }{2}}\left| \,m_{1}\right\rangle \right)$

$+ \frac{1}{2} |1\rangle \left( e^{i \frac{\phi }{2}\,}\left| \,m_{0}\right\rangle -e^{-i\frac{\phi }{2}}\left| \,m_{1}\right\rangle \right).$

Taking ∣m0⟩ and ∣m1⟩ to be normalised and ⟨m0∣m1⟩ to be real, we obtain the probabilities P0 and P1:

$$\begin{array}{rcl} P_{0} &=& \frac{1}{2}\left( 1+\langle m_{0}|\,m_{1}\rangle \cos \phi \right), \\ P_{1} &=& \frac{1}{2}\left( 1-\langle m_{0}|\,m_{1}\rangle \cos \phi \right). \end{array}$$

It is instructive to see the effect of decoherence on the qubit alone, when its state is written as a density operator. The decoherence interaction entangles qubits with the environment,

(α∣ 0⟩ + β∣1⟩)∣m⟩ ↦ α∣ 0⟩∣m0⟩ + β∣ 1⟩∣m1⟩.

Rewriting in terms of density operators and tracing over the environment's Hilbert space on both sides, we obtain

$\left( \begin{array}{cc} \left| \alpha \right| ^{2} &amp; \alpha \beta ^{\ast } \\ \alpha ^{\ast }\beta &amp; \left| \beta \right| ^{2} \end{array} \right) \mapsto \left( \begin{array}{cc} \left| \alpha \right| ^{2} &amp; \alpha \beta ^{\ast }\langle m_{0}|\,m_{1}\rangle \\ \alpha ^{\ast }\beta \langle m_{1}|\,m_{0}\rangle &amp; \left| \beta \right| ^{2} \end{array} \right)$

The off-diagonal elements, called coherences by atomic physicists, vanish as ⟨m1∣m0⟩ ↦ 0, which is why this particular interaction with the environment is called decoherence.
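The whole interference experiment with a 'watching' environment is easy to simulate (a numpy sketch; the environment is modelled as a single qubit prepared so that ⟨m0∣m1⟩ equals a chosen real overlap):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def interference_prob(phi, overlap):
    """P(0) for the H--phase--H circuit when the environment
    'watches' the qubit between the two Hadamards."""
    # environment states with <m0|m1> = overlap (real)
    m0 = np.array([1.0, 0.0])
    m1 = np.array([overlap, np.sqrt(1 - overlap**2)])
    phase = np.diag([np.exp(1j*phi/2), np.exp(-1j*phi/2)])

    psi = phase @ H @ np.array([1, 0])       # qubit after H and the phase gate
    # decoherence: |0>|m> -> |0>|m0>,  |1>|m> -> |1>|m1>
    state = psi[0]*np.kron([1, 0], m0) + psi[1]*np.kron([0, 1], m1)
    state = np.kron(H, np.eye(2)) @ state    # final Hadamard on the qubit only
    return np.sum(np.abs(state[:2])**2)      # probability of reading 0

phi, overlap = 0.7, 0.3
expected = 0.5 * (1 + overlap*np.cos(phi))   # P0 from the text
assert np.isclose(interference_prob(phi, overlap), expected)
```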

How does decoherence affect, for example, Deutsch's algorithm? Substituting 0 or π for ϕ in Eq.(\ref{decoh}) we see that we obtain the correct answer only with some probability, which is

$\frac{1+\langle m_{0}|\,m_{1}\rangle }{2}.$

If ⟨m0∣m1⟩ = 0, the perfect decoherence case, then the network outputs 0 or 1 with equal probabilities, i.e. it is useless as a computing device. It is clear that we want to avoid decoherence, or at least diminish its impact on our computing device.

In general when we analyse {\em physically realisable} computations we have to consider errors which are due to the computer-environment coupling, and from the computational complexity point of view we need to assess how these errors scale with the input size n. If the probability of an error in a single run, δ(n), grows exponentially with n, i.e. if δ(n) = 1 − A exp(−αn), where A and α are positive constants, then the randomised algorithm cannot technically be regarded as efficient any more, regardless of how weak the coupling to the environment may be. Unfortunately, the computer-environment interaction leads to just such an unwelcome exponential increase of the error rate with the input size. To see this, consider a register of size n and assume that each qubit decoheres separately,

$$|x\rangle|M\rangle=|x_{n-1}\ldots x_1x_0\rangle|m\rangle\ldots|m\rangle|m\rangle \mapsto |x_{n-1}\ldots x_1x_0\rangle|m_{x_{n-1}}\rangle\ldots|m_{x_1}\rangle|m_{x_0}\rangle=|x\rangle|M_x\rangle,$$

where $x_i \in \{0,1\}$. Then a superposition α∣x⟩ + β∣y⟩ evolves as

(αx⟩ + βy⟩)∣M⟩ ↦ αx⟩∣Mx⟩ + βy⟩∣My⟩, 

but now the scalar product ⟨Mx∣My⟩, which reduces the off-diagonal elements of the density operator of the whole register and which affects the probabilities in the interference experiment, is given by

$\langle M_x|M_y\rangle = \langle m_{x_0}|m_{y_0}\rangle\langle m_{x_1}|m_{y_1}\rangle\ldots\langle m_{x_{n-1}}|m_{y_{n-1}}\rangle,$

which is of the order of

$\langle M_x|M_y\rangle = \langle m_0|m_1\rangle^{H(x,y)},$

where H(x, y) is the Hamming distance between x and y, i.e. the number of binary places in which x and y differ (e.g. the Hamming distance between 101101 and 111101 is 1 because the two binary strings differ only in the second binary place). Hence there are some coherences which disappear as $\langle m_0|m_1\rangle^n$, and therefore in some interference experiments the probability of error may grow exponentially with n.
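The Hamming-distance scaling of the register coherences can be illustrated directly (a Python sketch; the overlap value 0.9 is an arbitrary choice for illustration):

```python
def hamming(x, y):
    """Number of binary places in which x and y differ."""
    return bin(x ^ y).count("1")

assert hamming(0b101101, 0b111101) == 1      # the example from the text

# <M_x|M_y> = <m0|m1>^H(x,y): register coherences decay exponentially
# in the Hamming distance between the two basis states
r = 0.9                                      # assumed single-qubit overlap <m0|m1>
n = 30
print(r ** hamming(0, 2**n - 1))             # ~0.042 for maximally distant strings
```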

It is clear that for quantum computation of any reasonable length to ever be physically feasible it will be necessary to incorporate some efficiently realisable stabilisation scheme to combat the effects of decoherence. Deutsch was the first to discuss this problem. During the Rank Prize Funds Mini-Symposium on Quantum Communication and Cryptography, Broadway, England in 1993 he proposed `recoherence' based on a symmetrisation procedure (for details see BBDEJM). The basic idea is as follows. Suppose we have a quantum system, we prepare it in some initial state ∣Ψi⟩ and we want to implement a prescribed unitary evolution ∣Ψ(t)⟩ or just preserve ∣Ψi⟩ for some period of time t. Now, suppose that instead of a single system we can prepare R copies of ∣Ψi⟩ and subsequently we can project the state of the combined system into the symmetric subspace, i.e. the subspace containing all states which are invariant under any permutation of the sub-systems. The claim is that frequent projections into the symmetric subspace will reduce errors induced by the environment. The intuition behind this concept is based on the observation that a prescribed error-free storage or evolution of the R independent copies starts in the symmetric subspace and should remain in that subspace. Therefore, since the error-free component of any state always lies in the symmetric subspace, upon successful projection it will be unchanged and part of the error will have been removed. Note, however, that the projected state is generally not error-free, since the symmetric subspace contains states which are not of the simple product form ∣Ψ⟩∣Ψ⟩…∣Ψ⟩. Nevertheless it has been shown that the error probability will be suppressed by a factor of 1/R BBDEJM.

More recently projections on symmetric subspaces were replaced by more complicated projections on carefully selected subspaces. These projections, proposed by Shor Sho95, Calderbank and Shor CS96, Steane Ste96 and others EM96,LMPZ96,Got96,CRSS97,KL96, are constructed on the basis of classical error-correcting methods but represent intrinsically new quantum error-correction and stabilisation schemes; they are the subject of much current study.

Let us illustrate the main idea of recoherence by describing a simple method for protecting an unknown state of a single qubit in a noisy quantum register. Consider the following scenario: we want to store in a computer memory one qubit in an unknown quantum state of the form ∣ϕ⟩ = α∣0⟩ + β∣1⟩, and we know that any single qubit which is stored in a register undergoes a decoherence-type entanglement with an environment described by Eq.(\ref{deco}). To see how the state of the qubit is affected by the environment, we calculate the fidelity of the decohered state at time t with respect to the initial state ∣ϕ⟩,

F(t) = ⟨ϕρ(t)∣ϕ⟩, 

where ρ(t) is given by Eq.~(\ref{eqdecohere}). It follows that

$F(t)=|\alpha|^4 + |\beta|^4+2|\alpha|^2 |\beta|^2 \mbox{Re}[\langle m_0(t)|m_1(t) \rangle]\;.$

The expression above depends on the initial state ∣ϕ⟩ and clearly indicates that some states are more vulnerable to decoherence than others. In order to get rid of this dependence we consider the average fidelity, calculated under the assumption that any initial state ∣ϕ⟩ is equally probable. Taking into account the normalisation constraint, the average fidelity is given by

$\bar F(t)=\int_0^1 F(t) \;d\; |\alpha|^2= \frac{1}{3}(2+\mbox{Re}[\langle m_0(t)|m_1(t)\rangle])\;.$

If we assume an exponential-type decoherence, where $\langle m_0(t)|m_1(t)\rangle = e^{-\gamma t}$, the average fidelity takes the simple form

$\bar F(t)=\frac{1}{3}(2+e^{-\gamma t})\;.$

In particular, for times much shorter than the decoherence time td = 1/γ the above fidelity can be approximated as

$\bar F(t)\simeq1-\frac{1}{3}\gamma t +O(\gamma^2 t^2)\;.$
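The average over initial states can be checked numerically: integrating F(t) over x = ∣α∣² with the trapezoidal rule reproduces the closed form (a numpy sketch with γ = 1 assumed for illustration):

```python
import numpy as np

def avg_fidelity(gamma, t, steps=10001):
    """Numerically average F(t) = |a|^4 + |b|^4 + 2|a|^2|b|^2 Re<m0(t)|m1(t)>
    over a uniform distribution of x = |alpha|^2 (trapezoidal rule)."""
    r = np.exp(-gamma * t)               # exponential-type decoherence
    x = np.linspace(0.0, 1.0, steps)     # x = |alpha|^2, so |beta|^2 = 1 - x
    F = x**2 + (1 - x)**2 + 2 * x * (1 - x) * r
    return float(np.sum((F[:-1] + F[1:]) * np.diff(x)) / 2)

gamma, t = 1.0, 0.5
closed_form = (2 + np.exp(-gamma * t)) / 3   # the text's result for bar-F(t)
assert abs(avg_fidelity(gamma, t) - closed_form) < 1e-6
```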

Let us now show how to improve the average fidelity by quantum encoding. Before we place the qubit in the memory register we {\em encode} it: we add two qubits, initially both in state ∣0⟩, to the original qubit and then perform an encoding unitary transformation

$$\begin{array}{rcl} |000\rangle &\mapsto& |\bar 0\bar 0\bar 0\rangle=(|0\rangle+|1\rangle)(|0\rangle+|1\rangle)(|0\rangle+|1\rangle),\\ |100\rangle &\mapsto& |\bar 1\bar 1\bar 1\rangle=(|0\rangle-|1\rangle)(|0\rangle-|1\rangle)(|0\rangle-|1\rangle), \end{array}$$

generating the state $\alpha|\bar 0\bar 0\bar 0\rangle+\beta|\bar 1\bar 1\bar 1\rangle$, where $|\bar 0\rangle = |0\rangle+|1\rangle$ and $|\bar 1\rangle =|0\rangle-|1\rangle$. Now, suppose that only the second stored qubit was affected by decoherence and became entangled with the environment:

$$\alpha (|0\rangle+|1\rangle)(|0\rangle|m_0\rangle+|1\rangle|m_1\rangle)(|0\rangle+|1\rangle) + \beta (|0\rangle-|1\rangle)(|0\rangle|m_0\rangle-|1\rangle|m_1\rangle)(|0\rangle-|1\rangle),$$

which can also be written as

$(\alpha |\bar 0\bar 0\bar 0\rangle +\beta|\bar 1\bar 1\bar 1\rangle)(|m_0\rangle + |m_1\rangle)+ (\alpha |\bar 0\bar 1\bar 0\rangle +\beta|\bar 1\bar 0\bar 1\rangle)(|m_0\rangle - |m_1\rangle).$

The decoding unitary transformation can be constructed using a couple of quantum controlled-NOT gates and the Toffoli gate, thus completing the error-correcting network:


Careful inspection of the network shows that any single phase-flip $|\bar 0\rangle \leftrightarrow |\bar 1\rangle$ will be corrected and the environment will be effectively disentangled from the qubits. In our particular case we obtain

(α∣0⟩ + β∣1⟩) [∣00⟩(∣m0⟩ + ∣m1⟩) + ∣10⟩(∣m0⟩ − ∣m1⟩)].

The two auxiliary outputs carry information about the error syndrome: 00 means no error, 01 means a phase flip occurred in the third qubit, 10 means a phase flip in the second qubit, and 11 signals a phase flip in the first qubit.
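The action of this code is easy to simulate (a numpy sketch; instead of the CNOT-Toffoli network the decoding is done by rotating back with Hadamards and majority-voting the bits, which is equivalent for this purpose):

```python
import numpy as np

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H3 = np.kron(np.kron(H1, H1), H1)        # Hadamard on each of the three qubits

def encode(alpha, beta):
    """Normalised version of the text's code: prepare alpha|000> + beta|111>
    and apply H to each qubit, giving alpha|+++> + beta|--->."""
    psi = np.zeros(8, dtype=complex)
    psi[0b000], psi[0b111] = alpha, beta
    return H3 @ psi

def phase_flip(psi, k):
    """Phase flip (|bar0> <-> |bar1>) on qubit k = 0, 1 or 2."""
    ops = [np.eye(2)] * 3
    ops[k] = np.diag([1.0, -1.0])
    return np.kron(np.kron(ops[0], ops[1]), ops[2]) @ psi

def decode(psi):
    """H on each qubit turns the phase flip into a bit flip; a majority
    vote over the three bits then recovers the logical qubit."""
    psi = H3 @ psi
    logical = np.zeros(2, dtype=complex)
    for idx in range(8):
        ones = bin(idx).count("1")
        logical[int(ones >= 2)] += psi[idx]
    return logical

alpha, beta = 0.6, 0.8j
for k in range(3):                           # a phase flip on ANY single qubit...
    out = decode(phase_flip(encode(alpha, beta), k))
    assert np.allclose(out, [alpha, beta])   # ...is corrected exactly
```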

Thus if only one qubit in the encoded triplet decoheres we can recover the original state perfectly. In reality all three qubits decohere simultaneously and, as a result, only partial recovery of the original state is possible. In this case lengthy but straightforward calculations show that the average fidelity of the reconstructed state after the decoding operation, for an exponential-type decoherence, is

$\bar F_{\mbox{ec}}(t)=\frac{1}{6}[4+3 e^{-\gamma t}-e^{-3\gamma t}]\;.$

For short times this can be written as

$\bar F_{\mbox{ec}}(t)\simeq 1-\frac{1}{2}\gamma^2 t^2 +O(\gamma^3t^3).$

Comparing Eq.~(\ref{fiwec}) with Eq.~(\ref{fidec}), we can easily see that for all times t

$\bar F_{\mbox{ec}}(t)\ge \bar F(t).$
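This inequality can be confirmed numerically for the exponential-type decoherence considered above (a sketch with γ = 1; the encoded fidelity dominates for all t, with the largest gain near ⟨m0∣m1⟩ = 1/√3):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)   # time in units of the decoherence time 1/gamma
u = np.exp(-t)                     # <m0(t)|m1(t)> for gamma = 1

F_bar = (2 + u) / 3                # unencoded qubit
F_ec = (4 + 3*u - u**3) / 6        # with the three-qubit code

# F_ec - F_bar = u(1 - u^2)/6 >= 0 for 0 <= u <= 1: encoding never hurts
assert np.all(F_ec >= F_bar - 1e-12)
assert np.isclose(F_ec[0], 1.0) and np.isclose(F_bar[0], 1.0)   # both start at 1
```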

This is the essence of recoherence via encoding and decoding. There is much more to say (and write) about quantum codes, and the reader should be warned that we have barely scratched the surface of the current activities in quantum error correction, neglecting topics such as group-theoretical ways of constructing good quantum codes Got96,CRSS97, concatenated codes KL96, fault-tolerant quantum computation fault, and many others.

Concluding remarks

Research in quantum computation and in all its possible variations has become vigorously active and any comprehensive review of the field must be obsolete as soon as it is written. Here we have decided to provide only some very basic knowledge, hoping that this will serve as a good starting point to enter the field. Many interesting papers in these and many related areas can be found at the Los Alamos National Laboratory e-print archive and on the web site of the Center for Quantum Computation.


This work was supported in part by the European TMR Research Network ERP-4061PL95-1412, The Royal Society London, and Elsag plc. PH acknowledges the support of the Rhodes Trust.


[KI04] V. Kabanets and R. Impagliazzo, "Derandomizing polynomial identity tests means proving circuit lower bounds", Computational Complexity, 13(1-2), p 1-46, (2004).

[AKS02] M. Agrawal, N. Kayal, and N. Saxena, "PRIMES is in P", IIT Kanpur, Preprint of August 8, 2002,

[LP05] H. W. Lenstra, Jr. and Carl Pomerance, "Primality testing with Gaussian periods", Preliminary version July 20, 2005,

[Sch95] The term was coined by B. Schumacher. See, for example, Phys. Rev. A 51 2738 (1995).

[Deu89] D. Deutsch, Proc.~R.~Soc.~Lond. A 425 73 (1989).

[WZ82] W. K. Wootters and W. H. Zurek, Nature 299 802 (1982).

[BBC95] A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. W. Shor, T. Sleator, J. Smolin and H. Weinfurter, Phys. Rev. A 52 3457 (1995).

[DBE95] D. Deutsch, A. Barenco and A. Ekert, Proc. R. Soc. Lond. A 449 669 (1995).

[BDEJ95] A. Barenco, D. Deutsch, A. Ekert and R. Jozsa, Phys. Rev. Lett. 74 4083 (1995).

[DiV95] D. P. DiVincenzo, Phys. Rev. A 51 1015 (1995).

[Llo95] S. Lloyd, Phys. Rev. Lett. 75 346 (1995).

[HW79] G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers (Oxford University Press, Oxford, 1979).

[Tof81] T. Toffoli, Mathematical Systems Theory 14 13 (1981).

[Pap94] C. H. Papadimitriou, Computational Complexity (Addison-Wesley, 1994).

[Cop94] D. Coppersmith, IBM Research report (1994).

[SS77] R. Solovay and V. Strassen SIAM J. Comp. 6 84 (1977).

[MR95] R. Motwani and P. Raghavan, Randomised Algorithms (Cambridge University Press, 1995).

[Sho94] P. W. Shor, "Algorithms for quantum computation: Discrete logarithms and factoring" Proc. 35th Annual Symposium on the Foundations of Computer Science, p. 124 Edited by S. Goldwasser (IEEE Computer Society Press, Los Alamitos, CA 1994). Expanded version of this paper is available at LANL quant-ph archive.

[Fey82] R. P. Feynman, International Journal of Theoretical Physics 21 467 (1982).

[CEMM98] R. Cleve, A. Ekert, C. Macchiavello and M. Mosca, Proc. R. Soc. Lond. A 454 339 (1998).

[Deu85] D. Deutsch, Proc. R. Soc. Lond. A 400 97 (1985).

[Gro96] L. K. Grover, "A fast quantum mechanical algorithm for database search", Proc. 28th Annual ACM Symposium on the Theory of Computing (STOC'96) p. 212 (ACM, Philadelphia, Pennsylvania, 1996).

[Boy96] M. Boyer, G. Brassard, P. Hoyer, A. Tapp, Proc. of the Workshop on Physics and Computation (PhysComp96) 36 (1996).

[Hog98] T. Hogg, Physica D120 102 (1998).

[Zal99] C. Zalka, Physical Review A60 2746 (1999).

[Kit95] A. Y. Kitaev, LANL quant-ph archive, quant-ph/9511026 (1995).

[VBE96] V. Vedral, A. Barenco and A. Ekert, Phys. Rev. A 54 147 (1996).

[BCDP96] D. Beckman, A. Chari, S. Devabhaktuni and J. Preskill, Phys. Rev. A 54 1034 (1996).

[EJ96] A. Ekert and R. Jozsa, Rev. Mod. Phys. 68 733 (1996).

[Kah67] D. Kahn, The Codebreakers: The Story of Secret Writing, (Macmillan, New York,1967).

[Sti95] D. Stinson, Cryptography: Theory and Practice (CRC Press, 1995).

[Ver26] G. S. Vernam, J. AIEE 45 109 (1926).

[Sha49] C. E. Shannon, Bell Syst. Tech.J. 28 657 (1949).

[DH76b] W. Diffie and M. E. Hellman, IEEE Transactions on Information Theory 22 644 (1976).

[RSA78] R. L. Rivest, A. Shamir and L. M. Adleman, Communication of the ACM 21 120 (1978).

[Ell70] J. H. Ellis, Tech. report Communications-Electronics Security Group, United Kingdom (1970).

[Coc73] C. Cocks, Tech. report Communications-Electronics Security Group, United Kingdom (1973).

[Wie83] S. Wiesner, Sigact News 15, 78 (1983).

[BB84] C. H. Bennett and G. Brassard, Proc. IEEE Int. Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984).

[Eke91] A. Ekert, Phys. Rev. Lett. 67, 661 (1991).

[Zur91] W. H. Zurek, Phys.Today 44 October p.36 (1991).

[BBDEJM] A. Berthiaume, D. Deutsch and R. Jozsa, Proceedings of the Workshop on the Physics and Computation---PhysComp '94, IEEE Computer Society Press, Dallas, Texas (1994); A. Barenco, A. Berthiaume, D. Deutsch, A. Ekert, R. Jozsa and C. Macchiavello, SIAM J. Comput. 26, 1541 (1997).

[Sho95] P. W. Shor, Phys. Rev. A 52, R2493 (1995).

[CS96] R. Calderbank and P. W. Shor, Phys. Rev. A 54, 1098 (1996).

[Ste96] A. Steane, Phys. Rev. Lett. 77, 793 (1996); A. Steane, Proc. R. Soc. Lond. A 452, 2551 (1996).

[EM96] A. Ekert and C. Macchiavello, Phys. Rev. Lett. 77, 2585 (1996).

[LMPZ96] R. Laflamme, C. Miquel, J.P. Paz and W.H. Zurek, Phys. Rev. Lett. 77, 198 (1996).

[Got96] D. Gottesman, Phys. Rev. A 54, 1862 (1996).

[CRSS97] A.R. Calderbank, E.M. Rains, P.W. Shor and N.J.A. Sloane, Phys. Rev. Lett. 78, 405 (1997).

[KL96] E. Knill and R. Laflamme, e-print quant-ph/9608012 (1996).

[fault] P. W. Shor, e-print quant-ph/9605011 (1996); D. P. DiVincenzo and P. W. Shor, Phys. Rev. Lett. 77, 3260 (1996).

[Sol99] R. Solovay, "Lie groups and quantum circuits", preprint (1999).
