# Index of notational and terminological conventions

Quantum information science brings together many communities, so it is only natural that different, and sometimes conflicting, notations are in use. These pages are intended as an alert to such cases; it is a good idea to be aware of them when reading papers in the field.

### Adjoints

In most parts of the physics community the adjoint of an operator $A$ on a Hilbert space (in matrix terms, the complex conjugate of the transpose) is denoted by a "dagger" (i.e., $A^\dagger$) or, in a poor man's version, a "plus" superscript (i.e., $A^+$). In mathematics and mathematical physics it is a "star" (i.e., $A^*$). From the latter usage we have the term C*-algebra.

The confusion is dangerous, because physicists tend to use the star superscript for the complex conjugate of the matrix (without transposing), for which mathematicians mostly use an overline (as for complex numbers).
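The distinction is easy to demonstrate numerically. A minimal sketch using NumPy with a hypothetical matrix (the matrix entries here are arbitrary illustrations, not from the text):

```python
import numpy as np

# A hypothetical non-symmetric 2x2 matrix to illustrate the two operations.
A = np.array([[1 + 2j, 3j],
              [4, 5 - 1j]])

# Adjoint ("dagger" in physics, "star" in mathematics):
# complex conjugate of the transpose.
A_dagger = A.conj().T

# Entrywise complex conjugate WITHOUT transposing
# (physicists' "star", mathematicians' overline).
A_conj = A.conj()

# The two differ whenever A is not symmetric.
print(np.array_equal(A_dagger, A_conj))  # False here
```

Reading `A*` in a paper therefore requires knowing which community the author comes from: the same symbol names `A_dagger` in one tradition and `A_conj` in the other.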

### Scalar products

In physics, Hilbert space scalar products are defined to be conjugate linear in the first factor, i.e., $\langle z\phi,\psi\rangle=\overline{z}\,\langle\phi,\psi\rangle$ for Hilbert space vectors ϕ, ψ and a complex number z. In mathematics the scalar product is usually conjugate linear in the second factor.
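As it happens, NumPy's `np.vdot` follows the physics convention, which makes the difference easy to check numerically. A small sketch with hypothetical vectors:

```python
import numpy as np

phi = np.array([1 + 1j, 2j])
psi = np.array([3, 1 - 1j])
z = 2 + 3j

# np.vdot conjugates its FIRST argument, i.e. it follows the physics
# convention: the scalar product is conjugate linear in the first factor.
lhs = np.vdot(z * phi, psi)
rhs = np.conj(z) * np.vdot(phi, psi)
print(np.isclose(lhs, rhs))  # True
```

Under the mathematicians' convention the conjugation would instead land on a scalar pulled out of the second factor.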

### Entropy

Entropy in thermodynamics is universally denoted by the letter S. Boltzmann used H for the functional of the phase space density, which equals the thermodynamic entropy S in equilibrium (witness his H-theorem, a law of entropy increase). Actually, this was supposed to be a capital Greek Eta, which looks just like a Latin H. The Eta is no longer used in physics because it conflicts with the Hamiltonian H as well as with the enthalpy H (another thermodynamic potential).

The word (with the capital H as standard notation) came to be used in information theory because in 1948 Shannon asked John von Neumann what to call the quantity arising in his new coding theory. Von Neumann, who suggested the name, was fully aware of the notion in statistical physics, having generalized Boltzmann's entropy to his entropy of a quantum state in 1927. Still, some authors distinguish the two, using H for the "Shannon entropy" of a classical probability distribution, and S for the "von Neumann entropy" of a quantum state.

A possible further confusion is the unit: physicists use the natural logarithm in the expression $S(\rho)=-\textrm{tr}(\rho\log\rho)$. A different base for the logarithm introduces a constant factor in this expression, which amounts to a choice of units. Information theory, of course, uses the unit "bit", and hence the base-2 logarithm.
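The unit conversion is just the constant factor log 2. A sketch, assuming NumPy and using the maximally mixed qubit state as a hypothetical example:

```python
import numpy as np

# A hypothetical qubit density matrix: the maximally mixed state.
rho = np.array([[0.5, 0.0],
                [0.0, 0.5]])

eigenvalues = np.linalg.eigvalsh(rho)
p = eigenvalues[eigenvalues > 0]  # 0 log 0 = 0 by convention

S_nats = -np.sum(p * np.log(p))   # physics: natural logarithm
S_bits = -np.sum(p * np.log2(p))  # information theory: base-2 logarithm

# The two differ only by the constant factor log(2);
# the maximally mixed qubit carries exactly one bit of entropy.
print(S_bits)
print(S_nats / np.log(2))
```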

For relative entropy, the quantum analog of the Kullback-Leibler divergence, additional ambiguities arise from the sign and the ordering of the arguments. A standard form is $S(\rho,\sigma)=\textrm{tr}\bigl(\rho(\log\rho-\log\sigma)\bigr)$. However, there are also authors using the opposite ordering of arguments, and the textbook of Bratteli and Robinson has an overall minus sign (to save concavity in ρ).
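The ordering matters because the quantity is not symmetric in its arguments. A sketch of the standard form above, assuming NumPy and two hypothetical full-rank qubit states (the matrix logarithm is taken via eigendecomposition, valid for full-rank states):

```python
import numpy as np

def herm_log(a):
    """Matrix logarithm of a full-rank positive Hermitian matrix,
    via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.log(w)) @ v.conj().T

def relative_entropy(rho, sigma):
    """S(rho, sigma) = tr(rho (log rho - log sigma)), natural logarithm."""
    return np.trace(rho @ (herm_log(rho) - herm_log(sigma))).real

# Two hypothetical full-rank qubit states.
rho = np.array([[0.75, 0.0],
                [0.0, 0.25]])
sigma = np.array([[0.5, 0.0],
                  [0.0, 0.5]])

# Swapping the arguments changes the value, so the ordering
# convention has to be stated.
print(relative_entropy(rho, sigma))
print(relative_entropy(sigma, rho))
```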

### Quantum Fidelity

Fidelity as a distance measure between pure states used to be called "transition probability". For two states given by unit vectors ϕ, ψ it is |⟨ϕ, ψ⟩|². For a pure state (vector ψ) and a mixed state (density matrix ρ) this generalizes to ⟨ψ, ρψ⟩, and for two density matrices ρ, σ it is generalized as the largest transition probability between any two purifications of the given states. According to a theorem by Uhlmann, this leads to the expression

$$F(\rho,\sigma)=\left(\textrm{tr}\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right)^2$$

This is precisely the expression used in [1], where the term fidelity appears to have been used first.

However, one can also start from |⟨ϕ, ψ⟩|, leading to the alternative

$$F(\rho,\sigma)=\textrm{tr}\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}$$

used in [2]. This second quantity is sometimes denoted by $\sqrt{F}$ and called the square-root fidelity. It has no interpretation as a probability, but appears in some estimates in a simpler way.
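The two conventions are easy to compare numerically. A sketch assuming NumPy, with hypothetical states chosen so that σ is the pure state |+⟩⟨+| (the matrix square root is taken via eigendecomposition):

```python
import numpy as np

def psd_sqrt(a):
    """Square root of a positive semidefinite Hermitian matrix,
    via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)  # clip tiny negative rounding errors
    return (v * np.sqrt(w)) @ v.conj().T

def sqrt_fidelity(rho, sigma):
    """tr sqrt(sqrt(rho) sigma sqrt(rho)) -- the convention of [2]."""
    s = psd_sqrt(rho)
    return np.trace(psd_sqrt(s @ sigma @ s)).real

rho = np.array([[0.75, 0.0],
                [0.0, 0.25]])
sigma = np.array([[0.5, 0.5],
                  [0.5, 0.5]])  # the pure state |+><+|

F_sqrt = sqrt_fidelity(rho, sigma)  # convention of [2]
F = F_sqrt ** 2                     # convention of [1], a probability

# For a pure sigma = |psi><psi|, the squared convention reduces
# to <psi, rho psi>, here 0.5.
print(F)
```

Checking which convention a paper uses is worthwhile before comparing numbers: the same pair of states gives F_sqrt here, and its square under the other convention.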

1. R. Jozsa, "Fidelity for mixed quantum states", J. Mod. Opt. 41, 2315 (1994).
2. M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press (2000).