A new quantum version of f-divergence. (arXiv:1311.4722v4 [quant-ph] UPDATED)

This paper proposes and studies a new quantum version of the $f$-divergences, a
class of convex functionals of a pair of probability distributions that includes
the Kullback-Leibler divergence, R\'{e}nyi-type relative entropies, and so on. Several
quantum versions are known so far, including the one by Petz. We introduce
another quantum version ($\mathrm{D}_{f}^{\max}$ below), defined as the
solution to an optimization problem: the minimum classical $f$-divergence
necessary to generate a given pair of quantum states. It turns out to be the
largest quantum $f$-divergence. A closed formula for $\mathrm{D}_{f}^{\max}$
is given either when $f$ is operator convex or when one of the states is a pure
state. Also, a concise representation of $\mathrm{D}_{f}^{\max}$ as a pointwise
supremum of linear functionals is given and used to clarify various properties
of the quantity.
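To make the defining optimization concrete, here is a sketch in our own notation (not necessarily the paper's exact formulation): for a convex $f$ (often normalized so that $f(1)=0$), the classical $f$-divergence of probability distributions $p$ and $q$ is
\[
\mathrm{D}_f\left( p\Vert q\right) =\sum_{x}q(x)\,f\!\left( \frac{p(x)}{q(x)}\right) ,
\]
and $\mathrm{D}_{f}^{\max}$ minimizes this over classical pairs that can be turned into the given quantum pair by a common quantum operation (CPTP map) $\Gamma$,
\[
\mathrm{D}_{f}^{\max}\left( \rho\Vert\sigma\right) =\inf\left\{ \mathrm{D}_f\left( p\Vert q\right) :\Gamma(p)=\rho,\ \Gamma(q)=\sigma\right\} .
\]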

Using the closed formula for $\mathrm{D}_{f}^{\max}$, we show the following:
suppose $f$ is operator convex; then the maximum $f$-divergence of the probability
distributions obtained by a measurement under the states $\rho$ and $\sigma$ is strictly
less than $\mathrm{D}_{f}^{\max}\left( \rho\Vert\sigma\right) $. This statement
may seem intuitively trivial, but when $f$ is not operator convex it is not
always true. A counterexample is $f\left( \lambda\right) =\left\vert
1-\lambda\right\vert $, which corresponds to the total variation distance.
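For reference, a short sketch of the quantities involved, again in our own notation: for a POVM $M=\{M_{x}\}$, write $P_{\rho}^{M}(x)=\mathrm{Tr}\,\rho M_{x}$; the measured quantity in the statement above is $\sup_{M}\mathrm{D}_f\left( P_{\rho}^{M}\Vert P_{\sigma}^{M}\right) $. For the counterexample $f\left( \lambda\right) =\left\vert 1-\lambda\right\vert $, the classical $f$-divergence is
\[
\mathrm{D}_{f}\left( p\Vert q\right) =\sum_{x}q(x)\left\vert 1-\frac{p(x)}{q(x)}\right\vert =\sum_{x}\left\vert p(x)-q(x)\right\vert ,
\]
i.e. (up to a factor of two) the total variation distance, which is the correspondence mentioned above.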

We mostly work on finite dimensional Hilbert spaces, but some results are
extended to the infinite dimensional case.
