Classical information

Classical information was first defined rigorously by [[Claude Shannon]]. The information content of a message is measured by how much communication is needed to convey it: roughly speaking, if one has a list of possible messages one might want to convey, then the information of the messages is how much communication is required to tell someone which message from the list one wishes to communicate. We denote the messages by a [[random variable]] X, taking values in a list \{x_1, x_2, x_3, \ldots, x_m\}, each occurring with probability \{p(x_1), p(x_2), p(x_3), \ldots, p(x_m)\}. We denote this probability distribution by P_X. Shannon showed that the number of [[bits]] needed to convey which message occurs is given by the [[Shannon entropy]] H(X) = -\sum_x p(x) \log p(x). Essentially, he showed that if one has n independent messages drawn from P_X, then one can [[compress]] them onto a space of dimension just over 2^{nH(X)}, i.e. just over nH(X) bits, such that with high probability the information is conveyed faithfully.

==Reference==

* C. E. Shannon, [http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf A mathematical theory of communication], ''[[Bell System Technical Journal]]'', vol. 27, pp. 379–423 and 623–656, July and October 1948.

[[Category:Classical Information Theory]]
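The following is a minimal Python sketch of the entropy formula and compression estimate above; the two-message distribution (probabilities 0.9 and 0.1) and the block length n = 1000 are arbitrary illustrative choices, not values from Shannon's paper.

<pre>
import math

def shannon_entropy(probs):
    """Entropy in bits, H(X) = -sum p(x) log2 p(x), of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical two-message source: x_1 with probability 0.9, x_2 with probability 0.1.
P_X = [0.9, 0.1]
H = shannon_entropy(P_X)
print(f"H(X) = {H:.3f} bits per message")  # about 0.469 bits

# For n independent messages the typical sequences number roughly 2^(n*H(X)),
# so about n*H(X) bits suffice on average instead of n.
n = 1000
print(f"{n} messages compress to about {n * H:.0f} bits instead of {n}")
</pre>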