Binary entropy function
In information theory, the binary entropy function, denoted H(p) or H_b(p), is defined as the entropy of a Bernoulli trial with probability of success "p". Mathematically, the Bernoulli trial is modelled as a random variable "X" that can take on only two values: 0 and 1. The event X = 1 is considered a success and the event X = 0 is considered a failure. (These two events are mutually exclusive and exhaustive.) If Pr(X = 1) = p, then Pr(X = 0) = 1 − p and the entropy of "X" is given by
:H_b(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)
where 0 log 0 is taken to be 0. The logarithms in this formula are usually taken (as shown in the graph) to base 2; see "binary logarithm".

When p = 1/2, the binary entropy function attains its maximum value, 1 bit. This is the case of the unbiased bit, the most common unit of information entropy.

H_b(p) is distinguished from the entropy function H(X) in that the former takes a single scalar as its parameter, whereas the latter takes a distribution or random variable as its argument. For tutorial purposes, in which the reader may not distinguish the appropriate function by its argument, H(p) is often used; however, this could confuse the function with the analogous function related to Rényi entropy, so H_b(p) (with "b" not in italics) should be used to dispel ambiguity.

Derivative
The derivative of the binary entropy function may be expressed as the negative of the logit function:
:\frac{d}{dp} H_b(p) = -\operatorname{logit}_2(p) = -\log_2\left(\frac{p}{1-p}\right)
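The definition and the derivative identity above can be checked numerically. The following is a minimal sketch in Python using only the standard library (the names "binary_entropy" and "neg_logit2" are illustrative, not standard functions):

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy H_b(p) in bits, with 0*log2(0) taken as 0."""
    if p < 0.0 or p > 1.0:
        raise ValueError("p must be in [0, 1]")
    if p == 0.0 or p == 1.0:
        return 0.0  # the 0 log 0 = 0 convention
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def neg_logit2(p: float) -> float:
    """Negative of the base-2 logit: -log2(p / (1 - p))."""
    return -math.log2(p / (1.0 - p))

# Maximum value, 1 bit, is attained at p = 1/2 (the unbiased bit).
assert binary_entropy(0.5) == 1.0

# Central-difference check that dH_b/dp matches the negative logit.
p, h = 0.3, 1e-6
numeric = (binary_entropy(p + h) - binary_entropy(p - h)) / (2.0 * h)
assert abs(numeric - neg_logit2(p)) < 1e-5
```

Computing the entropy in bits corresponds to choosing base-2 logarithms throughout; using math.log instead of math.log2 would give the entropy in nats.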
See also

* Information theory
* Information entropy