F-divergence
In probability theory, an "f"-divergence is a function "I""f"("P","Q") that measures the difference between two probability distributions "P" and "Q". The divergence is intuitively an average of the function "f" of the odds ratio given by "P" and "Q". These divergences were introduced and studied independently by Csiszár (1967) and Ali and Silvey (1966), and are sometimes known as Csiszár "f"-divergences or Ali-Silvey distances.
Definition
Let "P" and "Q" be two probability distributions over a space Ω such that "P" is
absolutely continuous with respect to "Q". Then, for aconvex function "f" such that "f"(1) = 0, the "f"-divergence of "Q" from "P" is:
If "P" and "Q" are both absolutely continuous with respect to a reference distribution "μ" on Ω then their
probability densities "p" and "q" satisfy "dP = p dμ" and "dQ = q dμ". In this case the "f"-divergence can be written as:
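As a rough numerical illustration of the density form above (not part of the original article), the following minimal Python sketch evaluates an "f"-divergence between two discrete distributions; the helper name f_divergence and the use of NumPy are assumptions made for this example.

import numpy as np

def f_divergence(p, q, f):
    """Evaluate I_f(P, Q) = sum_x q(x) * f(p(x) / q(x)) for discrete
    distributions given as arrays of probabilities p and q.

    Assumes P is absolutely continuous with respect to Q, i.e. q(x) > 0
    wherever p(x) > 0, matching the requirement in the definition.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = q > 0                   # integrate over the support of Q
    ratio = p[mask] / q[mask]      # the likelihood ratio dP/dQ
    return float(np.sum(q[mask] * f(ratio)))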
Instances of "f"-divergences
Many common divergences, such as the KL-divergence, the Hellinger distance, and the total variation distance, are special cases of the "f"-divergence, coinciding with a particular choice of "f". The following table lists many of the common divergences between probability distributions and the "f" function to which they correspond (cf. Liese and Vajda, 2006).
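The original table is not reproduced here; as a hedged sketch, the snippet below shows the standard convex generators "f"("t"), with "t" the likelihood ratio "p"/"q", for three of the divergences named above, plugged into the hypothetical f_divergence helper from the Definition section. Note that factor-of-two conventions for the Hellinger and total variation cases vary across references.

import numpy as np

# Standard generators f(t): each is convex with f(1) = 0.
kl        = lambda t: t * np.log(t)         # Kullback-Leibler divergence KL(P || Q)
hellinger = lambda t: (np.sqrt(t) - 1)**2   # squared Hellinger distance (up to a factor of 2)
tv        = lambda t: 0.5 * np.abs(t - 1)   # total variation distance

p = [0.2, 0.5, 0.3]   # example discrete distributions
q = [0.1, 0.4, 0.5]

print(f_divergence(p, q, kl))
print(f_divergence(p, q, hellinger))
print(f_divergence(p, q, tv))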
External links
* [http://www.renyi.hu/~csiszar/Publications/Information_Theory_and_Statistics:_A_Tutorial.pdf Information Theory and Statistics: A Tutorial]