Joint entropy
[Figure: Venn diagram of the additive and subtractive relationships among information measures for correlated variables X and Y. The area contained by both circles is the joint entropy H(X,Y). The left circle (red and violet) is the individual entropy H(X), with the red part being the conditional entropy H(X|Y). The right circle (blue and violet) is H(Y), with the blue part being H(Y|X). The violet overlap is the mutual information I(X;Y).]
In information theory, joint entropy is a measure of the uncertainty associated with a set of variables.[1]
Definition
The joint Shannon entropy (in bits) of two discrete random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] with images [math]\displaystyle{ \mathcal X }[/math] and [math]\displaystyle{ \mathcal Y }[/math] is defined as[2]:16
[math]\displaystyle{ \Eta(X,Y) = -\sum_{x\in\mathcal X} \sum_{y\in\mathcal Y} P(x,y) \log_2[P(x,y)] }[/math]    (Eq.1)
where [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] are particular values of [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math], respectively, [math]\displaystyle{ P(x,y) }[/math] is the joint probability of these values occurring together, and [math]\displaystyle{ P(x,y) \log_2[P(x,y)] }[/math] is defined to be 0 if [math]\displaystyle{ P(x,y)=0 }[/math].
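To make Eq.1 concrete, the double sum can be evaluated directly from a joint probability table. A minimal Python sketch (the joint distribution p_xy below is a hypothetical example, not taken from the cited sources):

```python
import math

# Hypothetical joint distribution P(x, y) of two binary variables.
p_xy = {
    (0, 0): 0.25,
    (0, 1): 0.25,
    (1, 0): 0.25,
    (1, 1): 0.25,
}

def joint_entropy(p):
    """Joint Shannon entropy in bits (Eq.1); 0*log(0) terms are dropped."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

print(joint_entropy(p_xy))  # 2.0 bits for this uniform joint distribution
```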
For more than two random variables [math]\displaystyle{ X_1, ..., X_n }[/math] this expands to
[math]\displaystyle{ \Eta(X_1, ..., X_n) = -\sum_{x_1 \in\mathcal X_1} ... \sum_{x_n \in\mathcal X_n} P(x_1, ..., x_n) \log_2[P(x_1, ..., x_n)] }[/math]    (Eq.2)
where [math]\displaystyle{ x_1,...,x_n }[/math] are particular values of [math]\displaystyle{ X_1,...,X_n }[/math], respectively, [math]\displaystyle{ P(x_1, ..., x_n) }[/math] is the probability of these values occurring together, and [math]\displaystyle{ P(x_1, ..., x_n) \log_2[P(x_1, ..., x_n)] }[/math] is defined to be 0 if [math]\displaystyle{ P(x_1, ..., x_n)=0 }[/math].
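The n-fold sum in Eq.2 is conveniently evaluated by storing the joint pmf as an n-dimensional array. A sketch under that assumption (the uniform 2×2×2 pmf is a hypothetical example):

```python
import numpy as np

# Hypothetical joint pmf of three binary variables, stored as a 2x2x2 array.
p = np.full((2, 2, 2), 1 / 8)

def joint_entropy_nd(p):
    """Eq.2 in bits; entries with probability 0 contribute nothing."""
    q = p[p > 0]
    return -np.sum(q * np.log2(q))

print(joint_entropy_nd(p))  # 3.0 bits: a uniform pmf over 8 outcomes
```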
Properties
Nonnegativity
The joint entropy of a set of random variables is a nonnegative number.
- [math]\displaystyle{ \Eta(X,Y) \geq 0 }[/math]
- [math]\displaystyle{ \Eta(X_1,\ldots, X_n) \geq 0 }[/math]
Greater than individual entropies
The joint entropy of a set of variables is greater than or equal to the maximum of all of the individual entropies of the variables in the set.
- [math]\displaystyle{ \Eta(X,Y) \geq \max \left[\Eta(X),\Eta(Y) \right] }[/math]
- [math]\displaystyle{ \Eta \bigl(X_1,\ldots, X_n \bigr) \geq \max_{1 \le i \le n} \Bigl\{ \Eta\bigl(X_i\bigr) \Bigr\} }[/math]
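A quick numeric check of this bound (the 2×2 joint pmf is a hypothetical example; the marginals are obtained by summing over rows and columns):

```python
import numpy as np

p = np.array([[0.4, 0.1],
              [0.1, 0.4]])   # hypothetical joint pmf of X (rows) and Y (columns)

def H(p):
    q = p[p > 0]
    return -np.sum(q * np.log2(q))

h_xy = H(p)                  # joint entropy H(X,Y), about 1.72 bits
h_x = H(p.sum(axis=1))       # marginal entropy H(X) = 1 bit
h_y = H(p.sum(axis=0))       # marginal entropy H(Y) = 1 bit
assert h_xy >= max(h_x, h_y)
print(h_xy, h_x, h_y)
```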
Less than or equal to the sum of individual entropies
The joint entropy of a set of variables is less than or equal to the sum of the individual entropies of the variables in the set. This is an example of subadditivity. This inequality is an equality if and only if [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are statistically independent.[2]:30
- [math]\displaystyle{ \Eta(X,Y) \leq \Eta(X) + \Eta(Y) }[/math]
- [math]\displaystyle{ \Eta(X_1,\ldots, X_n) \leq \Eta(X_1) + \ldots + \Eta(X_n) }[/math]
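The equality case is easy to verify numerically: when the joint pmf factors as the outer product of the marginals (independence), H(X,Y) equals H(X) + H(Y). A sketch with hypothetical marginals:

```python
import numpy as np

def H(p):
    q = p[p > 0]
    return -np.sum(q * np.log2(q))

px = np.array([0.7, 0.3])    # hypothetical marginal of X
py = np.array([0.6, 0.4])    # hypothetical marginal of Y
p_indep = np.outer(px, py)   # joint pmf of independent X and Y

print(np.isclose(H(p_indep), H(px) + H(py)))  # True: subadditivity is tight
```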
Relations to other entropy measures
Joint entropy is used in the definition of conditional entropy[2]:22
- [math]\displaystyle{ \Eta(X|Y) = \Eta(X,Y) - \Eta(Y)\, }[/math],
and, by the chain rule, [math]\displaystyle{ \Eta(X_1,\dots,X_n) = \sum_{k=1}^n \Eta(X_k|X_{k-1},\dots, X_1) }[/math]. It is also used in the definition of mutual information[2]:21
- [math]\displaystyle{ \operatorname{I}(X;Y) = \Eta(X) + \Eta(Y) - \Eta(X,Y)\, }[/math]
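Both identities are simple rearrangements of the quantities defined above and can be checked numerically. A sketch with a hypothetical joint pmf:

```python
import numpy as np

p = np.array([[0.3, 0.2],
              [0.1, 0.4]])   # hypothetical joint pmf of X (rows) and Y (columns)

def H(p):
    q = p[p > 0]
    return -np.sum(q * np.log2(q))

h_xy = H(p)
h_x, h_y = H(p.sum(axis=1)), H(p.sum(axis=0))
h_x_given_y = h_xy - h_y     # conditional entropy H(X|Y), about 0.88 bits
mi = h_x + h_y - h_xy        # mutual information I(X;Y), about 0.12 bits
print(h_x_given_y, mi)
```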
In quantum information theory, the joint entropy is generalized into the joint quantum entropy.
Applications
A Python package for computing all multivariate joint entropies, mutual informations, conditional mutual informations, total correlations, and information distances in a dataset of n variables is available.[3]
Joint differential entropy
Definition
The above definition is for discrete random variables; the analogous quantity for continuous random variables is called joint differential (or continuous) entropy. Let [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] be continuous random variables with a joint probability density function [math]\displaystyle{ f(x,y) }[/math]. The joint differential entropy [math]\displaystyle{ h(X,Y) }[/math] is defined as[2]:249
[math]\displaystyle{ h(X,Y) = -\int_{\mathcal X , \mathcal Y} f(x,y)\log f(x,y)\,dx\,dy }[/math]    (Eq.3)
For more than two continuous random variables [math]\displaystyle{ X_1, ..., X_n }[/math] the definition is generalized to:
[math]\displaystyle{ h(X_1, \ldots,X_n) = -\int f(x_1, \ldots,x_n)\log f(x_1, \ldots,x_n)\,dx_1 \ldots dx_n }[/math]    (Eq.4)
The integral is taken over the support of [math]\displaystyle{ f }[/math]. The integral may fail to exist, in which case the joint differential entropy is not defined.
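As an illustration (not from the cited text): for a bivariate Gaussian with covariance matrix Σ, Eq.3 has the closed form h(X,Y) = ½ log₂((2πe)² det Σ) bits, which direct numerical integration reproduces. A sketch assuming SciPy's multivariate_normal and dblquad, with a hypothetical covariance matrix:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.stats import multivariate_normal

cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])   # hypothetical covariance matrix
rv = multivariate_normal(mean=[0.0, 0.0], cov=cov)

# Closed form: h(X,Y) = 0.5 * log2((2*pi*e)^2 * det(cov)) bits
closed_form = 0.5 * np.log2((2 * np.pi * np.e) ** 2 * np.linalg.det(cov))

# Eq.3 integrated numerically over a box carrying almost all of the mass
def integrand(y, x):
    fxy = rv.pdf([x, y])
    return -fxy * np.log2(fxy)

numeric, _ = dblquad(integrand, -8, 8, lambda x: -8, lambda x: 8)
print(closed_form, numeric)    # both about 3.89 bits
```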
Properties
As in the discrete case, the joint differential entropy of a set of random variables is less than or equal to the sum of the entropies of the individual random variables:
- [math]\displaystyle{ h(X_1,X_2, \ldots,X_n) \le \sum_{i=1}^n h(X_i) }[/math][2]:253
The following chain rule holds for two random variables:
- [math]\displaystyle{ h(X,Y) = h(X|Y) + h(Y) }[/math]
In the case of more than two random variables this generalizes to:[2]:253
- [math]\displaystyle{ h(X_1,X_2, \ldots,X_n) = \sum_{i=1}^n h(X_i|X_1,X_2, \ldots,X_{i-1}) }[/math]
Joint differential entropy is also used in the definition of the mutual information between continuous random variables:
- [math]\displaystyle{ \operatorname{I}(X;Y)=h(X)+h(Y)-h(X,Y) }[/math]
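For jointly Gaussian X and Y with unit variances and correlation coefficient ρ, these differential entropies give the well-known closed form I(X;Y) = −½ log₂(1 − ρ²). A sketch under that assumption (ρ = 0.5 is a hypothetical choice):

```python
import numpy as np

rho = 0.5                                  # hypothetical correlation coefficient
h_x = 0.5 * np.log2(2 * np.pi * np.e)      # h(X) of a standard Gaussian, in bits
h_y = 0.5 * np.log2(2 * np.pi * np.e)      # h(Y)
h_xy = 0.5 * np.log2((2 * np.pi * np.e) ** 2 * (1 - rho ** 2))  # joint entropy

mi = h_x + h_y - h_xy                      # I(X;Y) = h(X) + h(Y) - h(X,Y)
print(mi, -0.5 * np.log2(1 - rho ** 2))    # both about 0.2075 bits
```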
References
- ↑ Theresa M. Korn; Granino Arthur Korn. Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review. New York: Dover Publications. ISBN 0-486-41147-8.
- ↑ 2.0 2.1 2.2 2.3 2.4 2.5 2.6 Thomas M. Cover; Joy A. Thomas. Elements of Information Theory. Hoboken, New Jersey: Wiley. ISBN 0-471-24195-4.
- ↑ "InfoTopo: Topological Information Data Analysis. Deep statistical unsupervised and supervised learning - File Exchange - Github". github.com/pierrebaudot/infotopopy/. Retrieved 26 September 2020.