Transfer entropy is a non-parametric statistic measuring the amount of directed (time-asymmetric) transfer of information between two random processes. Transfer entropy from a process X to another process Y is the reduction in uncertainty about future values of Y obtained from knowing the past values of X, given the past values of Y. More specifically, if <math>X_t</math> and <math>Y_t</math> for <math>t\in \mathbb{N}</math> denote two random processes and the amount of information is measured using Shannon's entropy, the transfer entropy can be written as:
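:<math>T_{X\rightarrow Y} = H\left(Y_t \mid Y_{t-1:t-L}\right) - H\left(Y_t \mid Y_{t-1:t-L},\, X_{t-1:t-L}\right),</math>

where <math>H(X)</math> is the Shannon entropy of <math>X</math> and <math>L</math> is the length of the conditioning history.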
Transfer entropy is conditional mutual information, with the history of the influenced variable <math>Y_{t-1:t-L}</math> in the condition:
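:<math>T_{X\rightarrow Y} = I\left(Y_t \,;\, X_{t-1:t-L} \mid Y_{t-1:t-L}\right).</math>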
Transfer entropy reduces to Granger causality for vector auto-regressive processes. Hence, it is advantageous when the model assumption of Granger causality does not hold, for example, in the analysis of non-linear signals. However, it usually requires more samples for accurate estimation.
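In particular, for jointly Gaussian processes the two notions are equivalent up to a factor of two, with <math>T_{X\rightarrow Y} = \tfrac{1}{2} F_{X\rightarrow Y}</math> when both are measured in nats, where <math>F_{X\rightarrow Y}</math> denotes the Granger causality statistic.<ref>{{cite journal|last1=Barnett|first1=Lionel|last2=Barrett|first2=Adam B.|last3=Seth|first3=Anil K.|title=Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables|journal=Physical Review Letters|date=December 2009|volume=103|issue=23|page=238701|doi=10.1103/PhysRevLett.103.238701}}</ref>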
The probabilities in the entropy formula can be estimated using different approaches (binning, nearest neighbors) or, in order to reduce complexity, using a non-uniform embedding.<ref>{{cite journal|last=Montalto|first=A|author2=Faes, L|author3=Marinazzo, D|title=MuTE: A MATLAB Toolbox to Compare Established and Novel Estimators of the Multivariate Transfer Entropy.|journal=PLOS ONE|date=Oct 2014|pmid=25314003|doi=10.1371/journal.pone.0109462|volume=9|issue=10|pmc=4196918|page=e109462|bibcode=2014PLoSO...9j9462M}}</ref>
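
As a concrete illustration of the binning approach, the following minimal Python sketch estimates <math>T_{X\rightarrow Y}</math> with a history length of one by plug-in estimation over equal-width bins. It is a sketch under those assumptions, not code from the toolbox cited above; the function name and binning scheme are illustrative choices.

<syntaxhighlight lang="python">
import numpy as np
from collections import Counter

def transfer_entropy_binned(x, y, bins=8):
    """Plug-in estimate of T(X -> Y) in bits with history length L = 1,
    after discretizing both series into equal-width bins."""
    # Integer bin labels 0..bins-1 for each sample.
    xd = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins=bins)[1:-1])

    n = len(yd) - 1                                  # number of usable triples
    c_typ = Counter(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_t, y_{t-1}, x_{t-1})
    c_pp  = Counter(zip(yd[:-1], xd[:-1]))           # (y_{t-1}, x_{t-1})
    c_tp  = Counter(zip(yd[1:], yd[:-1]))            # (y_t, y_{t-1})
    c_p   = Counter(yd[:-1])                         # y_{t-1}

    # T = sum over triples of p(y_t, y_{t-1}, x_{t-1}) * log2 of the ratio
    # p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1}); written with raw counts,
    # the sample-size normalizations cancel inside the logarithm.
    te = 0.0
    for (yt, yp, xp), c in c_typ.items():
        te += (c / n) * np.log2(c * c_p[yp] / (c_pp[(yp, xp)] * c_tp[(yt, yp)]))
    return te

# Toy check: y is a noisy copy of the previous x, so the estimate of
# T(X -> Y) should be clearly positive while T(Y -> X) stays near zero.
rng = np.random.default_rng(0)
x = rng.normal(size=20000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=20000)
print(transfer_entropy_binned(x, y))  # noticeably positive
print(transfer_entropy_binned(y, x))  # close to zero
</syntaxhighlight>

Plug-in estimates of this kind are biased upward on finite samples, which is one reason nearest-neighbor and non-uniform-embedding estimators are often preferred in practice.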
<math>I(X^n\to Y^n) =\sum_{i=1}^n I(X^i;Y_i|Y^{i-1})</math>, where <math>X^n</math> denotes the vector <math>X_1,X_2,...,X_n</math> and <math>Y^n</math> denotes <math>Y_1,Y_2,...,Y_n</math>. The [[directed information]] plays an important role in characterizing the fundamental limits ([[channel capacity]]) of communication channels with or without feedback<ref>{{cite journal|last1=Permuter|first1=Haim Henry|last2=Weissman|first2=Tsachy|last3=Goldsmith|first3=Andrea J.|title=Finite State Channels With Time-Invariant Deterministic Feedback|journal=IEEE Transactions on Information Theory|date=February 2009|volume=55|issue=2|pages=644–662|doi=10.1109/TIT.2008.2009849|arxiv=cs/0608070}}</ref>
<ref>{{cite journal|last1=Kramer|first1=G.|title=Capacity results for the discrete memoryless network|journal=IEEE Transactions on Information Theory|date=January 2003|volume=49|issue=1|pages=4–21|doi=10.1109/TIT.2002.806135}}</ref> and [[gambling]] with causal side information.<ref>{{cite journal|last1=Permuter|first1=Haim H.|last2=Kim|first2=Young-Han|last3=Weissman|first3=Tsachy|title=Interpretations of Directed Information in Portfolio Theory, Data Compression, and Hypothesis Testing|journal=IEEE Transactions on Information Theory|date=June 2011|volume=57|issue=6|pages=3248–3259|doi=10.1109/TIT.2011.2136270|arxiv=0912.4872}}</ref>
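
To make the definition above concrete, here is a minimal Python sketch that evaluates <math>I(X^n\to Y^n)</math> exactly from an explicitly given joint distribution over pairs of length-<math>n</math> sequences. All names and the dictionary-based pmf representation are illustrative choices, not drawn from the cited works.

<syntaxhighlight lang="python">
from collections import defaultdict
from itertools import product
from math import log2

def directed_information(pmf, n):
    """Exact I(X^n -> Y^n) = sum_i I(X^i; Y_i | Y^{i-1}) in bits, for a joint
    pmf given as {(x_sequence, y_sequence): probability} over length-n tuples."""
    def marginal(key):
        # Sum the joint pmf over everything the key function discards.
        m = defaultdict(float)
        for (xs, ys), p in pmf.items():
            m[key(xs, ys)] += p
        return m

    di = 0.0
    for i in range(1, n + 1):
        p_xy  = marginal(lambda xs, ys: (xs[:i], ys[:i]))    # p(x^i, y^i)
        p_xyp = marginal(lambda xs, ys: (xs[:i], ys[:i-1]))  # p(x^i, y^{i-1})
        p_y   = marginal(lambda xs, ys: ys[:i])              # p(y^i)
        p_yp  = marginal(lambda xs, ys: ys[:i-1])            # p(y^{i-1})
        # I(X^i; Y_i | Y^{i-1}) as a sum over the support of p(x^i, y^i).
        for (xi, yi), p in p_xy.items():
            di += p * log2(p * p_yp[yi[:-1]] / (p_xyp[(xi, yi[:-1])] * p_y[yi]))
    return di

# Toy check: X is i.i.d. uniform bits and Y copies X with one step of delay
# (Y_1 = 0, Y_i = X_{i-1}), so every bit of Y after the first is caused by X
# and I(X^n -> Y^n) = n - 1 bits.
n = 3
pmf = {}
for xs in product((0, 1), repeat=n):
    ys = (0,) + xs[:-1]
    pmf[(xs, ys)] = 2.0 ** -n
print(directed_information(pmf, n))  # 2.0
</syntaxhighlight>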

== See also ==
* [[Conditional mutual information]]
* [[Causality]]
* [[Causality (physics)]]
* [[Structural equation modeling]]
* [[Rubin causal model]]
* [[Mutual information]]

== References ==
{{Reflist|2}}

== External links ==
* {{cite web|title=Transfer Entropy Toolbox|url=http://code.google.com/p/transfer-entropy-toolbox/|publisher=[[Google Code]]}}, a toolbox, developed in [[C++]] and [[MATLAB]], for computation of transfer entropy between spike trains.