''This entry was translated by Henry and reviewed by Vicky.''

'''Transfer entropy''' is a [[non-parametric statistics|non-parametric statistic]] measuring the amount of directed (time-asymmetric) transfer of [[information]] between two [[random process]]es.<ref>{{cite journal|last=Schreiber|first=Thomas|title=Measuring information transfer|journal=Physical Review Letters|date=1 July 2000|volume=85|issue=2|pages=461–464|doi=10.1103/PhysRevLett.85.461|pmid=10991308|arxiv=nlin/0001042|bibcode=2000PhRvL..85..461S}}</ref><ref name=Scholarpedia>{{cite encyclopedia|year=2007|title=Granger causality|volume=2|issue=7|pages=1667|last=Seth|first=Anil|encyclopedia=[[Scholarpedia]]|url=http://www.scholarpedia.org/article/Granger_causality|doi=10.4249/scholarpedia.1667|bibcode=2007SchpJ...2.1667S|doi-access=free}}</ref><ref name=Schindler07>{{cite journal|last=Hlaváčková-Schindler|first=Katerina|author2=Palus, M|author3=Vejmelka, M|author4=Bhattacharya, J|title=Causality detection based on information-theoretic approaches in time series analysis|journal=Physics Reports|date=1 March 2007|volume=441|issue=1|pages=1–46|doi=10.1016/j.physrep.2006.12.004|bibcode=2007PhR...441....1H|citeseerx=10.1.1.183.1617}}</ref> Transfer entropy from a process ''X'' to another process ''Y'' is the amount of uncertainty reduced in future values of ''Y'' by knowing the past values of ''X'' given past values of ''Y''. More specifically, if <math>X_t</math> and <math>Y_t</math> for <math>t\in \mathbb{N}</math> denote two random processes and the amount of information is measured using [[Shannon's entropy]], the transfer entropy can be written as:

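:<math> T_{X\rightarrow Y} = H\left( Y_t \mid Y_{t-1:t-L}\right) - H\left( Y_t \mid Y_{t-1:t-L}, X_{t-1:t-L}\right), </math>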
where <math>H(X)</math> is the Shannon entropy of ''X''. The above definition of transfer entropy has been extended to other types of entropy measures, such as [[Rényi entropy]].

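Written out in terms of the processes' transition probabilities, this Shannon form is equivalent to a [[Kullback–Leibler divergence]] between the transition probabilities of ''Y'' with and without the past of ''X'':

:<math> T_{X\rightarrow Y} = \sum p\left(y_t, y_{t-1:t-L}, x_{t-1:t-L}\right) \log \frac{p\left(y_t \mid y_{t-1:t-L}, x_{t-1:t-L}\right)}{p\left(y_t \mid y_{t-1:t-L}\right)}. </math>
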
Transfer entropy reduces to [[Granger causality]] for vector auto-regressive processes. Hence, it is advantageous when the model assumption of Granger causality does not hold, for example in the analysis of non-linear signals. However, it usually requires more samples for accurate estimation.

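For jointly Gaussian processes, for instance, this correspondence is exact (a standard result, stated here in nats): the transfer entropy equals half the Granger causality statistic, the log-ratio of the residual variances of regressing <math>Y_t</math> on its own past alone versus on the pasts of both processes,

:<math> T_{X\rightarrow Y} = \frac{1}{2} \ln \frac{\operatorname{var}\left(\varepsilon_{Y}\right)}{\operatorname{var}\left(\varepsilon_{Y,X}\right)}, </math>

where <math>\varepsilon_{Y}</math> and <math>\varepsilon_{Y,X}</math> denote the residuals of the restricted and full linear autoregressions, respectively.
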
The probabilities in the entropy formula can be estimated using different approaches (binning, nearest neighbors) or, in order to reduce complexity, using a non-uniform embedding.<ref>{{cite journal|last=Montalto|first=A|author2=Faes, L|author3=Marinazzo, D|title=MuTE: A MATLAB Toolbox to Compare Established and Novel Estimators of the Multivariate Transfer Entropy|journal=PLOS ONE|date=Oct 2014|pmid=25314003|doi=10.1371/journal.pone.0109462|volume=9|issue=10|pmc=4196918|page=e109462|bibcode=2014PLoSO...9j9462M}}</ref>

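As an illustration of the binning approach, a minimal histogram estimator for two scalar time series might look like the following sketch; the function name, bin count, and single-lag embedding are illustrative assumptions rather than a reference implementation.

<syntaxhighlight lang="python">
import numpy as np

def transfer_entropy_binned(x, y, n_bins=8, lag=1):
    """Histogram ("binning") estimate of T_{X->Y} in bits,
    using a single past value of each series."""
    # Discretize both series into equal-width bins.
    xb = np.digitize(x, np.histogram_bin_edges(x, bins=n_bins)[1:-1])
    yb = np.digitize(y, np.histogram_bin_edges(y, bins=n_bins)[1:-1])

    y_next, y_past, x_past = yb[lag:], yb[:-lag], xb[:-lag]

    def joint_entropy(*cols):
        # Shannon entropy (in bits) of the empirical joint distribution.
        _, counts = np.unique(np.column_stack(cols), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # T_{X->Y} = H(Y_t | Y_past) - H(Y_t | Y_past, X_past), expanded into
    # joint entropies via H(A|B) = H(A,B) - H(B).
    return (joint_entropy(y_next, y_past) - joint_entropy(y_past)
            - joint_entropy(y_next, y_past, x_past)
            + joint_entropy(y_past, x_past))

# Toy check: Y is a noisy, delayed copy of X, so the estimate of
# T_{X->Y} should clearly exceed the estimate of T_{Y->X}.
rng = np.random.default_rng(seed=1)
x = rng.normal(size=20000)
y = np.concatenate(([0.0], x[:-1])) + 0.5 * rng.normal(size=20000)
print(transfer_entropy_binned(x, y), transfer_entropy_binned(y, x))
</syntaxhighlight>
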
While it was originally defined for [[bivariate analysis]], transfer entropy has been extended to [[Multivariate analysis|multivariate]] forms, either conditioning on other potential source variables<ref>{{cite journal|last=Lizier|first=Joseph|author2=Prokopenko, Mikhail|author3=Zomaya, Albert|title=Local information transfer as a spatiotemporal filter for complex systems|journal=Physical Review E|year=2008|volume=77|issue=2|pages=026110|doi=10.1103/PhysRevE.77.026110|pmid=18352093|arxiv=0809.3275|bibcode=2008PhRvE..77b6110L}}</ref> or considering transfer from a collection of sources,<ref name=Lizier2011>{{cite journal|last=Lizier|first=Joseph|author2=Heinzle, Jakob|author3=Horstmann, Annette|author4=Haynes, John-Dylan|author5=Prokopenko, Mikhail|title=Multivariate information-theoretic measures reveal directed information structure and task relevant changes in fMRI connectivity|journal=Journal of Computational Neuroscience|year=2011|volume=30|issue=1|pages=85–107|doi=10.1007/s10827-010-0271-2|pmid=20799057}}</ref> although these forms again require more samples.

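A common conditional form, for instance, discounts information about <math>Y_t</math> that is already carried by the past of a third process <math>Z</math>:

:<math> T_{X\rightarrow Y \mid Z} = H\left( Y_t \mid Y_{t-1:t-L}, Z_{t-1:t-L}\right) - H\left( Y_t \mid Y_{t-1:t-L}, X_{t-1:t-L}, Z_{t-1:t-L}\right). </math>
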
Transfer entropy has been used for estimation of functional connectivity of neurons and social influence in social networks.

Transfer entropy is a finite version of the [[directed information]], which was defined in 1990 by [[James Massey]]<ref>{{cite journal|last1=Massey|first1=James|title=Causality, Feedback And Directed Information|date=1990|issue=ISITA|citeseerx=10.1.1.36.5688}}</ref> as

:<math>I(X^n\to Y^n) =\sum_{i=1}^n I(X^i;Y_i|Y^{i-1}),</math>

where <math>X^n</math> denotes the vector <math>X_1,X_2,...,X_n</math> and <math>Y^n</math> denotes <math>Y_1,Y_2,...,Y_n</math>. The [[directed information]] plays an important role in characterizing the fundamental limits ([[channel capacity]]) of communication channels with or without feedback<ref>{{cite journal|last1=Permuter|first1=Haim Henry|last2=Weissman|first2=Tsachy|last3=Goldsmith|first3=Andrea J.|title=Finite State Channels With Time-Invariant Deterministic Feedback|journal=IEEE Transactions on Information Theory|date=February 2009|volume=55|issue=2|pages=644–662|doi=10.1109/TIT.2008.2009849|arxiv=cs/0608070}}</ref>
<ref>{{cite journal|last1=Kramer|first1=G.|title=Capacity results for the discrete memoryless network|journal=IEEE Transactions on Information Theory|date=January 2003|volume=49|issue=1|pages=4–21|doi=10.1109/TIT.2002.806135}}</ref> and [[gambling]] with causal side information.<ref>{{cite journal|last1=Permuter|first1=Haim H.|last2=Kim|first2=Young-Han|last3=Weissman|first3=Tsachy|title=Interpretations of Directed Information in Portfolio Theory, Data Compression, and Hypothesis Testing|journal=IEEE Transactions on Information Theory|date=June 2011|volume=57|issue=6|pages=3248–3259|doi=10.1109/TIT.2011.2136270|arxiv=0912.4872}}</ref>

* [[Causality (physics)]]
* [[Structural equation modeling]]
* [[Rubin causal model]]
* [[Mutual information]]

[[Category:Nonparametric statistics]]
[[Category:Entropy and information]]