{{Short description|Hypothesis in neuroscience developed by Karl J. Friston}}
 
The '''free energy principle''' is a formal statement that explains how living and non-living systems remain in [[non-equilibrium thermodynamics|non-equilibrium steady-states]] by restricting themselves to a limited number of states.<ref>Ashby, W. R. (1962). [http://csis.pace.edu/~marchese/CS396x/Computing/Ashby.pdf Principles of the self-organizing system].in Principles of Self-Organization: Transactions of the University of Illinois Symposium, H. Von Foerster and G. W. Zopf, Jr. (eds.), Pergamon Press: London, UK, pp. 255–278.</ref> It establishes that systems minimise a free energy function of their internal states, which entail beliefs about hidden states in their environment. The implicit minimisation of [[variational free energy|free energy]] is formally related to [[variational Bayesian methods]] and was originally introduced by [[Karl Friston]] as an explanation for embodied perception in [[neuroscience]],<ref>{{cite journal | last1=Friston | first1=Karl | last2=Kilner | first2=James | last3=Harrison | first3=Lee | title=A free energy principle for the brain | journal=Journal of Physiology-Paris | publisher=Elsevier BV | volume=100 | issue=1–3 | year=2006 | issn=0928-4257 | doi=10.1016/j.jphysparis.2006.10.001 | pmid=17097864 | pages=70–87| s2cid=637885 |url=http://www.fil.ion.ucl.ac.uk/~karl/A%20free%20energy%20principle%20for%20the%20brain.pdf}}</ref> where it is also known as '''active inference'''.
 
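In the variational formulation (a sketch of the standard definition; the notation assumes sensory states <math> s </math>, internal states <math> \mu </math> parameterising a variational density <math> q_\mu(\psi) </math> over hidden states <math> \psi </math>, and a generative model <math> m </math>), the free energy that such systems minimise can be written as expected energy minus entropy:

:<math> F(s, \mu) = \underbrace{E_{q}[-\ln p(s, \psi \mid m)]}_{\text{energy}} - \underbrace{H[q_\mu(\psi)]}_{\text{entropy}} </math>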
== Background ==
The notion that self-organising biological systems – like a cell or brain – can be understood as minimising variational free energy is based upon Helmholtz’s work on unconscious inference  and subsequent treatments in psychology and machine learning. Variational free energy is a function of observations and a probability density over their hidden causes. This variational density is defined in relation to a probabilistic model that generates predicted observations from hypothesized causes. In this setting, free energy provides an approximation to Bayesian model evidence. Therefore, its minimisation can be seen as a Bayesian inference process. When a system actively makes observations to minimise free energy, it implicitly performs active inference and maximises the evidence for its model of the world.
 
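The sense in which free energy approximates Bayesian model evidence can be made explicit with a standard identity from variational Bayes (a sketch in the notation above):

:<math> F(s, \mu) = -\ln p(s \mid m) + D_{\mathrm{KL}}[\, q_\mu(\psi) \,\|\, p(\psi \mid s, m) \,] \;\ge\; -\ln p(s \mid m) </math>

Because the Kullback–Leibler divergence is non-negative, minimising free energy drives the variational density towards the exact posterior while tightening the bound on negative log evidence, which is why the minimisation can be read as approximate Bayesian inference.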
=== Relationship to other theories ===
Active inference is closely related to the good regulator theorem and related accounts of self-organisation, such as self-assembly, pattern formation, autopoiesis and practopoiesis. It addresses the themes considered in cybernetics, synergetics and embodied cognition. Because free energy can be expressed as the expected energy of observations under the variational density minus its entropy, it is also related to the maximum entropy principle. Finally, because the time average of energy is action, the principle of minimum variational free energy is a principle of least action.
 
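One way to make the last claim concrete (a sketch in the notation above): if free action is defined as the time integral of free energy, <math> \bar{F} = \int F(s(t), \mu(t)) \, dt </math>, then minimising free action with respect to a trajectory of internal states is formally a principle of least action, with free energy playing the role of a Lagrangian.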
These diagrams illustrate the partition of states into internal states and hidden or external states that are separated by a Markov blanket comprising sensory and active states. The lower panel shows this partition as it would apply to action and perception in the brain, where active and internal states minimise a free energy functional of sensory states. The ensuing self-organisation of internal states then corresponds to perception, while action couples brain states back to external states. The upper panel shows exactly the same dependencies, but rearranged so that the internal states are associated with the intracellular states of a cell, while the sensory states become the surface states of the cell membrane overlying active states (e.g., the actin filaments of the cytoskeleton).
== Definition ==
The objective is to maximise model evidence or, equivalently, to minimise surprise. This generally involves an intractable marginalisation over hidden states, so surprise is replaced with an upper variational free energy bound. The formulation rests on a Markov blanket (comprising action and sensory states) that separates internal and external states. If internal states and action minimise free energy, then they place an upper bound on the entropy of sensory states.
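This bound can be checked numerically in a toy conjugate Gaussian model (a minimal sketch; the model and all variable names are assumptions for illustration, not part of the article): the free energy has a closed form, never falls below surprise, and touches it when the variational density equals the exact posterior.

<syntaxhighlight lang="python">
import numpy as np

# Toy generative model: psi ~ N(0, 1), s | psi ~ N(psi, 1),
# so the marginal is s ~ N(0, 2). Variational density q(psi) = N(mu, sigma^2).
# Free energy F = E_q[-ln p(s, psi)] - H[q], in closed form for Gaussians.

def free_energy(s, mu, sigma):
    energy = (np.log(2 * np.pi)
              + 0.5 * ((s - mu) ** 2 + sigma ** 2)   # E_q[-ln p(s | psi)]
              + 0.5 * (mu ** 2 + sigma ** 2))        # E_q[-ln p(psi)]
    entropy = 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)
    return energy - entropy

s = 1.3                                              # an observation
surprise = 0.5 * np.log(2 * np.pi * 2) + s ** 2 / 4  # -ln N(s; 0, 2)

# Free energy upper-bounds surprise for any variational density ...
print(free_energy(s, mu=0.0, sigma=1.0) >= surprise)            # True
# ... and equals it when q is the exact posterior N(s/2, 1/2):
print(np.isclose(free_energy(s, mu=s / 2, sigma=np.sqrt(0.5)), surprise))  # True
</syntaxhighlight>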
=== Action and perception ===
Models with minimum free energy provide an accurate explanation of the data under complexity costs (c.f., Occam's razor and more formal treatments of computational cost). Here, complexity is the divergence between the variational density and prior beliefs about hidden states (i.e., the effective degrees of freedom used to explain the data).
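The complexity cost appears explicitly in a second standard rearrangement of free energy (a sketch in the notation above):

:<math> F = \underbrace{D_{\mathrm{KL}}[\, q_\mu(\psi) \,\|\, p(\psi \mid m) \,]}_{\text{complexity}} - \underbrace{E_{q}[\ln p(s \mid \psi, m)]}_{\text{accuracy}} </math>

Minimising free energy therefore maximises the accuracy of the explanation while penalising departures from prior beliefs, which is the formal counterpart of Occam's razor invoked above.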
== Free energy minimisation ==
=== Free energy minimisation and self-organisation ===
=== Free energy minimisation and Bayesian inference ===
Free energy minimisation provides a useful way to formulate normative (Bayes optimal) models of neuronal inference and learning under uncertainty and therefore subscribes to the Bayesian brain hypothesis. The neuronal processes described by free energy minimisation depend on the nature of hidden states: <math> \Psi = X \times \Theta \times \Pi </math> that can comprise time-dependent variables, time-invariant parameters and the precision (inverse variance or temperature) of random fluctuations. Minimising variables, parameters, and precision correspond to inference, learning, and the encoding of uncertainty, respectively.
 
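The correspondence can be spelled out as three coupled gradient schemes (a sketch; the generic pattern is gradient descent on free energy for time-dependent variables and on its time integral, free action, for parameters and precisions):

:<math> \begin{align} \text{inference:} \quad & \dot{\mu}_x = -\partial_{\mu_x} F(s, \mu) \\ \text{learning:} \quad & \Delta\mu_\theta \propto -\partial_{\mu_\theta} \int F(s(t), \mu(t)) \, dt \\ \text{uncertainty:} \quad & \Delta\mu_\pi \propto -\partial_{\mu_\pi} \int F(s(t), \mu(t)) \, dt \end{align} </math>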
=== Free energy minimisation and thermodynamics ===
Usually, the generative models that define free energy are non-linear and hierarchical (like cortical hierarchies in the brain). Special cases of generalised filtering include Kalman filtering, which is formally equivalent to predictive coding – a popular metaphor for message passing in the brain. Under hierarchical models, predictive coding involves the recurrent exchange of ascending (bottom-up) prediction errors and descending (top-down) predictions that is consistent with the anatomy and physiology of sensory and motor systems.
 
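A minimal sketch of this recurrent exchange, for a toy two-level linear Gaussian model (the model and variable names are illustrative assumptions, not drawn from the article): perception is implemented as a gradient descent of an internal estimate on free energy, driven by precision-weighted prediction errors passed between the levels.

<syntaxhighlight lang="python">
# Minimal predictive-coding loop for a two-level linear Gaussian model:
# s = theta * x + noise, with prior x ~ N(eta, 1/pi_x). Perception is a
# gradient descent of the internal estimate mu on free energy, driven by
# precision-weighted prediction errors exchanged between the levels.

theta, eta = 2.0, 0.5    # likelihood gain and prior mean
pi_s, pi_x = 1.0, 1.0    # precisions (inverse variances) at each level
s = 3.0                  # sensory observation
mu, lr = 0.0, 0.05       # internal estimate and integration step

for _ in range(500):
    eps_s = pi_s * (s - theta * mu)     # ascending (bottom-up) prediction error
    eps_x = pi_x * (mu - eta)           # error on the descending (top-down) prediction
    mu += lr * (theta * eps_s - eps_x)  # mu_dot = -dF/dmu

# The fixed point is the exact posterior mean of this Gaussian model:
posterior_mean = (pi_s * theta * s + pi_x * eta) / (pi_s * theta ** 2 + pi_x)
print(mu, posterior_mean)               # both ~1.3
</syntaxhighlight>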
=== Free energy minimisation and information theory ===
In predictive coding, optimising model parameters through a gradient ascent on the time integral of free energy (free action) reduces to associative or Hebbian plasticity and is associated with synaptic plasticity in the brain.
 
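The reduction to an associative rule can be seen from the gradient itself (a sketch, assuming a linear prediction <math> g(\mu) = \theta \mu </math> with sensory precision <math> \pi </math>):

:<math> \Delta\theta \propto -\partial_\theta \int F \, dt = \int \pi \, (s - \theta\mu) \, \mu \; dt </math>

The weight change is the time-integrated product of a (presynaptic) prediction <math> \mu </math> and a (postsynaptic) precision-weighted prediction error, which is Hebbian in form.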
Optimising the precision parameters corresponds to optimising the gain of prediction errors (c.f., Kalman gain). In neurally plausible implementations of predictive coding, this corresponds to optimising the excitability of superficial pyramidal cells and has been interpreted in terms of attentional gain.
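The parallel with the Kalman gain can be made explicit in the simplest static, scalar case (a sketch): fusing a prior <math> N(\eta, \pi_x^{-1}) </math> with an observation <math> s </math> of precision <math> \pi_s </math> gives the posterior mean

:<math> \mu = \eta + \underbrace{\frac{\pi_s}{\pi_s + \pi_x}}_{K} (s - \eta) </math>

so optimising the precisions is exactly optimising the gain <math> K </math> applied to the prediction error <math> s - \eta </math>.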
== Free energy minimisation in neuroscience ==
The debate about top-down versus bottom-up processing has been addressed as a major open problem of attention, and a computational model has succeeded in illustrating the circular nature of the reciprocation between top-down and bottom-up mechanisms. Using an established model of attention, namely SAIM, the authors proposed a model called PE-SAIM, which, in contrast to the standard version, approaches selective attention from a top-down stance. The model takes into account forward prediction errors sent to the same level or the level above, in order to minimise the energy function expressing the difference between the data and its cause, or in other words between the generative model and the posterior. To enhance validity, they also incorporated neural competition between stimuli into the model. A notable feature of this model is that it reformulates the free energy function only in terms of prediction errors during task performance.
=== Perceptual inference and categorisation ===
=== Perceptual learning and memory ===
When gradient descent is applied to action <math> \dot{a} = -\partial_aF(s,\tilde{\mu}) </math>, motor control can be understood in terms of classical reflex arcs that are engaged by descending (corticospinal) predictions. This provides a formalism that generalizes the equilibrium point solution – to the degrees of freedom problem – to movement trajectories.
 
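A minimal sketch of such a reflex arc (toy one-dimensional dynamics, assumed for illustration only): the agent holds a fixed descending prediction about its sensory state, so the only way to descend the free energy gradient is through action, which moves the sensation until it matches the prediction.

<syntaxhighlight lang="python">
# Active-inference "reflex arc" in one dimension (toy dynamics assumed for
# illustration): the agent holds a fixed descending prediction mu about its
# sensory state and cannot revise it, so the only way to descend the free
# energy gradient is through action, which moves the sensation to match
# the prediction.

mu = 1.0            # descending (corticospinal) prediction of the sensory state
pi_s = 1.0          # sensory precision
x, a = 0.0, 0.0     # external state and action
lr = 0.1            # integration step

for _ in range(200):
    s = x + a                 # sensation depends on action
    eps = pi_s * (s - mu)     # sensory prediction error
    a -= lr * eps             # a_dot = -dF/da = -(ds/da) * eps

print(x + a)                  # sensation has been driven to mu (~1.0)
</syntaxhighlight>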
=== Perceptual precision, attention and salience ===
Active inference is related to optimal control by replacing value or cost-to-go functions with prior beliefs about state transitions or flow. This exploits the close connection between Bayesian filtering and the solution to the Bellman equation. However, active inference starts with (priors over) flow <math> f = \Gamma \cdot \nabla V + \nabla \times W </math> that are specified with scalar <math> V(x) </math>  and vector <math> W(x) </math> value functions of state space (c.f., the Helmholtz decomposition).  Here, <math> \Gamma </math> is the amplitude of random fluctuations and cost is <math> c(x) = f \cdot \nabla V + \nabla \cdot \Gamma \cdot V</math>.  The priors over flow <math> p(\tilde{x}\mid m) </math> induce a prior over states <math> p(x\mid m) = \exp (V(x)) </math> that is the solution to the appropriate forward Kolmogorov equations. In contrast, optimal control optimises the flow, given a cost function, under the assumption that <math> W = 0 </math> (i.e., the flow is curl free or has detailed balance). Usually, this entails solving backward Kolmogorov equations.
 
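The claim that the priors over flow induce <math> p(x \mid m) = \exp(V(x)) </math> can be unpacked in the detailed-balance case (a sketch with <math> W = 0 </math> and constant <math> \Gamma </math>): the stationary forward Kolmogorov (Fokker–Planck) condition

:<math> 0 = \nabla \cdot \left( \Gamma \cdot \nabla p - f \, p \right) </math>

is satisfied by <math> p(x) \propto \exp(V(x)) </math>, because <math> \nabla p = p \, \nabla V </math> and <math> f = \Gamma \cdot \nabla V </math> make the probability current vanish.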
== Active inference ==
=== Active inference and optimal control ===
=== Active inference and optimal decision (game) theory ===
=== Active inference and cognitive neuroscience ===
== See also ==