Changes: added 1,778 bytes, 17:04, 29 December 2020 (Tue)
[[Image:MarokovBlanketFreeEnergyFigure.jpg|500px|right|These schematics illustrate the partition of states into internal and hidden or external states that are separated by a Markov blanket – comprising sensory and active states. The lower panel shows this partition as it would be applied to action and perception in the brain, where active and internal states minimise a free energy functional of sensory states. The ensuing self-organisation of internal states then corresponds to perception, while action couples brain states back to external states. The upper panel shows exactly the same dependencies but rearranged so that the internal states are associated with the intracellular states of a cell, while the sensory states become the surface states of the cell membrane overlying active states (e.g., the actin filaments of the cytoskeleton).]]
'''Definition''' (continuous formulation): Active inference rests on the tuple <math>(\Omega,\Psi,S,A,R,q,p)</math>,
    
* ''A sample space'' <math>\Omega</math> – from which random fluctuations <math>\omega \in \Omega</math> are drawn
 
* ''Hidden or external states'' <math>\Psi:\Psi\times A \times \Omega \to \mathbb{R}</math> – that cause sensory states and depend on action
 
* ''Sensory states'' <math>S:\Psi \times A \times \Omega \to \mathbb{R}</math> – a probabilistic mapping from action and hidden states  
 
* ''Action'' <math>A:S\times R \to \mathbb{R}</math> – that depends on sensory and internal states  
 
* ''Internal states'' <math>R:R\times S \to \mathbb{R}</math> – that cause action and depend on sensory states
 
* ''Generative density'' <math>p(s, \psi \mid m)</math> – over sensory and hidden states under a generative model  <math>m</math>
 
* ''Variational density'' <math>q(\psi \mid \mu)</math> – over hidden states <math>\psi \in \Psi</math> that is parameterised by internal states <math>\mu \in R</math>
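The tuple above can be made concrete with a minimal sketch. Assume (for illustration only) a one-dimensional conjugate-Gaussian instance: prior p(ψ) = N(0, 1), likelihood p(s|ψ) = N(ψ, σ²), and Gaussian variational density q(ψ|μ) = N(μ, τ²); all numerical values and the function name are illustrative choices, not part of the formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative generative model m over hidden and sensory states:
#   p(psi | m)    = N(0, 1)          -- prior over hidden states Psi
#   p(s | psi, m) = N(psi, sigma2)   -- sensory mapping S plus fluctuations Omega
sigma2 = 0.5

# Variational density q(psi | mu) = N(mu, tau2), parameterised by internal state mu
tau2 = 0.2

def free_energy(s, mu):
    """F(s, mu) = complexity - accuracy; exact (closed form) for this Gaussian model."""
    complexity = 0.5 * (tau2 + mu**2 - 1.0 - np.log(tau2))      # KL[q(psi|mu) || p(psi|m)]
    accuracy = (-0.5 * np.log(2 * np.pi * sigma2)
                - ((s - mu)**2 + tau2) / (2 * sigma2))          # E_q[log p(s|psi,m)]
    return complexity - accuracy

# One draw from the generative process: hidden state psi -> sensory state s
psi = rng.normal(0.0, 1.0)
s = psi + rng.normal(0.0, np.sqrt(sigma2))

# Free energy is lower when the internal state matches the exact posterior mean
print(free_energy(s, 0.0), free_energy(s, s / (1 + sigma2)))
```

Because the model is conjugate, the posterior mean s/(1+σ²) is available analytically here; the second printed value is never larger than the first.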
    
The objective is to maximise model evidence <math>p(s\mid m)</math> or minimise surprise <math>-\log p(s\mid m)</math>. This generally involves an intractable marginalisation over hidden states, so surprise is replaced with an upper variational free energy bound. This formulation rests on a Markov blanket (comprising action and sensory states) that separates internal and external states. If internal states and action minimise free energy, then they place an upper bound on the entropy of sensory states
 
The objective is to maximise model evidence <math>p(s\mid m)</math> or minimise surprise <math>-\log p(s\mid m)</math>. This generally involves an intractable marginalisation over hidden states, so surprise is replaced with an upper variational free energy bound.<ref name="Dayan"/> However, this means that internal states must also minimise free energy, because free energy is a function of sensory and internal states:
 
    
: <math> \lim_{T\to\infty} \frac{1}{T} \int_0^T \underset{\text{free-energy}}{\underbrace{F(s(t),\mu(t))}} \, dt \geq \lim_{T\to\infty} \frac{1}{T} \int_0^T \underset{\text{surprise}}{\underbrace{-\log p(s(t)\mid m)}} \, dt = H[p(s\mid m)] </math>
This is because – under ergodic assumptions – the long-term average of surprise is entropy. This bound resists a natural tendency to disorder – of the sort associated with the second law of thermodynamics and the fluctuation theorem.
 
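This long-run average can be checked numerically. As an illustrative stand-in for an ergodic sensory process, take stationary draws s(t) ~ p(s|m) = N(0, 1), whose differential entropy is ½ log(2πe):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stationary sensory samples s(t) ~ p(s|m) = N(0, 1) over a long horizon
T = 200_000
s = rng.normal(0.0, 1.0, size=T)

# Surprise -log p(s(t)|m) for a standard normal density
surprise = 0.5 * np.log(2 * np.pi) + 0.5 * s**2

time_average = surprise.mean()                      # (1/T) * integral of surprise
analytic_entropy = 0.5 * np.log(2 * np.pi * np.e)   # H[p(s|m)] ~ 1.4189

print(time_average, analytic_entropy)   # agree to about two decimal places
```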
    
: <math>\mu(t) = \underset{\mu}{\operatorname{arg\,min}} \{ F(s(t),\mu) \}  </math>
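A minimal sketch of this minimisation, reusing the illustrative conjugate-Gaussian model (prior N(0,1), likelihood N(ψ, σ²), Gaussian q(ψ|μ)): the gradient of F with respect to μ has a closed form, and descending it drives the internal state to the exact posterior mean. The learning rate and sample value are arbitrary:

```python
# Illustrative Gaussian model: p(psi) = N(0, 1), p(s|psi) = N(psi, sigma2)
sigma2 = 0.5
s = 1.3            # a fixed sensory sample

def dF_dmu(mu):
    # dF/dmu = mu (from the complexity term) + (mu - s)/sigma2 (from -accuracy)
    return mu + (mu - s) / sigma2

mu, lr = 0.0, 0.1
for _ in range(200):          # gradient descent: mu(t) -> argmin_mu F(s, mu)
    mu -= lr * dF_dmu(mu)

posterior_mean = s / (1 + sigma2)
print(mu, posterior_mean)     # mu converges to the posterior mean
```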
All Bayesian inference can be cast in terms of free energy minimisation. When free energy is minimised with respect to internal states, the Kullback–Leibler divergence between the variational and posterior density over hidden states is minimised. This corresponds to approximate Bayesian inference – when the form of the variational density is fixed – and exact Bayesian inference otherwise. Free energy minimisation therefore provides a generic description of Bayesian inference and filtering (e.g., Kalman filtering). It is also used in Bayesian model selection, where free energy can be usefully decomposed into complexity and accuracy:
 
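In the illustrative conjugate-Gaussian setting the posterior and model evidence are exact, so this can be verified directly via the identity F(s,μ) = −log p(s|m) + D_KL[q(ψ|μ) ∥ p(ψ|s,m)]: minimising free energy minimises the divergence, and the bound is tight (exact Bayesian inference) when q equals the posterior. All values below are illustrative:

```python
import numpy as np

# Illustrative model: p(psi) = N(0, 1), p(s|psi) = N(psi, sigma2)
sigma2, s = 0.5, 0.7

# Exact posterior p(psi|s,m) = N(m_post, v_post); evidence p(s|m) = N(0, 1 + sigma2)
v_post = sigma2 / (1 + sigma2)
m_post = s / (1 + sigma2)
neg_log_evidence = 0.5 * np.log(2 * np.pi * (1 + sigma2)) + s**2 / (2 * (1 + sigma2))

def free_energy(mu, tau2):
    complexity = 0.5 * (tau2 + mu**2 - 1.0 - np.log(tau2))      # KL[q || prior]
    accuracy = -0.5 * np.log(2 * np.pi * sigma2) - ((s - mu)**2 + tau2) / (2 * sigma2)
    return complexity - accuracy

def kl_q_posterior(mu, tau2):
    # KL between two Gaussians: q(psi|mu) and the exact posterior
    return 0.5 * ((tau2 + (mu - m_post)**2) / v_post - 1.0 - np.log(tau2 / v_post))

# F = surprise + divergence, for any variational parameters
for mu, tau2 in [(0.0, 0.2), (0.4, 1.0), (m_post, v_post)]:
    assert np.isclose(free_energy(mu, tau2), neg_log_evidence + kl_q_posterior(mu, tau2))

# When q matches the posterior the divergence vanishes and F = -log p(s|m)
print(free_energy(m_post, v_post), neg_log_evidence)
```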
 
    
  <math> \underset{\text{free-energy}} {\underbrace{ F(s,\mu)}} = \underset{\mathrm{energy}} {\underbrace{ E_q[-\log p(s,\psi \mid m)]}} - \underset{\mathrm{entropy}} {\underbrace{ H[q(\psi \mid \mu)]}} = \underset{\mathrm{surprise}} {\underbrace{ -\log p(s \mid m)}} + \underset{\mathrm{divergence}} {\underbrace{ D_\mathrm{KL}[q(\psi \mid \mu) \parallel p(\psi \mid s,m)]}} \geq \underset{\mathrm{surprise}} {\underbrace{ -\log p(s \mid m)}} </math>
  <math> \underset{\text{free-energy}} {\underbrace{ F(s,\mu)}} = \underset{\text{complexity}} {\underbrace{ D_\mathrm{KL}[q(\psi\mid\mu)\parallel p(\psi\mid m)]}} - \underset{\mathrm{accuracy}} {\underbrace{E_q[\log p(s\mid\psi,m)]}}</math>
 
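The decomposition can be cross-checked by Monte Carlo in the same illustrative Gaussian model: the closed-form complexity − accuracy value should match a direct sampling estimate of F = E_q[log q(ψ|μ) − log p(s, ψ|m)]:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative model and variational parameters (same Gaussian toy model as above)
sigma2, s, mu, tau2 = 0.5, 0.7, 0.3, 0.2

# Closed form: F = complexity - accuracy
complexity = 0.5 * (tau2 + mu**2 - 1.0 - np.log(tau2))          # D_KL[q || p(psi|m)]
accuracy = -0.5 * np.log(2 * np.pi * sigma2) - ((s - mu)**2 + tau2) / (2 * sigma2)
F_closed = complexity - accuracy

# Monte Carlo: F = E_q[log q(psi|mu) - log p(s, psi|m)] with psi ~ q
psi = rng.normal(mu, np.sqrt(tau2), size=400_000)
log_q = -0.5 * np.log(2 * np.pi * tau2) - (psi - mu)**2 / (2 * tau2)
log_joint = (-0.5 * np.log(2 * np.pi) - psi**2 / 2                       # log p(psi|m)
             - 0.5 * np.log(2 * np.pi * sigma2) - (s - psi)**2 / (2 * sigma2))
F_mc = (log_q - log_joint).mean()

print(F_closed, F_mc)   # agree to about two decimal places
```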
      
This induces a dual minimisation with respect to action and internal states that correspond to action and perception respectively.
 
    
Models with minimum free energy provide an accurate explanation of data, under complexity costs (cf. Occam's razor and more formal treatments of computational costs). Here, complexity is the divergence between the variational density and prior beliefs about hidden states (i.e., the effective degrees of freedom used to explain the data).
 
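A toy model-selection example (illustrative, continuing the Gaussian setting): two generative models for the same sensory sample differ only in how vague their prior over hidden states is. Since minimised free energy equals negative log evidence here, comparing minimised free energies compares models, and the vaguer prior pays a complexity cost:

```python
import numpy as np

sigma2, s = 0.5, 0.7    # shared likelihood p(s|psi) = N(psi, sigma2), one sample

def min_free_energy(prior_var):
    # At its minimum (q = exact posterior), F equals
    # -log p(s|m) = -log N(s; 0, prior_var + sigma2)
    return (0.5 * np.log(2 * np.pi * (prior_var + sigma2))
            + s**2 / (2 * (prior_var + sigma2)))

F_simple = min_free_energy(1.0)      # tight prior: low complexity
F_flexible = min_free_energy(10.0)   # vague prior: higher complexity cost
print(F_simple, F_flexible)          # the simpler model wins on this sample
```

With a sample this close to zero both models are similarly accurate, so the tighter prior attains lower free energy; a sample far from zero would reverse the ranking.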
    
== Free energy minimisation ==