Free energy minimisation provides a useful way to formulate normative (Bayes optimal) models of neuronal inference and learning under uncertainty, and therefore subscribes to the Bayesian brain hypothesis. The neuronal processes described by free energy minimisation depend on the nature of hidden states <math> \Psi = X \times \Theta \times \Pi </math>, which can comprise time-dependent variables, time-invariant parameters, and the precision (inverse variance or temperature) of random fluctuations. Minimising over variables, parameters, and precision corresponds to inference, learning, and the encoding of uncertainty, respectively.
 
All Bayesian inference can be cast in terms of free energy minimisation; e.g.,<ref>Roweis, S., & [[Zoubin Ghahramani|Ghahramani, Z.]] (1999). [http://authors.library.caltech.edu/13697/1/ROWnc99.pdf A unifying review of linear Gaussian models]. Neural Computation, 11(2), 305–45. {{doi|10.1162/089976699300016674}}</ref>{{Failed verification|date=April 2020}} When free energy is minimised with respect to internal states, the [[Kullback–Leibler divergence]] between the variational and posterior density over hidden states is minimised. This corresponds to approximate [[Bayesian inference]] when the form of the variational density is fixed, and to exact [[Bayesian inference]] otherwise. Free energy minimisation therefore provides a generic description of Bayesian inference and filtering (e.g., [[Kalman filter]]ing). It is also used in Bayesian [[model selection]], where free energy can be usefully decomposed into complexity and accuracy:
 
: <math> \underset{\text{free-energy}} {\underbrace{ F(s,\mu)}} = \underset{\text{complexity}} {\underbrace{ D_\mathrm{KL}[q(\psi\mid\mu)\parallel p(\psi\mid m)]}} - \underset{\text{accuracy}} {\underbrace{E_q[\log p(s\mid\psi,m)]}}</math>
 
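For a conjugate linear-Gaussian model this decomposition can be evaluated in closed form. The following Python sketch is illustrative only (the model, variances, and data value are assumptions, not from the article): when the variational density equals the exact posterior, complexity minus accuracy equals the negative log evidence, i.e. free energy minimisation performs exact Bayesian inference.

```python
import math

def free_energy(s, mu, s2q, s2p=1.0, s2l=1.0):
    """F(s, mu) = complexity - accuracy for an assumed toy model:
       prior       p(psi|m)      = N(0, s2p)
       likelihood  p(s|psi, m)   = N(psi, s2l)
       variational q(psi|mu)     = N(mu, s2q)
    """
    # complexity: KL[q(psi|mu) || p(psi|m)] between two Gaussians
    complexity = 0.5 * (math.log(s2p / s2q) + (s2q + mu**2) / s2p - 1.0)
    # accuracy: E_q[log p(s|psi, m)], a Gaussian expectation in closed form
    accuracy = -0.5 * math.log(2 * math.pi * s2l) - ((s - mu)**2 + s2q) / (2 * s2l)
    return complexity - accuracy

s = 1.3                                   # a single (made-up) observation
post_var = 1.0 / (1.0 / 1.0 + 1.0 / 1.0)  # conjugate posterior variance
post_mu = post_var * s / 1.0              # conjugate posterior mean

F_opt = free_energy(s, post_mu, post_var)
# negative log evidence: s is marginally N(0, s2p + s2l)
neg_log_evidence = 0.5 * math.log(2 * math.pi * 2.0) + s**2 / (2 * 2.0)
print(F_opt, neg_log_evidence)  # equal at the exact posterior
```

Away from the posterior (e.g. `free_energy(s, 0.0, 1.0)`), F strictly exceeds the negative log evidence, illustrating that free energy is an upper bound on surprise.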
Free energy minimisation formalises the notion of unconscious inference in perception.
 
Models with minimum free energy provide an accurate explanation of data, under complexity costs (cf. [[Occam's razor]] and more formal treatments of computational costs<ref>Ortega, P. A., & Braun, D. A. (2012). [http://rspa.royalsocietypublishing.org/content/469/2153/20120683 Thermodynamics as a theory of decision-making with information processing costs]. Proceedings of the Royal Society A, 469(2153), 20120683.</ref>). Here, complexity is the divergence between the variational density and prior beliefs about hidden states (i.e., the effective degrees of freedom used to explain the data).
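This Occam effect can be sketched with a toy Gaussian comparison (the models and numbers are assumptions for illustration, not the article's own example): at its minimum, free energy equals the negative log evidence, so a needlessly flexible model with a broad prior pays a complexity cost that its accuracy cannot offset, and the simpler model attains the lower free energy.

```python
import math

def neg_log_evidence(s, prior_var, lik_var=1.0):
    """-log p(s|m) for an assumed model with prior N(0, prior_var)
    and likelihood N(psi, lik_var); s is marginally Gaussian."""
    v = prior_var + lik_var  # marginal variance of the observation
    return 0.5 * math.log(2 * math.pi * v) + s**2 / (2 * v)

s = 1.3                                            # made-up observation
F_simple  = neg_log_evidence(s, prior_var=1.0)     # tight (simple) prior
F_complex = neg_log_evidence(s, prior_var=100.0)   # broad (flexible) prior
print(F_simple < F_complex)  # True: the simpler model is selected
```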
: <math>\dot{\tilde{\mu}} = D \tilde{\mu} - \partial_{\mu}F(s,\mu)\Big|_{\mu = \tilde{\mu}}</math>
 
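In the static case the generalised-motion term <math>D\tilde{\mu}</math> vanishes and the update reduces to a plain gradient descent of the internal states on free energy. A minimal sketch, assuming a toy linear-Gaussian model (prior N(0, 1), likelihood N(ψ, 1), fixed variational variance; none of these specifics come from the article): the internal state settles at the exact posterior mean.

```python
# Gradient flow mu_dot = -dF/dmu for the assumed toy model, integrated
# with Euler steps; dF/dmu has two terms, from complexity and accuracy.

def dF_dmu(s, mu, s2p=1.0, s2l=1.0):
    # d(complexity)/dmu + d(-accuracy)/dmu for Gaussian q with fixed variance
    return mu / s2p + (mu - s) / s2l

s, mu, lr = 1.3, 0.0, 0.1
for _ in range(200):           # Euler integration of the gradient descent
    mu -= lr * dF_dmu(s, mu)

posterior_mean = s * 1.0 / (1.0 + 1.0)  # conjugate result, = 0.65
print(round(mu, 4), posterior_mean)     # both approximately 0.65
```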
=== Free energy minimisation and thermodynamics ===
 