Added 1,162 bytes · 29 December 2020 (Tue) 17:40
In predictive coding, optimising model parameters through a gradient descent on the time integral of free energy (free action) reduces to associative or Hebbian plasticity and is associated with synaptic plasticity in the brain.

在预测编码中,通过对自由能的时间积分(自由作用量)进行梯度下降来优化模型参数,可归结为联想式或赫布可塑性,并与大脑中的突触可塑性相关。
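The reduction of parameter learning to a Hebbian product can be sketched numerically. The toy linear generative model below is an illustrative assumption, not Friston's full scheme: gradient descent on the squared prediction error yields a weight update that is the outer product of the (post-synaptic) prediction error and the (pre-synaptic) internal state — a Hebbian rule.

```python
import numpy as np

# Toy linear generative model: sensory input x is predicted as W @ phi,
# where phi are internal (hidden) causes. All values are illustrative.
n_x, n_phi = 4, 3
W = np.zeros((n_x, n_phi))           # model parameters (synaptic weights)
phi = np.array([1.0, -0.5, 0.25])    # internal state (held fixed here)
x = np.array([0.2, -0.1, 0.4, 0.05]) # sensory sample

lr = 0.01
for _ in range(2000):
    eps = x - W @ phi                 # prediction error (unit precision)
    # Gradient step on the squared error: dW_ij ∝ eps_i * phi_j,
    # i.e. a Hebbian product of error unit and presynaptic activity.
    W += lr * np.outer(eps, phi)

# After learning, the model's prediction matches the sensory input.
print(np.allclose(W @ phi, x, atol=1e-4))  # True
```

With the internal state held fixed, the error shrinks geometrically, so the prediction converges to the input; in a full predictive-coding scheme the internal states are optimised simultaneously.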
Free energy minimisation is equivalent to maximising the [[mutual information]] between sensory states and internal states that parameterise the variational density (for a fixed-entropy variational density).<ref name="Friston" />{{Better source|date=February 2020|reason=MDPI is a questionable source}} This relates free energy minimisation to the principle of minimum redundancy<ref>Barlow, H. (1961). [http://www.trin.cam.ac.uk/horacebarlow/21.pdf Possible principles underlying the transformations of sensory messages] {{Webarchive|url=https://web.archive.org/web/20120603182706/http://www.trin.cam.ac.uk/horacebarlow/21.pdf |date=2012-06-03 }}. In W. Rosenblith (Ed.), Sensory Communication (pp. 217-34). Cambridge, MA: MIT Press.</ref> and related treatments using information theory to describe optimal behaviour.<ref>Linsker, R. (1990). [https://www.annualreviews.org/doi/pdf/10.1146/annurev.ne.13.030190.001353 Perceptual neural organization: some approaches based on network models and information theory]. Annu Rev Neurosci., 13, 257–81.</ref><ref>Bialek, W., Nemenman, I., & Tishby, N. (2001). [http://www.princeton.edu/~wbialek/our_papers/bnt_01a.pdf Predictability, complexity, and learning]. Neural Computat., 13 (11), 2409–63.</ref>
自由能最小化等价于最大化感官状态与参数化变分密度的内部状态之间的[[互信息]](对于固定熵的变分密度)。<ref name="Friston" />{{Better source|date=February 2020|reason=MDPI is a questionable source}}这将自由能最小化与最小冗余原则<ref>Barlow, H. (1961). [http://www.trin.cam.ac.uk/horacebarlow/21.pdf Possible principles underlying the transformations of sensory messages] {{Webarchive|url=https://web.archive.org/web/20120603182706/http://www.trin.cam.ac.uk/horacebarlow/21.pdf |date=2012-06-03 }}. In W. Rosenblith (Ed.), Sensory Communication (pp. 217-34). Cambridge, MA: MIT Press.</ref>以及使用信息论描述最优行为的相关处理<ref>Linsker, R. (1990). [https://www.annualreviews.org/doi/pdf/10.1146/annurev.ne.13.030190.001353 Perceptual neural organization: some approaches based on network models and information theory]. Annu Rev Neurosci., 13, 257–81.</ref><ref>Bialek, W., Nemenman, I., & Tishby, N. (2001). [http://www.princeton.edu/~wbialek/our_papers/bnt_01a.pdf Predictability, complexity, and learning]. Neural Computat., 13 (11), 2409–63.</ref>联系起来。
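The mutual-information quantity being maximised can be made concrete with a toy discrete example. The joint distributions below are illustrative assumptions: perfectly coupled sensory and internal states carry maximal information about each other, while independent states carry none.

```python
import numpy as np

def mutual_information(p_joint):
    """I(S;M) = sum p(s,m) * log[ p(s,m) / (p(s) p(m)) ], in nats.

    Rows index sensory states S, columns index internal states M."""
    p_s = p_joint.sum(axis=1, keepdims=True)   # marginal over sensory states
    p_m = p_joint.sum(axis=0, keepdims=True)   # marginal over internal states
    nz = p_joint > 0                           # skip zero-probability cells
    return float(np.sum(p_joint[nz] * np.log(p_joint[nz] / (p_s @ p_m)[nz])))

# Perfectly coupled states: knowing M determines S exactly.
coupled = np.array([[0.5, 0.0],
                    [0.0, 0.5]])
# Independent states: the joint factorises, so I(S;M) = 0.
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

print(mutual_information(coupled))      # ≈ log(2) ≈ 0.693 nats
print(mutual_information(independent))  # 0.0
```

In this reading, minimising free energy drives the internal states toward the "coupled" regime, where they carry as much information about sensory states as possible.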
Optimizing the precision parameters corresponds to optimizing the gain of prediction errors (cf. Kalman gain). In neuronally plausible implementations of predictive coding,

优化精度参数对应于优化预测误差的增益(参见卡尔曼增益)。在神经元层面上合理的预测编码实现中,
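The correspondence between precision and Kalman gain can be sketched in the scalar case. This is an illustrative one-step update under assumed Gaussian noise, not the article's full scheme: the gain applied to the prediction error is set by the relative precisions (inverse variances) of the prior and the senses.

```python
def precision_weighted_update(mu_prior, var_prior, x, var_sensory):
    """Combine a prior prediction with one sensory sample (scalar Gaussians).

    The prediction error (x - mu_prior) is weighted by a gain K that plays
    the role of the Kalman gain: high sensory precision -> high gain."""
    K = var_prior / (var_prior + var_sensory)  # Kalman gain
    mu_post = mu_prior + K * (x - mu_prior)    # precision-weighted error correction
    var_post = (1.0 - K) * var_prior           # posterior variance shrinks
    return mu_post, var_post, K

# Precise senses (small sensory variance) -> high gain: trust the data.
mu_hi, _, K_hi = precision_weighted_update(0.0, 1.0, 2.0, 0.1)
# Imprecise senses (large sensory variance) -> low gain: trust the prior.
mu_lo, _, K_lo = precision_weighted_update(0.0, 1.0, 2.0, 10.0)

print(K_hi > K_lo)        # True
print(mu_hi > mu_lo)      # True: the precise observation pulls the estimate further
```

Optimising precision thus amounts to learning how strongly each prediction-error channel should drive belief updating.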
    
== Free energy minimisation in neuroscience 神经科学中的自由能最小化 ==