==== Context reverberation network ====

The context reverberation network is an early example of reservoir computing.<ref>Kirby, Kevin. "Context dynamics in neural sequential learning." Proceedings of the Florida AI Research Symposium, FLAIRS (1991), 66–70.</ref> In this architecture, an input layer feeds into a high-dimensional dynamical system which is read out by a trainable single-layer perceptron. Two kinds of dynamical system were described: a recurrent neural network with fixed random weights, and a continuous reaction–diffusion system inspired by Alan Turing's model of morphogenesis. At the trainable layer, the perceptron associates current inputs with the signals that reverberate in the dynamical system; the latter were said to provide a dynamic "context" for the inputs. In the language of later work, the reaction–diffusion system served as the reservoir.
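The general scheme described above can be illustrated with a minimal sketch: a fixed random recurrent network serves as the dynamical system, and only a linear readout is trained (here by least squares rather than the perceptron of the original architecture). All dimensions, scalings, and the toy task are illustrative choices, not taken from the 1991 paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 1-D input, 100-unit reservoir.
n_in, n_res, n_steps = 1, 100, 500

# Fixed random weights (never trained), scaled so reverberating
# activity neither dies out nor blows up.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

def run_reservoir(inputs):
    """Drive the fixed dynamical system and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)  # reverberating state update
        states.append(x)
    return np.array(states)

# Toy task: predict the next value of a sine wave.
u = np.sin(np.linspace(0, 8 * np.pi, n_steps + 1))
inputs, targets = u[:-1].reshape(-1, 1), u[1:]

X = run_reservoir(inputs)
# Only the readout layer is trained; the reservoir states provide
# the dynamic "context" for the current input.
w_out, *_ = np.linalg.lstsq(X, targets, rcond=None)
pred = X @ w_out
```

The key point of the architecture survives in the sketch: all learning is confined to the readout, while the high-dimensional dynamics are left fixed.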
==== Echo state network ====
The Tree Echo State Network (TreeESN) model represents a generalization of the reservoir computing framework to tree-structured data.
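The generalization to trees can be sketched as follows: each node's reservoir state is computed bottom-up from the node's label and the states of its children, using fixed, untrained weights as in a standard echo state network. The weight names, dimensions, and scalings here are illustrative assumptions, not the TreeESN paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_label, n_res = 3, 50  # illustrative sizes

# Fixed, untrained weights, as in a standard echo state network.
W_in = rng.uniform(-0.1, 0.1, (n_res, n_label))
W_hat = rng.uniform(-1.0, 1.0, (n_res, n_res))
W_hat *= 0.5 / np.max(np.abs(np.linalg.eigvals(W_hat)))  # contractive scaling

def tree_state(label, children=()):
    """State of a node: its label plus the states of its subtrees,
    computed bottom-up with shared fixed weights."""
    s = W_in @ label
    for child_state in children:
        s = s + W_hat @ child_state
    return np.tanh(s)

# A small tree: a root with two leaf children, one-hot labels.
leaf_a = tree_state(np.array([1.0, 0.0, 0.0]))
leaf_b = tree_state(np.array([0.0, 1.0, 0.0]))
root = tree_state(np.array([0.0, 0.0, 1.0]), (leaf_a, leaf_b))
# A trainable linear readout would then map the root state
# (or an aggregate of all node states) to the target output.
```

Because the same fixed weights are shared across all nodes, trees of arbitrary shape are encoded into fixed-size state vectors without any training of the reservoir itself.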
    
==== Liquid-state machine ====
===== Chaotic Liquid State Machine =====
The liquid (i.e. reservoir) of a Chaotic Liquid State Machine (CLSM),<ref name=":7">{{Cite journal|last1=Aoun|first1=Mario Antoine|last2=Boukadoum|first2=Mounir|date=2014|title=Learning algorithm and neurocomputing architecture for NDS Neurons|url=http://dx.doi.org/10.1109/icci-cc.2014.6921451|journal=2014 IEEE 13th International Conference on Cognitive Informatics and Cognitive Computing|pages=126–132|publisher=IEEE|doi=10.1109/icci-cc.2014.6921451|isbn=978-1-4799-6081-1|s2cid=16026952}}</ref><ref name=":8">{{Cite journal|last1=Aoun|first1=Mario Antoine|last2=Boukadoum|first2=Mounir|date=2015|title=Chaotic Liquid State Machine|url=http://dx.doi.org/10.4018/ijcini.2015100101|journal=International Journal of Cognitive Informatics and Natural Intelligence|volume=9|issue=4|pages=1–20|doi=10.4018/ijcini.2015100101|issn=1557-3958}}</ref> or chaotic reservoir, is made from chaotic spiking neurons whose activity nevertheless stabilizes by settling to a single hypothesis that describes the trained inputs of the machine. This contrasts with general types of reservoirs, which do not stabilize. The stabilization occurs via synaptic plasticity and chaos control, which govern the neural connections inside the liquid. The CLSM has shown promising results in learning sensitive time series data.<ref name=":7" /><ref name=":8" />