{{#seo:
|keywords=reservoir computing,neural networks,dynamical systems
|description=A framework for computation derived from recurrent neural network theory
}}
{{Short description|A type of recurrent neural network with random and non-trainable internal structure}}
'''Reservoir computing''' is a framework for computation derived from [[recurrent neural network]] theory that maps input signals into higher dimensional computational spaces through the dynamics of a fixed, non-linear system called a reservoir.<ref name=":4">{{Cite journal|last1=Tanaka|first1=Gouhei|last2=Yamane|first2=Toshiyuki|last3=Héroux|first3=Jean Benoit|last4=Nakane|first4=Ryosho|last5=Kanazawa|first5=Naoki|last6=Takeda|first6=Seiji|last7=Numata|first7=Hidetoshi|last8=Nakano|first8=Daiju|last9=Hirose|first9=Akira|title=Recent advances in physical reservoir computing: A review|journal=Neural Networks|volume=115|pages=100–123|doi=10.1016/j.neunet.2019.03.005|pmid=30981085|issn=0893-6080|year=2019|doi-access=free}}</ref> After the input signal is fed into the reservoir, which is treated as a "black box," a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output.<ref name=":4" /> The first key benefit of this framework is that training is performed only at the readout stage, as the reservoir dynamics are fixed.<ref name=":4" /> The second is that the computational power of naturally available systems, both classical and quantum mechanical, can be used to reduce the effective computational cost.<ref name=":6">{{Cite journal|last1=Röhm|first1=André|last2=Lüdge|first2=Kathy|date=2018-08-03|title=Multiplexed networks: reservoir computing with virtual and real nodes|journal=Journal of Physics Communications|volume=2|issue=8|pages=085007|bibcode=2018JPhCo...2h5007R|doi=10.1088/2399-6528/aad56d|issn=2399-6528|doi-access=free}}</ref>
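As a concrete illustration of the framework, the following is a minimal software sketch in the echo-state-network style: a fixed random recurrent network maps a one-dimensional input into a high-dimensional state space, and only a linear readout is fitted. All sizes, scalings, and the toy task are illustrative assumptions, not a reference implementation.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)

# Fixed, random "reservoir": n_res nonlinear units driven by a 1-D input signal.
n_res = 100
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))       # input weights (never trained)
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))      # recurrent weights (never trained)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()        # scale spectral radius below 1

def run_reservoir(u):
    """Map an input sequence into the reservoir's high-dimensional state space."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))  # fixed nonlinear dynamics
        states.append(x.copy())
    return np.array(states)

# Train only the readout, here on a toy next-value prediction task.
u = np.sin(0.1 * np.arange(1000))                    # toy input signal
X = run_reservoir(u[:-1])                            # reservoir states, one row per step
W_out, *_ = np.linalg.lstsq(X, u[1:], rcond=None)    # linear readout (least squares)
y_pred = X @ W_out                                   # readout maps states to outputs
</syntaxhighlight>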
== History ==
The concept of reservoir computing stems from the use of recursive connections within [[neural network]]s to create a complex dynamical system.<ref name=":0">[[Benjamin Schrauwen|Schrauwen, Benjamin]], [[David Verstraeten]], and [[Jan Van Campenhout]].  
"An overview of reservoir computing: theory, applications, and implementations."  
Proceedings of the European Symposium on Artificial Neural Networks ESANN 2007, pp. 471–482.</ref> It is a generalisation of earlier neural network architectures such as recurrent neural networks, [[Liquid state machine|liquid-state machines]] and [[Echo state network|echo-state networks]]. Reservoir computing also extends to physical systems that are not networks in the classical sense, but rather continuous systems in space and/or time: e.g. a literal "bucket of water" can serve as a reservoir that performs computations on inputs given as perturbations of the surface.<ref name=":9">{{Cite book|last1=Fernando|first1=C.|last2=Sojakka|first2=Sampsa|title=Advances in Artificial Life|chapter=Pattern Recognition in a Bucket|date=2003 |url=https://www.semanticscholar.org/paper/Pattern-Recognition-in-a-Bucket-Fernando-Sojakka/af342af4d0e674aef3bced5fd90875c6f2e04abc |series=Lecture Notes in Computer Science|volume=2801|pages=588–597|doi=10.1007/978-3-540-39432-7_63|isbn=978-3-540-20057-4|s2cid=15073928}}</ref> The resultant complexity of such recurrent neural networks was found to be useful in solving a variety of problems including language processing and dynamic system modeling.<ref name=":0" /> However, training of recurrent neural networks is challenging and computationally expensive.<ref name=":0" /> Reservoir computing reduces those training-related challenges by fixing the dynamics of the reservoir and only training the linear output layer.<ref name=":0" />
A large variety of nonlinear dynamical systems can serve as a reservoir that performs computations. In recent years, semiconductor lasers have attracted considerable interest, as computation with them can be fast and energy-efficient compared to electrical components.
Recent advances in both AI and quantum information theory have given rise to the concept of [[quantum neural networks]].<ref name=":2" /> These hold promise in quantum information processing, which is challenging for classical networks, but can also find application in solving classical problems.<ref name=":2">{{Cite journal|last1=Ghosh|first1=Sanjib|last2=Opala|first2=Andrzej|last3=Matuszewski|first3=Michał|last4=Paterek|first4=Tomasz|last5=Liew|first5=Timothy C. H.|date=December 2019|title=Quantum reservoir processing|arxiv=1811.10335|journal=NPJ Quantum Information|volume=5|issue=1|pages=35|doi=10.1038/s41534-019-0149-8|bibcode=2019npjQI...5...35G|s2cid=119197635|issn=2056-6387}}</ref><ref name=":3">{{cite arXiv|last1=Negoro|first1=Makoto|last2=Mitarai|first2=Kosuke|last3=Fujii|first3=Keisuke|last4=Nakajima|first4=Kohei|last5=Kitagawa|first5=Masahiro|date=2018-06-28|title=Machine learning with controllable quantum dynamics of a nuclear spin ensemble in a solid|class=quant-ph|eprint=1806.10910}}</ref> In 2018, a physical realization of a quantum reservoir computing architecture was demonstrated in the form of nuclear spins within a molecular solid.<ref name=":3" /> However, the nuclear spin experiments<ref name=":3" /> did not demonstrate quantum reservoir computing per se, as they did not involve the processing of sequential data. Rather, the data were vector inputs, which makes this more accurately a demonstration of a quantum implementation of a [[random kitchen sink]]<ref name="RB08">{{cite journal|last1=Rahimi|first1=Ali|last2=Recht|first2=Benjamin|date=December 2008|title=Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in Learning|journal=NIPS'08: Proceedings of the 21st International Conference on Neural Information Processing Systems|url=http://papers.nips.cc/paper/3495-weighted-sums-of-random-kitchen-sinks-replacing-minimization-with-randomization-in-learning.pdf|pages=1313–1320}}</ref> algorithm (also known as [[extreme learning machine]]s in some communities). In 2019, another possible implementation of quantum reservoir processors was proposed in the form of two-dimensional fermionic lattices.<ref name=":2" /> In 2020, realization of reservoir computing on gate-based quantum computers was proposed and demonstrated on cloud-based IBM superconducting near-term quantum computers.<ref name="JNY20">{{cite journal|last1=Chen|first1=Jiayin|last2=Nurdin|first2=Hendra|last3=Yamamoto|first3=Naoki|title=Temporal Information Processing on Noisy Quantum Computers|journal=Physical Review Applied|volume=14|pages=024065|date=2020-08-24|issue=2|doi=10.1103/PhysRevApplied.14.024065|arxiv=2001.09498|bibcode=2020PhRvP..14b4065C|s2cid=210920543|url=https://doi.org/10.1103/PhysRevApplied.14.024065}}</ref>
Reservoir computers have been used for [[Time series|time-series]] analysis purposes. In particular, some of their usages involve [[Chaos theory|chaotic]] [[Time series|time-series]] prediction,<ref name=":10">{{Cite journal|last1=Pathak|first1=Jaideep|last2=Hunt|first2=Brian|last3=Girvan|first3=Michelle|last4=Lu|first4=Zhixin|last5=Ott|first5=Edward|date=2018-01-12|title=Model-Free Prediction of Large Spatiotemporally Chaotic Systems from Data: A Reservoir Computing Approach|journal=Physical Review Letters|volume=120|issue=2|pages=024102|doi=10.1103/PhysRevLett.120.024102|pmid=29376715|bibcode=2018PhRvL.120b4102P|doi-access=free}}</ref><ref name=":11">{{Cite journal|last1=Vlachas|first1=P.R.|last2=Pathak|first2=J.|last3=Hunt|first3=B.R.|last4=Sapsis|first4=T.P.|last5=Girvan|first5=M.|last6=Ott|first6=E.|last7=Koumoutsakos|first7=P.|date=2020-03-21|title=Backpropagation algorithms and Reservoir Computing in Recurrent Neural Networks for the forecasting of complex spatiotemporal dynamics|url=http://dx.doi.org/10.1016/j.neunet.2020.02.016|journal=Neural Networks|volume=126|pages=191–217|doi=10.1016/j.neunet.2020.02.016|pmid=32248008|issn=0893-6080|arxiv=1910.05266|s2cid=211146609}}</ref> separation of [[Chaos theory|chaotic]] signals,<ref name=":12">{{Cite journal|last1=Krishnagopal|first1=Sanjukta|last2=Girvan|first2=Michelle|last3=Ott|first3=Edward|last4=Hunt|first4=Brian R.|date=2020-02-01|title=Separation of chaotic signals by reservoir computing|url=https://aip.scitation.org/doi/10.1063/1.5132766|journal=Chaos: An Interdisciplinary Journal of Nonlinear Science|volume=30|issue=2|pages=023123|doi=10.1063/1.5132766|pmid=32113243|issn=1054-1500|arxiv=1910.10080|bibcode=2020Chaos..30b3123K|s2cid=204823815}}</ref> and link inference of [[Network theory|networks]] from their dynamics.<ref name=":13">{{Cite journal|last1=Banerjee|first1=Amitava|last2=Hart|first2=Joseph D.|last3=Roy|first3=Rajarshi|last4=Ott|first4=Edward|date=2021-07-20|title=Machine Learning Link Inference of Noisy Delay-Coupled Networks with Optoelectronic Experimental Tests|journal=Physical Review X|volume=11|issue=3|pages=031014|doi=10.1103/PhysRevX.11.031014|arxiv=2010.15289|bibcode=2021PhRvX..11c1014B|doi-access=free}}</ref>
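For forecasting tasks of this kind, a trained reservoir computer is typically run in closed loop: once the readout has been fitted, its output is fed back as the next input so the system generates the series autonomously. A minimal sketch, reusing the names <code>W</code>, <code>W_in</code>, and <code>W_out</code> from the illustrative example above (these are assumptions of that sketch, not a standard API):

<syntaxhighlight lang="python">
import numpy as np

def free_run(x, u_t, n_steps, W, W_in, W_out):
    """Closed-loop forecasting: the readout's prediction becomes the next input."""
    preds = []
    for _ in range(n_steps):
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))  # fixed reservoir dynamics
        u_t = float(x @ W_out)                          # linear readout, fed back
        preds.append(u_t)
    return np.array(preds)
</syntaxhighlight>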
== Classical reservoir computing ==
=== Reservoir ===
The 'reservoir' in reservoir computing is the internal structure of the computer, and must have two properties: it must be made up of individual, non-linear units, and it must be capable of storing information. The non-linearity describes the response of each unit to input, which is what allows reservoir computers to solve complex problems. Reservoirs are able to store information by connecting the units in recurrent loops, where the previous input affects the next response. The change in reaction due to the past allows the computers to be trained to complete specific tasks.<ref name=":1">{{Cite journal|last=Soriano|first=Miguel C.|date=2017-02-06|title=Viewpoint: Reservoir Computing Speeds Up|url=https://physics.aps.org/articles/v10/12|journal=Physics|language=en|volume=10|doi=10.1103/Physics.10.12|doi-access=free}}</ref>
Reservoirs can be virtual or physical.<ref name=":1" /> Virtual reservoirs are typically randomly generated and are designed like neural networks.<ref name=":1" /><ref name=":0" /> Virtual reservoirs can be designed to have non-linearity and recurrent loops, but, unlike neural networks, the connections between units are randomized and remain unchanged throughout computation.<ref name=":1" /> Physical reservoirs are possible because of the inherent non-linearity of certain natural systems. The interaction between ripples on the surface of water contains the nonlinear dynamics required in reservoir creation, and a pattern recognition RC was developed by first inputting ripples with electric motors then recording and analyzing the ripples in the readout.<ref name=":4" />
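A minimal sketch of what "randomized and unchanged" can mean for a virtual reservoir: the recurrent weight matrix is drawn once, optionally sparsified, rescaled to control its dynamics, and then frozen. The sparsity level and spectral radius below are illustrative choices, not prescribed values.

<syntaxhighlight lang="python">
import numpy as np

def make_virtual_reservoir(n_units=200, sparsity=0.1, spectral_radius=0.95, seed=0):
    """Draw a fixed random recurrent weight matrix for a virtual reservoir.

    The connections are randomized once and held constant throughout
    computation; only the readout layer is ever trained.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(n_units, n_units))
    W *= rng.random((n_units, n_units)) < sparsity              # sparse random topology
    W *= spectral_radius / np.abs(np.linalg.eigvals(W)).max()   # tame the dynamics
    return W
</syntaxhighlight>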
=== Readout ===
The readout is a neural network layer that performs a linear transformation on the output of the reservoir.<ref name=":4" /> The weights of the readout layer are trained by analyzing the spatiotemporal patterns of the reservoir after excitation by known inputs, and by utilizing a training method such as a [[linear regression]] or a [[Ridge regression]].<ref name=":4" /> As its implementation depends on spatiotemporal reservoir patterns, the details of readout methods are tailored to each type of reservoir.<ref name=":4" /> For example, the readout for a reservoir computer using a container of liquid as its reservoir might entail observing spatiotemporal patterns on the surface of the liquid.<ref name=":4" />
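A sketch of the ridge-regression variant of readout training mentioned above, assuming the reservoir's spatiotemporal response to the training input has already been collected into a state matrix <code>X</code> (one row per time step) with targets <code>Y</code>; the regularization strength is an illustrative value:

<syntaxhighlight lang="python">
import numpy as np

def train_readout(X, Y, ridge=1e-6):
    """Fit linear readout weights with ridge (Tikhonov) regression.

    Solves (X^T X + ridge * I) W_out = X^T Y, so that X @ W_out approximates Y.
    """
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Y)
</syntaxhighlight>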
=== Types ===
==== Context reverberation network ====
An early example of reservoir computing was the context reverberation network.<ref name=":14">
   
[[Kevin Kirby|Kirby, Kevin]]. "Context dynamics in neural sequential learning." Proceedings of the Florida Artificial Intelligence Research Symposium FLAIRS (1991), 66–70.</ref>
In this architecture, an input layer feeds into a high dimensional dynamical system which is read out by a trainable single-layer [[perceptron]]. Two kinds of dynamical system were described: a recurrent neural network with fixed random weights, and a continuous [[reaction–diffusion system]] inspired by [[Alan Turing]]’s model of [[morphogenesis]]. At the trainable layer, the perceptron associates current inputs with the signals that [[Reverberation|reverberate]] in the dynamical system; the latter were said to provide a dynamic "context" for the inputs.  In the language of later work, the reaction–diffusion system served as the reservoir.
==== Echo state network ====
{{main|Echo state network}}The Tree Echo State Network (TreeESN) model represents a generalization of the reservoir computing framework to tree structured data.<ref name=":15">{{Cite journal|last1=Gallicchio|first1=Claudio|last2=Micheli|first2=Alessio|year=2013|title=Tree Echo State Networks|journal=Neurocomputing|volume=101|pages=319–337|doi=10.1016/j.neucom.2012.08.017|hdl=11568/158480|hdl-access=free}}</ref>
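For reference, a common formulation of the basic echo state network that TreeESN generalizes updates a fixed random reservoir state <math>x(t)</math> and trains only the readout matrix <math>W^\text{out}</math>:

:<math>x(t+1) = \tanh\!\left(W\,x(t) + W^\text{in}\,u(t+1)\right), \qquad y(t) = W^\text{out}\,x(t)</math>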
==== Liquid-state machine ====
{{main|Liquid-state machine}}'''Chaotic Liquid State Machine'''
The liquid (i.e. reservoir) of a Chaotic Liquid State Machine (CLSM),<ref name=":7">{{Cite journal|last1=Aoun|first1=Mario Antoine|last2=Boukadoum|first2=Mounir|date=2014|title=Learning algorithm and neurocomputing architecture for NDS Neurons|url=http://dx.doi.org/10.1109/icci-cc.2014.6921451|journal=2014 IEEE 13th International Conference on Cognitive Informatics and Cognitive Computing|pages=126–132|publisher=IEEE|doi=10.1109/icci-cc.2014.6921451|isbn=978-1-4799-6081-1|s2cid=16026952}}</ref><ref name=":8">{{Cite journal|last1=Aoun|first1=Mario Antoine|last2=Boukadoum|first2=Mounir|date=2015|title=Chaotic Liquid State Machine|url=http://dx.doi.org/10.4018/ijcini.2015100101|journal=International Journal of Cognitive Informatics and Natural Intelligence|volume=9|issue=4|pages=1–20|doi=10.4018/ijcini.2015100101|issn=1557-3958}}</ref> or chaotic reservoir, is made from chaotic spiking neurons which nevertheless stabilize their activity by settling to a single hypothesis that describes the trained inputs of the machine. This is in contrast to general types of reservoirs that don’t stabilize. The liquid stabilization occurs via synaptic plasticity and chaos control that govern neural connections inside the liquid. CLSM showed promising results in learning sensitive time series data.<ref name=":7" /><ref name=":8" />
==== Nonlinear transient computation ====
This type of information processing is most relevant when time-dependent input signals depart from the mechanism’s internal dynamics.<ref name="NTC" /> These departures cause transients, or temporary alterations, which are represented in the device’s output.<ref name="NTC">{{cite journal |last1=Crook |first1=Nigel |title=Nonlinear Transient Computation |journal=Neurocomputing |date=2007 |volume=70 |issue=7–9 |pages=1167–1176 |doi=10.1016/j.neucom.2006.10.148}}</ref>
==== Deep reservoir computing ====
The extension of the reservoir computing framework towards Deep Learning, with the introduction of Deep Reservoir Computing and of the Deep Echo State Network (DeepESN) model,<ref name=":16">{{cite thesis |type=PhD thesis |last=Pedrelli |first=Luca |date=2019 |title=Deep Reservoir Computing: A Novel Class of Deep Recurrent Neural Networks |publisher=Università di Pisa |url=https://etd.adm.unipi.it/t/etd-02282019-191815/}}</ref><ref name=":17">{{Cite journal|last1=Gallicchio|first1=Claudio|last2=Micheli|first2=Alessio|last3=Pedrelli|first3=Luca|title=Deep reservoir computing: A critical experimental analysis|journal=Neurocomputing|volume=268|pages=87–99|doi=10.1016/j.neucom.2016.12.089|date=2017-12-13|hdl=11568/851934|hdl-access=free}}</ref><ref name=":18">{{Cite journal|last1=Gallicchio|first1=Claudio|last2=Micheli|first2=Alessio|date=2017-05-05|title=Echo State Property of Deep Reservoir Computing Networks|journal=Cognitive Computation|volume=9|issue=3|pages=337–350|doi=10.1007/s12559-017-9461-9|issn=1866-9956|hdl=11568/851932|s2cid=1077549|hdl-access=free}}</ref><ref name=":19">{{Cite journal|last1=Gallicchio|first1=Claudio|last2=Micheli|first2=Alessio|last3=Pedrelli|first3=Luca|date=December 2018|title=Design of deep echo state networks|journal=Neural Networks|volume=108|pages=33–47|doi=10.1016/j.neunet.2018.08.002|pmid=30138751|issn=0893-6080|hdl=11568/939082|s2cid=52075702|hdl-access=free}}</ref> allows the development of efficiently trained models for hierarchical processing of temporal data, while also enabling investigation of the inherent role of layered composition in [[recurrent neural network]]s.
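A rough sketch of the layering idea, simplified relative to the published DeepESN design (e.g. no leaky integration; all sizes are illustrative): each reservoir layer is driven by the states of the layer below, and the readout sees the concatenation of all layer states.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

def reservoir_layer(n_in, n_units, rho=0.9):
    """One fixed random reservoir layer: (input weights, recurrent weights)."""
    W_in = rng.uniform(-0.5, 0.5, size=(n_units, n_in))
    W = rng.uniform(-0.5, 0.5, size=(n_units, n_units))
    return W_in, W * (rho / np.abs(np.linalg.eigvals(W)).max())

def run_deep(u, layers):
    """Drive a stack of reservoirs; layer k's input is layer k-1's state."""
    states = [np.zeros(W.shape[0]) for _, W in layers]
    history = []
    for u_t in u:
        inp = np.atleast_1d(u_t)
        for k, (W_in, W) in enumerate(layers):
            states[k] = np.tanh(W @ states[k] + W_in @ inp)
            inp = states[k]                        # feed the state upward
        history.append(np.concatenate(states))     # readout sees all layers
    return np.array(history)

layers = [reservoir_layer(1, 50), reservoir_layer(50, 50)]  # two stacked layers
H = run_deep(np.sin(0.1 * np.arange(200)), layers)          # states for a readout
</syntaxhighlight>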
== Quantum reservoir computing ==
Quantum reservoir computing may use the nonlinear nature of quantum mechanical interactions or processes to form the characteristic nonlinear reservoirs<ref name=":2" /><ref name=":3" /><ref name="CN19">{{Cite journal|last1=Chen|first1=Jiayin|last2=Nurdin|first2=Hendra|date=2019-05-15|title=Learning nonlinear input–output maps with dissipative quantum systems|url=https://link.springer.com/article/10.1007%2Fs11128-019-2311-9|journal=Quantum Information Processing|volume=18|issue=7|page=198|doi=10.1007/s11128-019-2311-9|arxiv=1901.01653|bibcode=2019QuIP...18..198C|s2cid=57573677}}</ref><ref name="JNY20"/> but may also be done with linear reservoirs when the injection of the input to the reservoir creates the nonlinearity.<ref name=":5">{{cite journal|last1=Nokkala|first1=Johannes|last2=Martínez-Peña|first2=Rodrigo|last3=Giorgi|first3=Gian Luca|last4=Parigi|first4=Valentina|last5=Soriano|first5=Miguel C.|last6=Zambrini|first6=Roberta|title=Gaussian states of continuous-variable quantum systems provide universal and versatile reservoir computing|journal=Communications Physics|year=2021|volume=4|issue=1|page=53|doi=10.1038/s42005-021-00556-w|arxiv=2006.04821|bibcode=2021CmPhy...4...53N|s2cid=234355683}}</ref> The marriage of machine learning and quantum devices is leading to the emergence of quantum neuromorphic computing as a new research area.<ref name="MG20">{{cite journal|last1=Marković|first1=Danijela |last2=Grollier|first2=Julie|title=Quantum Neuromorphic Computing|journal=Applied Physics Letters|volume=117|pages=150501|date=2020-10-13|issue=15|doi=10.1063/5.0020014|arxiv=2006.15111|bibcode=2020ApPhL.117o0501M |s2cid=210920543|url=https://doi.org/10.1063/5.0020014}}</ref>
=== Types ===
==== Gaussian states of interacting quantum harmonic oscillators ====
Gaussian states are a paradigmatic class of states of [[Continuous-variable quantum information|continuous variable quantum systems]].<ref name=":20">{{cite arXiv|last1=Ferraro|first1=Alessandro|last2=Olivares|first2=Stefano|last3=Paris|first3=Matteo G. A.|date=2005-03-31|title=Gaussian states in continuous variable quantum information|eprint=quant-ph/0503237}}</ref> Although they can nowadays be created and manipulated in, e.g., state-of-the-art optical platforms,<ref name=":21">{{Cite journal|last1=Roslund|first1=Jonathan|last2=de Araújo|first2=Renné Medeiros|last3=Jiang|first3=Shifeng|last4=Fabre|first4=Claude|last5=Treps|first5=Nicolas|date=2013-12-15|title=Wavelength-multiplexed quantum networks with ultrafast frequency combs|url=https://www.nature.com/articles/nphoton.2013.340|journal=Nature Photonics|language=en|volume=8|issue=2|pages=109–112|doi=10.1038/nphoton.2013.340|arxiv=1307.1216|s2cid=2328402|issn=1749-4893}}</ref> and are naturally robust to [[Quantum decoherence|decoherence]], it is well known that they are not sufficient for, e.g., universal [[quantum computing]], because transformations that preserve the Gaussian nature of a state are linear.<ref name=":22">{{Cite journal|last1=Bartlett|first1=Stephen D.|last2=Sanders|first2=Barry C.|last3=Braunstein|first3=Samuel L.|last4=Nemoto|first4=Kae|date=2002-02-14|title=Efficient Classical Simulation of Continuous Variable Quantum Information Processes|url=https://link.aps.org/doi/10.1103/PhysRevLett.88.097904|journal=Physical Review Letters|volume=88|issue=9|pages=097904|doi=10.1103/PhysRevLett.88.097904|pmid=11864057|arxiv=quant-ph/0109047|bibcode=2002PhRvL..88i7904B|s2cid=2161585}}</ref> Normally, linear dynamics would not be sufficient for nontrivial reservoir computing either. It is nevertheless possible to harness such dynamics for reservoir computing purposes by considering a network of interacting [[quantum harmonic oscillator]]s and injecting the input by periodic state resets of a subset of the oscillators. With a suitable choice of how the states of this subset of oscillators depend on the input, the observables of the rest of the oscillators can become nonlinear functions of the input suitable for reservoir computing; indeed, thanks to the properties of these functions, even universal reservoir computing becomes possible by combining the observables with a polynomial readout function.<ref name=":5" /> In principle, such reservoir computers could be implemented with controlled multimode [[Optical parametric oscillator|optical parametric processes]];<ref name=":23">{{Cite journal|last1=Nokkala|first1=J.|last2=Arzani|first2=F.|last3=Galve|first3=F.|last4=Zambrini|first4=R.|last5=Maniscalco|first5=S.|last6=Piilo|first6=J.|last7=Treps|first7=N.|last8=Parigi|first8=V.|date=2018-05-09|title=Reconfigurable optical implementation of quantum complex networks|url=https://doi.org/10.1088%2F1367-2630%2Faabc77|journal=New Journal of Physics|language=en|volume=20|issue=5|pages=053024|doi=10.1088/1367-2630/aabc77|arxiv=1708.08726|bibcode=2018NJPh...20e3024N|s2cid=119091176|issn=1367-2630}}</ref> however, efficient extraction of the output from the system is challenging, especially in the quantum regime where [[Measurement in quantum mechanics#State change due to measurement|measurement back-action]] must be taken into account.
==== 2-D quantum dot lattices ====
In this architecture, randomized coupling between lattice sites grants the reservoir the “black box” property inherent to reservoir processors.<ref name=":2" /> The reservoir is then excited by an incident [[optical field]], which acts as the input. Readout occurs in the form of occupation numbers of lattice sites, which are naturally nonlinear functions of the input.<ref name=":2" />
==== Nuclear spins in a molecular solid ====
In this architecture, quantum mechanical coupling between spins of neighboring atoms within the [[molecular solid]] provides the non-linearity required to create the higher-dimensional computational space.<ref name=":3" /> The reservoir is then excited by radiofrequency [[electromagnetic radiation]] tuned to the [[resonance]] frequencies of relevant [[Spin (physics)|nuclear spins]].<ref name=":3" /> Readout occurs by measuring the nuclear spin states.<ref name=":3" />
==== Reservoir computing on gate-based near-term superconducting quantum computers ====
The most prevalent model of quantum computing is the gate-based model, where quantum computation is performed by sequential applications of unitary quantum gates on the qubits of a quantum computer.<ref name=":24">{{Citation|last1=Nielsen|first1=Michael|last2=Chuang|first2=Isaac|title=Quantum Computation and Quantum Information|publisher=Cambridge University Press Cambridge|date=2010|edition=2}}</ref> A theory for the implementation of reservoir computing on a gate-based quantum computer, with proof-of-principle demonstrations on a number of IBM superconducting [[NISQ era|noisy intermediate-scale quantum]] (NISQ) computers,<ref name=":25">[[John Preskill]]. "Quantum Computing in the NISQ era and beyond." Quantum 2, 79 (2018)</ref> has been reported.<ref name="JNY20"/>
== See also ==
* [[Deep learning]]
* [[Extreme learning machine]]s
== References ==
{{Reflist|30em}}
== Further reading ==
* [http://www.nature.com/ncomms/journal/v2/n9/full/ncomms1476.html?WT.ec_id=NCOMMS-20110913 Reservoir Computing using delay systems], Nature Communications 2011
 
* [http://www.nature.com/srep/2012/120227/srep00287/full/srep00287.html Optoelectronic Reservoir Computing], Scientific Reports February 2012
* Optoelectronic Reservoir Computing, Optics Express 2012
* All-optical Reservoir Computing, Nature Communications 2013
* [http://www.mitpressjournals.org/doi/10.1162/NECO_a_00694#.WL4P9iHyvIo Memristor Models for Machine learning], Neural Computation 2014 [https://arxiv.org/abs/1406.2210 arxiv]
[[Category:Artificial neural networks]]
----
This entry was translated by 1210080212, a volunteer from the neural dynamics modeling reading group (神经动力学模型读书会), and edited by [[用户:薄荷|薄荷]]. If you find any problems, please leave a message on the discussion page.
<noinclude>
'''The content of this entry is drawn from Wikipedia and other public sources, and is released under the CC 3.0 license.'''
   −
<small>This page was moved from [[wikipedia:en:Reservoir computing]]. Its edit history can be viewed at [[储备池计算/edithistory]]</small></noinclude>