====深层前馈神经网络 Deep feedforward neural networks ====
 
{{Main|Deep learning}}
 
[[Deep learning]] is any [[artificial neural network]] that can learn a long chain of causal links{{dubious|date=July 2019}}. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a [[deep learning#Credit assignment|"credit assignment path"]] (CAP) depth of seven{{citation needed|date=July 2019}}. Many deep learning systems need to be able to learn chains ten or more causal links in length.<ref name="schmidhuber2015"/> Deep learning has transformed many important subfields of artificial intelligence{{why|date=July 2019}}, including [[computer vision]], [[speech recognition]], [[natural language processing]] and others.<ref name="goodfellow2016">Ian Goodfellow, Yoshua Bengio, and Aaron Courville (2016). Deep Learning. MIT Press. [http://www.deeplearningbook.org Online] {{webarchive|url=https://web.archive.org/web/20160416111010/http://www.deeplearningbook.org/ |date=16 April 2016 }}</ref><ref name="HintonDengYu2012">{{cite journal | last1 = Hinton | first1 = G. | last2 = Deng | first2 = L. | last3 = Yu | first3 = D. | last4 = Dahl | first4 = G. | last5 = Mohamed | first5 = A. | last6 = Jaitly | first6 = N. | last7 = Senior | first7 = A. | last8 = Vanhoucke | first8 = V. | last9 = Nguyen | first9 = P. | last10 = Sainath | first10 = T. | last11 = Kingsbury | first11 = B. | year = 2012 | title = Deep Neural Networks for Acoustic Modeling in Speech Recognition – The shared views of four research groups | url = | journal = IEEE Signal Processing Magazine | volume = 29 | issue = 6| pages = 82–97 | doi=10.1109/msp.2012.2205597}}</ref><ref name="schmidhuber2015">{{cite journal |last=Schmidhuber |first=J. |year=2015 |title=Deep Learning in Neural Networks: An Overview |journal=Neural Networks |volume=61 |pages=85–117 |arxiv=1404.7828 |doi=10.1016/j.neunet.2014.09.003|pmid=25462637 }}</ref>
 
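As a rough illustration of the CAP-depth count above (six hidden layers plus the output layer giving a credit assignment path of depth seven), the following NumPy sketch builds such a network. The layer widths, activations, and random weights are illustrative assumptions, not taken from the article:

```python
import numpy as np

# Sketch of a feedforward network whose credit-assignment-path (CAP)
# depth is the number of hidden layers plus the output layer:
# 6 hidden layers -> CAP depth 7.

rng = np.random.default_rng(0)

# input, six hidden layers, output (widths are arbitrary for illustration)
layer_sizes = [4, 8, 8, 8, 8, 8, 8, 3]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Propagate x through the network; each weight matrix is one causal link."""
    for w in weights[:-1]:
        x = np.tanh(x @ w)        # six hidden layers
    return x @ weights[-1]        # linear output layer

cap_depth = len(weights)          # 6 hidden links + 1 output link = 7
y = forward(rng.standard_normal(4))
print(cap_depth, y.shape)         # prints: 7 (3,)
```

Each matrix multiplication is one link in the causal chain, so the depth of the longest path from input to output equals the number of weight matrices.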
--[[用户:Thingamabob|Thingamabob]] ([[用户讨论:Thingamabob|讨论]]) No standard Chinese translation has been found for "Credit Assignment Path".
 
[[用户:Qige96|Ricky]] ([[用户讨论:Qige96|讨论]]) The usual Chinese rendering is "信用分配路径", but "credit" here really means contribution or standing: what the CAP is meant to capture is how much each neuron along the whole chain contributes to the final result.
In 2016, DeepMind's AlphaGo Lee, which combined CNNs with 12 convolutional layers and reinforcement learning, defeated a top Go champion.
 
====深层递归神经网络 Deep recurrent neural networks ====
 