{{#seo:
|keywords=artificial neural network,statistical learning,machine learning,data mining,deep learning
|description=artificial neural network,machine learning,deep learning
}}
This entry was translated and edited by Cynthia, reviewed by 【审校者】, with final review by 【总审校者】. Translated from the Wikipedia entry [https://en.wikipedia.org/wiki/Artificial_neural_network Artificial neural networks].
[[File:Colored neural network.svg|thumb|300px|An artificial neural network is an interconnected group of nodes, akin to the vast network of [[neuron]]s in a [[brain]]. Here, each circular node represents an [[artificial neuron]] and an arrow represents a connection from the output of one artificial neuron to the input of another.]]
'''Artificial neural networks''' ('''ANNs''') or '''[[Connectionism|connectionist]] systems''' are computing systems vaguely inspired by the [[biological neural network]]s that constitute animal [[brain]]s.<ref>{{Cite web|url=https://www.frontiersin.org/research-topics/4817/artificial-neural-networks-as-models-of-neural-information-processing|title=Artificial Neural Networks as Models of Neural Information Processing {{!}} Frontiers Research Topic|language=en|access-date=2018-02-20}}</ref> Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules. For example, in [[image recognition]], they might learn to identify images that contain cats by analyzing example images that have been manually [[Labeled data|labeled]] as "cat" or "no cat" and using the results to identify cats in other images. They do this without any prior knowledge about cats, e.g., that they have fur, tails, whiskers and cat-like faces. Instead, they automatically generate identifying characteristics from the learning material that they process.
    
An ANN is based on a collection of connected units or nodes called [[artificial neuron]]s which loosely model the [[neuron]]s in a biological [[brain]]. Each connection, like the [[Synapse|synapses]] in a biological [[brain]], can transmit a signal from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it.
In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called 'edges'. Artificial neurons and edges typically have a [[weight (mathematics)|weight]] that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times.
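The computation described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original article: each neuron takes the weighted sum of its inputs plus a bias and passes it through a non-linear activation (a sigmoid here; the specific weights and layer sizes are arbitrary example values).

```python
import math

def neuron_output(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through a
    # non-linear activation function (here a sigmoid).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer_output(inputs, weight_matrix, biases):
    # One layer: every neuron sees the same inputs but has its
    # own row of weights and its own bias.
    return [neuron_output(inputs, w, b)
            for w, b in zip(weight_matrix, biases)]

# A tiny two-layer network: 3 inputs -> 2 hidden neurons -> 1 output.
x = [0.5, -1.0, 2.0]
hidden = layer_output(x, [[0.1, 0.4, -0.2], [0.3, -0.5, 0.2]], [0.0, 0.1])
y = layer_output(hidden, [[1.0, -1.0]], [0.0])
```

Learning would then consist of adjusting the weight matrices and biases so that `y` moves toward the desired output for each training example.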
    
The original goal of the ANN approach was to solve problems in the same way that a [[human brain]] would. However, over time, attention moved to performing specific tasks, leading to deviations from [[biology]]. ANNs have been used on a variety of tasks, including [[computer vision]], [[speech recognition]], [[machine translation]], [[social network]] filtering, [[general game playing|playing board and video games]] and [[medical diagnosis]].
 
{{toclimit|3}}
== History ==
 
[[Warren McCulloch]] and [[Walter Pitts]]<ref>{{cite journal|last=McCulloch|first=Warren|author2=Walter Pitts|title=A Logical Calculus of Ideas Immanent in Nervous Activity|journal=Bulletin of Mathematical Biophysics|year=1943|volume=5|pages=115–133|doi=10.1007/BF02478259|issue=4}}</ref> (1943) created a computational model for neural networks based on [[mathematics]] and [[algorithm]]s called threshold logic. This model paved the way for neural network research to split into two approaches. One approach focused on biological processes in the brain while the other focused on the application of neural networks to [[artificial intelligence]]. This work led to work on nerve networks and their link to [[Finite state machine|finite automata]].<ref>{{Cite news|url=https://www.degruyter.com/view/books/9781400882618/9781400882618-002/9781400882618-002.xml|title=Representation of Events in Nerve Nets and Finite Automata|last=Kleene|first=S.C.|date=|work=Annals of Mathematics Studies|access-date=2017-06-17|publisher=Princeton University Press|year=1956|issue=34|pages=3–41|language=en}}</ref>
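The threshold-logic idea can be illustrated with a small sketch (an illustration added here, not from the original article): a McCulloch–Pitts unit outputs 1 exactly when enough of its binary inputs are active, which is sufficient to realize simple logic gates.

```python
def mcculloch_pitts(inputs, threshold):
    # A threshold-logic unit: fires (outputs 1) iff the number of
    # active binary (0/1) inputs reaches the threshold.
    return 1 if sum(inputs) >= threshold else 0

# AND and OR over two binary inputs, realized as threshold logic:
# AND needs both inputs active, OR needs at least one.
AND = lambda a, b: mcculloch_pitts([a, b], threshold=2)
OR = lambda a, b: mcculloch_pitts([a, b], threshold=1)
```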
=== Hebbian learning ===
    
In the late 1940s, [[Donald O. Hebb|D.O. Hebb]]<ref>{{cite book|url={{google books |plainurl=y |id=ddB4AgAAQBAJ}}|title=The Organization of Behavior|last=Hebb|first=Donald|publisher=Wiley|year=1949|isbn=978-1-135-63190-1|location=New York|pages=}}</ref> created a learning hypothesis based on the mechanism of [[Neuroplasticity|neural plasticity]] that became known as [[Hebbian learning]]. Hebbian learning is [[unsupervised learning]]. This evolved into models for [[long term potentiation]]. Researchers started applying these ideas to computational models in 1948 with [[unorganized machine|Turing's B-type machines]].
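Hebb's rule is often summarized as "cells that fire together wire together": a weight grows when pre- and post-synaptic activity coincide. A minimal sketch of that update (an added illustration; the learning-rate value is arbitrary):

```python
def hebbian_update(weights, x, y, lr=0.1):
    # Hebb's rule: dw_i = lr * x_i * y, so a weight is strengthened
    # only when input x_i and output y are active together.
    return [w + lr * xi * y for w, xi in zip(weights, x)]
```

Repeatedly presenting the same input pattern strengthens exactly the weights on its active components; no labels or error signal are involved, which is why Hebbian learning counts as unsupervised.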
{{div col end}}
==External links==
 
* [http://www.dkriesel.com/en/science/neural_networks A brief introduction to Neural Networks] (PDF), illustrated 250p textbook covering the common kinds of neural networks (CC license).
 
* [http://deeplearning4j.org/neuralnet-overview.html An Introduction to Deep Neural Networks].
[[Category:Mathematical and quantitative methods (economics)]]
The content of this entry is translated from en.wikipedia.org and is licensed under the CC 3.0 license.