Hebbian theory


Hebbian theory is a neuroscientific theory claiming that an increase in synaptic efficacy arises from a presynaptic cell's repeated and persistent stimulation of a postsynaptic cell. It is an attempt to explain synaptic plasticity, the adaptation of brain neurons during the learning process. It was introduced by Donald Hebb in his 1949 book The Organization of Behavior.[1] The theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory. Hebb states it as follows:

Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability. ... When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.[1]

The theory is often summarized as "Cells that fire together wire together."[2] However, Hebb emphasized that cell A needs to "take part in firing" cell B, and such causality can occur only if cell A fires just before, not at the same time as, cell B. This aspect of causation in Hebb's work foreshadowed what is now known about spike-timing-dependent plasticity, which requires temporal precedence.[3]

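This required temporal asymmetry can be made concrete in a toy update rule. The sketch below illustrates the general spike-timing idea only and is not a specific published model: the exponential window, the time constant tau, and the learning rate eta are all assumed values.

<syntaxhighlight lang="python">
import math

def stdp_dw(t_pre, t_post, eta=0.01, tau=20.0):
    """Toy spike-timing-dependent weight change (all constants assumed).

    The weight grows only when the presynaptic spike precedes the
    postsynaptic spike (t_pre < t_post) -- the temporal precedence that
    Hebb's "takes part in firing" anticipates -- and shrinks for the
    anti-causal order.
    """
    dt = t_post - t_pre  # positive when the presynaptic cell fired first
    if dt > 0:
        return eta * math.exp(-dt / tau)  # causal pairing: strengthen
    return -eta * math.exp(dt / tau)      # anti-causal pairing: weaken

print(stdp_dw(t_pre=0.0, t_post=5.0))  # pre before post -> positive dw
print(stdp_dw(t_pre=5.0, t_post=0.0))  # post before pre -> negative dw
</syntaxhighlight>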

The theory attempts to explain associative or Hebbian learning, in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells. It also provides a biological basis for errorless learning methods for education and memory rehabilitation. In the study of neural networks in cognitive function, it is often regarded as the neuronal basis of unsupervised learning.


Hebbian engram and cell assembly theory

Hebbian theory concerns how neurons might connect themselves to become engrams. Hebb's theories on the form and function of cell assemblies can be understood from the following:[1]:70


The general idea is an old one, that any two cells or systems of cells that are repeatedly active at the same time will tend to become 'associated' so that activity in one facilitates activity in the other.


Hebb also wrote:[1]:63

When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell.


[D. Alan Allport] posits additional ideas regarding cell assembly theory and its role in forming engrams, along the lines of the concept of auto-association, described as follows:


If the inputs to a system cause the same pattern of activity to occur repeatedly, the set of active elements constituting that pattern will become increasingly strongly interassociated. That is, each element will tend to turn on every other element and (with negative weights) to turn off the elements that do not form part of the pattern. To put it another way, the pattern as a whole will become 'auto-associated'. We may call a learned (auto-associated) pattern an engram.[4]:44

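Allport's auto-association can be illustrated with a minimal Hopfield-style sketch. The binary ±1 coding, the particular stored pattern, and the outer-product weights here are assumptions of this example rather than Allport's own formalism.

<syntaxhighlight lang="python">
import numpy as np

# One binary pattern (+1 = active element, -1 = inactive); the values
# are arbitrary illustrative choices.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

# Hebbian outer-product weights: elements that are active together get
# positive weights; elements with opposite activity get negative ones.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)  # no self-connections

# Corrupt the cue by flipping two elements of the stored pattern.
cue = pattern.copy()
cue[[0, 3]] *= -1

# Each element is driven by all the others; iterating the dynamics
# restores the full pattern -- the "engram" is auto-associated.
state = cue
for _ in range(5):
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # True: the engram is recovered
</syntaxhighlight>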

Work in the laboratory of Eric Kandel has provided evidence for the involvement of Hebbian learning mechanisms at synapses in the marine gastropod Aplysia californica.[citation needed] Experiments on Hebbian synapse modification mechanisms at the central nervous system synapses of vertebrates are much more difficult to control than are experiments with the relatively simple peripheral nervous system synapses studied in marine invertebrates. Much of the work on long-lasting synaptic changes between vertebrate neurons (such as long-term potentiation) involves the use of non-physiological experimental stimulation of brain cells. However, some of the physiologically relevant synapse modification mechanisms that have been studied in vertebrate brains do seem to be examples of Hebbian processes. One such study reviews results from experiments that indicate that long-lasting changes in synaptic strengths can be induced by physiologically relevant synaptic activity working through both Hebbian and non-Hebbian mechanisms.


Principles and formulas

From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously, and reduces if they activate separately. Nodes that tend to be either both positive or both negative at the same time have strong positive weights, while those that tend to be opposite have strong negative weights.


The following is a formulaic description of Hebbian learning (many other descriptions are possible):


[math]\displaystyle{ \,w_{ij}=x_ix_j }[/math]


where [math]\displaystyle{ w_{ij} }[/math] is the weight of the connection from neuron [math]\displaystyle{ j }[/math] to neuron [math]\displaystyle{ i }[/math] and [math]\displaystyle{ x_i }[/math] the input for neuron [math]\displaystyle{ i }[/math]. Note that this is pattern learning (weights updated after every training example). In a Hopfield network, connections [math]\displaystyle{ w_{ij} }[/math] are set to zero if [math]\displaystyle{ i=j }[/math] (no reflexive connections allowed). With binary neurons (activations either 0 or 1), connections would be set to 1 if the connected neurons have the same activation for a pattern.

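A minimal sketch of this pattern-learning rule in Python (the function name and the explicit learning rate are conveniences of this example, not part of the formula above):

<syntaxhighlight lang="python">
import numpy as np

def hebbian_update(w, x, eta=1.0):
    """One pattern-learning step: the change in w[i, j] is x[i] * x[j].

    w   : (N, N) weight matrix, w[i, j] = connection from neuron j to i
    x   : (N,) activity vector for one training example
    eta : learning rate (an assumed convenience; the text's
          w_ij = x_i x_j corresponds to eta = 1)
    """
    return w + eta * np.outer(x, x)

x = np.array([1.0, -1.0, 1.0])
w = hebbian_update(np.zeros((3, 3)), x)
print(w)  # w[i, j] == x[i] * x[j]
</syntaxhighlight>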

When several training patterns are used the expression becomes an average of individual ones:


[math]\displaystyle{ w_{ij} = \frac{1}{p} \sum_{k=1}^p x_i^k x_j^k = \langle x_i x_j \rangle ,\, }[/math]


where [math]\displaystyle{ w_{ij} }[/math] is the weight of the connection from neuron [math]\displaystyle{ j }[/math] to neuron [math]\displaystyle{ i }[/math], [math]\displaystyle{ p }[/math] is the number of training patterns, [math]\displaystyle{ x_{i}^k }[/math] the [math]\displaystyle{ k }[/math]th input for neuron [math]\displaystyle{ i }[/math] and [math]\displaystyle{ \langle \cdot \rangle }[/math] the average over all training patterns. This is learning by epoch (weights updated after all the training examples are presented), with the last term applicable to both discrete and continuous training sets. Again, in a Hopfield network, connections [math]\displaystyle{ w_{ij} }[/math] are set to zero if [math]\displaystyle{ i=j }[/math] (no reflexive connections).

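The epoch-averaged rule is easy to check numerically. In this sketch the three training patterns are arbitrary illustrative choices; the zeroed diagonal follows the Hopfield convention just described.

<syntaxhighlight lang="python">
import numpy as np

# Three illustrative +-1 patterns (rows) over N = 4 neurons.
X = np.array([[ 1, -1,  1, -1],
              [ 1,  1, -1, -1],
              [-1, -1,  1,  1]], dtype=float)

p = X.shape[0]
# w_ij = (1/p) * sum_k x_i^k x_j^k -- learning by epoch: the weights
# are computed once, after all p patterns have been presented.
W = (X.T @ X) / p

np.fill_diagonal(W, 0)  # Hopfield convention: no reflexive connections
print(W)
</syntaxhighlight>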


A variation of Hebbian learning that takes into account phenomena such as blocking and many other neural learning phenomena is the mathematical model of Harry Klopf.[5] Klopf's model reproduces a great many biological phenomena, and is also simple to implement.


Relationship to unsupervised learning, stability, and generalization

Because of the simple nature of Hebbian learning, based only on the coincidence of pre- and post-synaptic activity, it may not be intuitively clear why this form of plasticity leads to meaningful learning. However, it can be shown that Hebbian plasticity does pick up the statistical properties of the input in a way that can be categorized as unsupervised learning.


This can be mathematically shown in a simplified example. Let us work under the simplifying assumption of a single rate-based neuron of rate [math]\displaystyle{ y(t) }[/math], whose inputs have rates [math]\displaystyle{ x_1(t) ... x_N(t) }[/math]. The response of the neuron [math]\displaystyle{ y(t) }[/math] is usually described as a linear combination of its input, [math]\displaystyle{ \sum_i w_ix_i }[/math], followed by a response function [math]\displaystyle{ f() }[/math]:

[math]\displaystyle{ y = f\left(\sum_{i=1}^N w_i x_i \right). }[/math]


As defined in the previous sections, Hebbian plasticity describes the evolution in time of the synaptic weight [math]\displaystyle{ w }[/math]:


[math]\displaystyle{ \frac{dw_i}{dt} = \eta x_i y. }[/math]


Assuming, for simplicity, an identity response function [math]\displaystyle{ f(a)=a }[/math], we can write

[math]\displaystyle{ \frac{dw_i}{dt} = \eta x_i \sum_{j=1}^N w_j x_j }[/math]



or in matrix form:

[math]\displaystyle{ \frac{d\mathbf{w}}{dt} = \eta \mathbf{x}\mathbf{x}^T\mathbf{w}. }[/math]



As in the previous section, if training by epoch is done, an average [math]\displaystyle{ \langle \cdot \rangle }[/math] over the discrete or continuous (time) training set of [math]\displaystyle{ \mathbf{x} }[/math] can be taken:

[math]\displaystyle{ \frac{d\mathbf{w}}{dt} = \langle \eta \mathbf{x}\mathbf{x}^T\mathbf{w} \rangle = \eta \langle \mathbf{x}\mathbf{x}^T \rangle \mathbf{w} = \eta C \mathbf{w}. }[/math]

where [math]\displaystyle{ C = \langle\, \mathbf{x}\mathbf{x}^T \rangle }[/math] is the correlation matrix of the input under the additional assumption that [math]\displaystyle{ \langle\mathbf{x}\rangle = 0 }[/math] (i.e. the average of the inputs is zero). This is a system of [math]\displaystyle{ N }[/math] coupled linear differential equations. Since [math]\displaystyle{ C }[/math] is symmetric, it is also diagonalizable, and the solution can be found, by working in its eigenvectors basis, to be of the form

[math]\displaystyle{ \mathbf{w}(t) = k_1e^{\eta\alpha_1 t}\mathbf{c}_1 + k_2e^{\eta\alpha_2 t}\mathbf{c}_2 + ... + k_Ne^{\eta\alpha_N t}\mathbf{c}_N }[/math]


where [math]\displaystyle{ k_i }[/math] are arbitrary constants, [math]\displaystyle{ \mathbf{c}_i }[/math] are the eigenvectors of [math]\displaystyle{ C }[/math] and [math]\displaystyle{ \alpha_i }[/math] their corresponding eigenvalues. Since a correlation matrix is always a positive-definite matrix, the eigenvalues are all positive, and one can easily see how the above solution is always exponentially divergent in time. This is an intrinsic problem due to this version of Hebb's rule being unstable, as in any network with a dominant signal the synaptic weights will increase or decrease exponentially. Intuitively, this is because whenever the presynaptic neuron excites the postsynaptic neuron, the weight between them is reinforced, causing an even stronger excitation in the future, and so forth, in a self-reinforcing way. One may think a solution is to limit the firing rate of the postsynaptic neuron by adding a non-linear, saturating response function [math]\displaystyle{ f }[/math], but in fact, it can be shown that for any neuron model, Hebb's rule is unstable.[6] Therefore, network models of neurons usually employ other learning theories such as BCM theory, Oja's rule,[7] or the generalized Hebbian algorithm.

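The runaway behaviour, and the effect of one of the stabilized alternatives, can be reproduced with a few lines of Euler integration. Everything numerical below (the toy inputs, step size, and rate constant) is an assumption of this sketch; Oja's rule is shown because it is among the alternatives just named.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# Zero-mean toy inputs with one dominant direction (assumed data).
X = rng.normal(size=(1000, 2)) * np.array([3.0, 1.0])
X -= X.mean(axis=0)
C = X.T @ X / len(X)  # input correlation matrix

eta, dt = 0.1, 0.01
w_hebb = np.array([0.1, 0.1])
w_oja = np.array([0.1, 0.1])

for _ in range(2000):
    # Plain Hebb, dw/dt = eta * C * w: grows exponentially.
    w_hebb = w_hebb + dt * eta * (C @ w_hebb)
    # Oja's rule: the -y^2 * w decay term keeps |w| bounded near 1.
    y = X @ w_oja
    w_oja = w_oja + dt * eta * (X.T @ y - (y @ y) * w_oja) / len(X)

print(np.linalg.norm(w_hebb))  # very large: exponential divergence
print(np.linalg.norm(w_oja))   # close to 1: stable
</syntaxhighlight>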

Regardless, even for the unstable solution above, one can see that, when sufficient time has passed, one of the terms dominates over the others, and

[math]\displaystyle{ \mathbf{w}(t) \approx e^{\eta\alpha^* t}\mathbf{c}^* }[/math]

where [math]\displaystyle{ \alpha^* }[/math] is the largest eigenvalue of [math]\displaystyle{ C }[/math]. At this time, the postsynaptic neuron performs the following operation:

[math]\displaystyle{ y \approx e^{\eta\alpha^* t}\mathbf{c}^* \mathbf{x} }[/math]

Because, again, [math]\displaystyle{ \mathbf{c}^* }[/math] is the eigenvector corresponding to the largest eigenvalue of the correlation matrix between the [math]\displaystyle{ x_i }[/math]s, this corresponds exactly to computing the first principal component of the input.

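This convergence to the first principal component can also be checked numerically. The data below are an assumed toy example; the direction of the integrated weight vector is compared against the leading eigenvector of the input correlation matrix.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
# Zero-mean toy data with correlated components (assumed example).
A = np.array([[2.0, 1.2],
              [1.2, 1.0]])
X = rng.normal(size=(5000, 2)) @ A
X -= X.mean(axis=0)

C = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(C)  # ascending eigenvalues
c_star = eigvecs[:, -1]               # eigenvector of largest eigenvalue

# Integrate dw/dt = eta * C * w and keep only the direction of w.
w = np.array([0.3, -0.1])
for _ in range(2000):
    w = w + 0.001 * (C @ w)
w_dir = w / np.linalg.norm(w)

# Up to sign, the direction matches the first principal component.
print(abs(w_dir @ c_star))  # close to 1.0
</syntaxhighlight>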

This mechanism can be extended to performing a full PCA (principal component analysis) of the input by adding further postsynaptic neurons, provided the postsynaptic neurons are prevented from all picking up the same principal component, for example by adding lateral inhibition in the postsynaptic layer. We have thus connected Hebbian learning to PCA, which is an elementary form of unsupervised learning, in the sense that the network can pick up useful statistical aspects of the input, and "describe" them in a distilled way in its output.[8]


Limitations

Despite the common use of Hebbian models for long-term potentiation, Hebb's principle does not cover all forms of synaptic long-term plasticity. Hebb did not postulate any rules for inhibitory synapses, nor did he make predictions for anti-causal spike sequences (presynaptic neuron fires after the postsynaptic neuron). Synaptic modification may not simply occur only between activated neurons A and B, but at neighboring synapses as well.[9] All forms of heterosynaptic and homeostatic plasticity are therefore considered non-Hebbian. An example is retrograde signaling to presynaptic terminals.[10] The compound most commonly identified as fulfilling this retrograde transmitter role is nitric oxide, which, due to its high solubility and diffusivity, often exerts effects on nearby neurons.[11] This type of diffuse synaptic modification, known as volume learning, is not included in the traditional Hebbian model.[12]


Hebbian learning account of mirror neurons

Hebbian learning and spike-timing-dependent plasticity have been used in an influential theory of how mirror neurons emerge.[13][14] Mirror neurons are neurons that fire both when an individual performs an action and when the individual sees[15] or hears[16] another perform a similar action. The discovery of these neurons has been very influential in explaining how individuals make sense of the actions of others, by showing that, when a person perceives the actions of others, the person activates the motor programs which they would use to perform similar actions. The activation of these motor programs then adds information to the perception and helps predict what the person will do next based on the perceiver's own motor program. A challenge has been to explain how individuals come to have neurons that respond both while performing an action and while hearing or seeing another perform similar actions.



Christian Keysers and David Perrett suggested that as an individual performs a particular action, the individual will see, hear, and feel the performing of the action. These re-afferent sensory signals will trigger activity in neurons responding to the sight, sound, and feel of the action. Because the activity of these sensory neurons will consistently overlap in time with those of the motor neurons that caused the action, Hebbian learning predicts that the synapses connecting neurons responding to the sight, sound, and feel of an action and those of the neurons triggering the action should be potentiated. The same is true while people look at themselves in the mirror, hear themselves babble, or are imitated by others. After repeated experience of this re-afference, the synapses connecting the sensory and motor representations of an action are so strong that the motor neurons start firing to the sound or the vision of the action, and a mirror neuron is created.


Evidence for that perspective comes from many experiments that show that motor programs can be triggered by novel auditory or visual stimuli after repeated pairing of the stimulus with the execution of the motor program (for a review of the evidence, see Giudice et al., 2009[17]). For instance, people who have never played the piano do not activate brain regions involved in playing the piano when listening to piano music. Five hours of piano lessons, in which the participant is exposed to the sound of the piano each time they press a key is proven sufficient to trigger activity in motor regions of the brain upon listening to piano music when heard at a later time.[18] Consistent with the fact that spike-timing-dependent plasticity occurs only if the presynaptic neuron's firing predicts the post-synaptic neuron's firing,[19] the link between sensory stimuli and motor programs also only seem to be potentiated if the stimulus is contingent on the motor program.


See also


  • Dale's principle
  • Coincidence detection in neurobiology
  • Leabra
  • Metaplasticity
  • Tetanic stimulation
  • Synaptotropic hypothesis
  • Neuroplasticity



References

  1. Hebb, D.O. (1949). The Organization of Behavior. New York: Wiley & Sons.
  2. Siegrid Löwel, Göttingen University; the exact sentence is: "neurons wire together if they fire together". Löwel, S. and Singer, W. (1992). "Selection of Intrinsic Horizontal Connections in the Visual Cortex by Correlated Neuronal Activity". Science. 255: 209–212 (published January 10, 1992). ISSN 0036-8075.
  3. Caporale N; Dan Y (2008). "Spike timing-dependent plasticity: a Hebbian learning rule". Annual Review of Neuroscience. 31: 25–46. doi:10.1146/annurev.neuro.31.060407.125639. PMID 18275283.
  4. Allport, D.A. (1985). "Distributed memory, modular systems and dysphasia". In Newman, S.K. Current Perspectives in Dysphasia. Edinburgh: Churchill Livingstone. ISBN 978-0-443-03039-0.
  5. Klopf, A. H. (1972). Brain function and adaptive systems—A heterostatic theory. Technical Report AFCRL-72-0164, Air Force Cambridge Research Laboratories, Bedford, MA.
  6. Euliano, Neil R. (1999-12-21). "Neural and Adaptive Systems: Fundamentals Through Simulations" (PDF). Wiley. Archived from the original (PDF) on 2015-12-25. Retrieved 2016-03-16.
  7. Shouval, Harel (2005-01-03). "The Physics of the Brain". The Synaptic Basis for Learning and Memory: A Theoretical Approach. The University of Texas Health Science Center at Houston. Archived from the original on 2007-06-10. Retrieved 2007-11-14.
  8. Gerstner, Wulfram; Kistler, Werner M.; Naud, Richard; Paninski, Liam (July 2014). "Chapter 19: Synaptic Plasticity and Learning". Neuronal Dynamics. Cambridge University Press. ISBN 978-1107635197. https://neuronaldynamics.epfl.ch/online/Ch19.S3.html.
  9. Horgan, John (May 1994). "Neural eavesdropping". Scientific American. 270 (5): 16. doi:10.1038/scientificamerican0594-16. PMID 8197441.
  10. Fitzsimonds, Reiko; Mu-Ming Poo (January 1998). "Retrograde Signaling in the Development and Modification of Synapses". Physiological Reviews. 78 (1): 143–170. doi:10.1152/physrev.1998.78.1.143. PMID 9457171.
  11. López, P; C.P. Araujo (2009). "A computational study of the diffuse neighbourhoods in biological and artificial neural networks" (PDF). International Joint Conference on Computational Intelligence.
  12. Mitchison, G; N. Swindale (October 1999). "Can Hebbian Volume Learning Explain Discontinuities in Cortical Maps?". Neural Computation. 11 (7): 1519–1526. doi:10.1162/089976699300016115. PMID 10490935.
  13. Keysers C; Perrett DI (2004). "Demystifying social cognition: a Hebbian perspective". Trends in Cognitive Sciences. 8 (11): 501–507. doi:10.1016/j.tics.2004.09.005. PMID 15491904.
  14. Keysers, C. (2011). The Empathic Brain.
  15. Gallese V; Fadiga L; Fogassi L; Rizzolatti G (1996). "Action recognition in the premotor cortex". Brain. 119 (Pt 2): 593–609. doi:10.1093/brain/119.2.593. PMID 8800951.
  16. Keysers C; Kohler E; Umilta MA; Nanetti L; Fogassi L; Gallese V (2003). "Audiovisual mirror neurons and action recognition". Exp Brain Res. 153 (4): 628–636. doi:10.1007/s00221-003-1603-5. PMID 12937876.
  17. Del Giudice M; Manera V; Keysers C (2009). "Programmed to learn? The ontogeny of mirror neurons" (PDF). Dev Sci. 12 (2): 350–363. doi:10.1111/j.1467-7687.2008.00783.x. PMID 19143807.
  18. Lahav A; Saltzman E; Schlaug G (2007). "Action representation of sound: audiomotor recognition network while listening to newly acquired actions". J Neurosci. 27 (2): 308–314. doi:10.1523/jneurosci.4822-06.2007. PMC 6672064. PMID 17215391.
  19. Bauer EP; LeDoux JE; Nader K (2001). "Fear conditioning and LTP in the lateral amygdala are sensitive to the same stimulus contingencies". Nat Neurosci. 4 (7): 687–688. doi:10.1038/89465. PMID 11426221.


External links

  • Overview
  • Hebbian Learning tutorial (Part 1: Novelty Filtering, Part 2: PCA)





This page was moved from wikipedia:en:Hebbian theory.