Particle swarm optimization

From 集智百科 (Swarma Encyclopedia) - Complex Systems | Artificial Intelligence | Complexity Science | Complex Networks | Self-organization

This entry was translated by Yuling.


A particle swarm searching for the global minimum of a function



In computational science, particle swarm optimization (PSO)[1] is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.



PSO is originally attributed to Kennedy, Eberhart and Shi[2][3] and was first intended for simulating social behaviour,[4] as a stylized representation of the movement of organisms in a bird flock or fish school. The algorithm was simplified and it was observed to be performing optimization. The book by Kennedy and Eberhart[5] describes many philosophical aspects of PSO and swarm intelligence. An extensive survey of PSO applications is made by Poli.[6][7] Recently, a comprehensive review on theoretical and experimental works on PSO has been published by Bonyadi and Michalewicz.[8]




PSO is a metaheuristic as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as PSO do not guarantee an optimal solution is ever found. Also, PSO does not use the gradient of the problem being optimized, which means PSO does not require that the optimization problem be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods.


Algorithm


A basic variant of the PSO algorithm works by having a population (called a swarm) of candidate solutions (called particles). These particles are moved around in the search-space according to a few simple formulae. The movements of the particles are guided by their own best known position in the search-space as well as the entire swarm's best known position. When improved positions are being discovered these will then come to guide the movements of the swarm. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.



Formally, let f: ℝⁿ → ℝ be the cost function which must be minimized. The function takes a candidate solution as an argument in the form of a vector of real numbers and produces a real number as output which indicates the objective function value of the given candidate solution. The gradient of f is not known. The goal is to find a solution a for which f(a) ≤ f(b) for all b in the search-space, which would mean a is the global minimum.



Let S be the number of particles in the swarm, each having a position xi ∈ ℝⁿ in the search-space and a velocity vi ∈ ℝⁿ. Let pi be the best known position of particle i and let g be the best known position of the entire swarm. A basic PSO algorithm is then:





 for each particle i = 1, ..., S do
    Initialize the particle's position with a uniformly distributed random vector: xi ~ U(blo, bup)
    Initialize the particle's best known position to its initial position: pi ← xi
    if f(pi) < f(g) then
        update the swarm's best known position: g ← pi
    Initialize the particle's velocity: vi ~ U(-|bup-blo|, |bup-blo|)
 while a termination criterion is not met do:
    for each particle i = 1, ..., S do
       for each dimension d = 1, ..., n do
          Pick random numbers: rp, rg ~ U(0,1)
          Update the particle's velocity: vi,d ← ω vi,d + φp rp (pi,d-xi,d) + φg rg (gd-xi,d)
       Update the particle's position: xi ← xi + vi
       if f(xi) < f(pi) then
          Update the particle's best known position: pi ← xi
          if f(pi) < f(g) then
             Update the swarm's best known position: g ← pi

The values blo and bup represent the lower and upper boundaries of the search-space. The termination criterion can be the number of iterations performed, or a solution with an adequate objective function value. The parameters ω, φp, and φg are selected by the practitioner and control the behaviour and efficacy of the PSO method, see below.

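For concreteness, the pseudocode above can be sketched in Python as follows. This is a minimal illustration rather than a reference implementation: the swarm size S = 40, the iteration budget, and the parameter values ω = 0.7, φp = φg = 1.5 are common but arbitrary choices (see Parameter selection below).

```python
import random

def pso(f, n, blo, bup, S=40, omega=0.7, phi_p=1.5, phi_g=1.5, iters=200):
    """Minimize f over [blo, bup]^n with the basic PSO loop given above."""
    # Initialization, mirroring the pseudocode.
    x = [[random.uniform(blo, bup) for _ in range(n)] for _ in range(S)]
    span = abs(bup - blo)
    v = [[random.uniform(-span, span) for _ in range(n)] for _ in range(S)]
    p = [xi[:] for xi in x]              # each particle's best known position
    g = min(p, key=f)[:]                 # the swarm's best known position
    # Main loop: velocity update, position update, best-position bookkeeping.
    for _ in range(iters):
        for i in range(S):
            for d in range(n):
                rp, rg = random.random(), random.random()
                v[i][d] = (omega * v[i][d]
                           + phi_p * rp * (p[i][d] - x[i][d])
                           + phi_g * rg * (g[d] - x[i][d]))
            x[i] = [xd + vd for xd, vd in zip(x[i], v[i])]
            if f(x[i]) < f(p[i]):
                p[i] = x[i][:]
                if f(p[i]) < f(g):
                    g = p[i][:]
    return g

# Demo: minimize the 5-dimensional sphere function f(x) = Σ xi².
best = pso(lambda x: sum(xi * xi for xi in x), n=5, blo=-10.0, bup=10.0)
```

Because every update draws fresh random numbers, results vary from run to run; on a smooth unimodal function such as the sphere, the swarm typically collapses onto the minimum within a few hundred iterations.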

Parameter selection


File: PSO Meta-Fitness Landscape (12 benchmark problems).JPG
Performance landscape showing how a simple PSO variant performs in aggregate on several benchmark problems when varying two PSO parameters.



The choice of PSO parameters can have a large impact on optimization performance. Selecting PSO parameters that yield good performance has therefore been the subject of much research.



The PSO parameters can also be tuned by using another overlaying optimizer, a concept known as meta-optimization, or even fine-tuned during the optimization, e.g., by means of fuzzy logic.



Parameters have also been tuned for various optimization scenarios.


Neighbourhoods and topologies


The topology of the swarm defines the subset of particles with which each particle can exchange information. Thus, different topologies have been used to control the flow of information among particles. For instance, in local topologies, particles only share information with a subset of particles. This subset can be a geometrical one – for example "the m nearest particles" – or, more often, a social one, i.e. a set of particles that does not depend on any distance. In such cases, the PSO variant is said to be local best (vs global best for the basic PSO).



A commonly used swarm topology is the ring, in which each particle has just two neighbours, but there are many others. The topology is not necessarily static. In fact, since the topology is related to the diversity of communication of the particles, some efforts have been made to create adaptive topologies (SPSO, APSO, stochastic star, TRIBES, Cyber Swarm, and C-PSO).

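In a ring topology, the velocity update uses a local best in place of the single global best g. A sketch of the neighbourhood lookup, assuming `p` holds the particles' personal-best positions and `f` is the cost function:

```python
def local_best(p, f, i):
    # Ring topology: particle i exchanges information only with its two
    # neighbours i-1 and i+1 (indices wrap around), so it is guided by a
    # local best instead of the single global best g.
    S = len(p)
    neighbourhood = [p[(i - 1) % S], p[i], p[(i + 1) % S]]
    return min(neighbourhood, key=f)

# With personal bests p and cost f(x) = x[0]**2, particle 0's neighbourhood
# is {p[3], p[0], p[1]}:
p = [[3.0], [1.0], [5.0], [2.0]]
f = lambda x: x[0] ** 2
local_best(p, f, 0)   # → [1.0]
```

Since information now spreads one neighbour at a time, good positions propagate through the ring slowly, which tends to preserve diversity at the cost of convergence speed.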

Inner workings


There are several schools of thought as to why and how the PSO algorithm can perform optimization.



A common belief amongst researchers is that the swarm behaviour varies between exploratory behaviour, that is, searching a broader region of the search-space, and exploitative behaviour, that is, a locally oriented search so as to get closer to a (possibly local) optimum. This school of thought has been prevalent since the inception of PSO. This school of thought contends that the PSO algorithm and its parameters must be chosen so as to properly balance between exploration and exploitation to avoid premature convergence to a local optimum yet still ensure a good rate of convergence to the optimum. This belief is the precursor of many PSO variants, see below.



Another school of thought is that the behaviour of a PSO swarm is not well understood in terms of how it affects actual optimization performance, especially for higher-dimensional search-spaces and optimization problems that may be discontinuous, noisy, and time-varying. This school of thought merely tries to find PSO algorithms and parameters that cause good performance regardless of how the swarm behaviour can be interpreted in relation to e.g. exploration and exploitation. Such studies have led to the simplification of the PSO algorithm, see below.



Convergence


In relation to PSO the word convergence typically refers to two different definitions:



  • Convergence of the sequence of solutions (also called stability analysis), in which all particles have converged to a point in the search-space, which may or may not be the optimum.

  • Convergence to a local optimum where all personal bests p or, alternatively, the swarm's best known position g, approaches a local optimum of the problem, regardless of how the swarm behaves.



Convergence of the sequence of solutions has been investigated for PSO. Such analyses typically rely on simplified models of the swarm, but it has been shown that these simplifications do not affect the boundaries found by these studies for parameters where the swarm is convergent. Considerable effort has been made in recent years to weaken the modelling assumptions utilized during the stability analysis of PSO, with the most recent generalized result applying to numerous PSO variants and utilizing what were shown to be the minimal necessary modeling assumptions.



Convergence to a local optimum has been analyzed for PSO. It has been proven that PSO needs some modification to guarantee finding a local optimum.



This means that determining convergence capabilities of different PSO algorithms and parameters therefore still depends on empirical results. One attempt at addressing this issue is the development of an "orthogonal learning" strategy for an improved use of the information already existing in the relationship between p and g, so as to form a leading converging exemplar and to be effective with any PSO topology. The aims are to improve the performance of PSO overall, including faster global convergence, higher solution quality, and stronger robustness. However, such studies do not provide theoretical evidence to actually prove their claims.


Adaptive mechanisms


Without the need for a trade-off between convergence ('exploitation') and divergence ('exploration'), an adaptive mechanism can be introduced. Adaptive particle swarm optimization (APSO) features better search efficiency than standard PSO. APSO can perform global search over the entire search space with a higher convergence speed. It enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, thereby improving search effectiveness and efficiency at the same time. Also, APSO can act on the globally best particle to jump out of likely local optima. However, while APSO introduces new algorithm parameters, it nonetheless does not add design or implementation complexity.


Variants





Numerous variants of even a basic PSO algorithm are possible. For example, there are different ways to initialize the particles and velocities (e.g. start with zero velocities instead), to dampen the velocity, to only update pi and g after the entire swarm has been updated, and so on. Some of these choices and their possible performance impact have been discussed in the literature.



A series of standard implementations have been created by leading researchers, "intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community. Having a well-known, strictly-defined standard algorithm provides a valuable point of comparison which can be used throughout the field of research to better test new advances." The latest is Standard PSO 2011 (SPSO-2011).



Hybridization


New and more sophisticated PSO variants are also continually being introduced in an attempt to improve optimization performance. There are certain trends in that research; one is to make a hybrid optimization method using PSO combined with other optimizers, and the incorporation of an effective learning method.



Alleviate premature convergence



Another research trend is to try and alleviate premature convergence (that is, optimization stagnation), e.g. by reversing or perturbing the movement of the PSO particles; another approach to deal with premature convergence is the use of multiple swarms (multi-swarm optimization). The multi-swarm approach can also be used to implement multi-objective optimization. Finally, there are developments in adapting the behavioural parameters of PSO during optimization.



Simplifications


Another school of thought is that PSO should be simplified as much as possible without impairing its performance; a general concept often referred to as Occam's razor. Simplifying PSO was originally suggested by Kennedy and has been studied more extensively, where it appeared that optimization performance was improved, and the parameters were easier to tune and they performed more consistently across different optimization problems.



Another argument in favour of simplifying PSO is that metaheuristics can only have their efficacy demonstrated empirically by doing computational experiments on a finite number of optimization problems. This means a metaheuristic such as PSO cannot be proven correct and this increases the risk of making errors in its description and implementation. A good example of this presented a promising variant of a genetic algorithm (another popular metaheuristic) but it was later found to be defective as it was strongly biased in its optimization search towards similar values for different dimensions in the search space, which happened to be the optimum of the benchmark problems considered. This bias was because of a programming error, and has now been fixed.



Initialization of velocities may require extra inputs. The Bare Bones PSO variant has been proposed in 2003 by James Kennedy, and does not need to use velocity at all.

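One common formulation of Bare Bones PSO samples each new coordinate from a Gaussian centred midway between the particle's own best and the swarm's best, with standard deviation equal to their distance. A sketch of a single position update under that formulation:

```python
import random

def bare_bones_step(p_i, g):
    # Bare Bones PSO (Kennedy, 2003): no velocity at all. Each new coordinate
    # is drawn from a Gaussian whose mean is the midpoint between the
    # particle's best known position p_i and the swarm's best g, and whose
    # standard deviation is the distance between them.
    return [random.gauss((pd + gd) / 2.0, abs(pd - gd))
            for pd, gd in zip(p_i, g)]
```

Note that when p_i and g coincide, the standard deviation is zero and the particle stays put, so the swarm stagnates once fully converged.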


Another simpler variant is the accelerated particle swarm optimization (APSO), which also does not need to use velocity and can speed up the convergence in many applications. A simple demo code of APSO is available.

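One velocity-free form often attributed to Yang replaces the position and velocity updates with a single equation pulling each particle toward the global best plus a random perturbation. A sketch, with alpha and beta as illustrative values rather than prescribed constants:

```python
import random

def apso_step(x, g, alpha=0.2, beta=0.5):
    # Accelerated PSO: one velocity-free update per particle,
    #     x <- (1 - beta) * x + beta * g + alpha * eps,   eps ~ N(0, 1).
    # beta controls attraction toward the global best g; alpha scales the
    # random exploration term.
    return [(1 - beta) * xi + beta * gi + alpha * random.gauss(0.0, 1.0)
            for xi, gi in zip(x, g)]
```

With alpha decreased over the iterations, the perturbation shrinks and the swarm settles onto g, which is where the reported convergence speed-up comes from.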


Multi-objective optimization


PSO has also been applied to multi-objective problems, in which the objective function comparison takes Pareto dominance into account when moving the PSO particles, and non-dominated solutions are stored so as to approximate the Pareto front.


Binary, discrete, and combinatorial


As the PSO equations given above work on real numbers, a commonly used method to solve discrete problems is to map the discrete search space to a continuous domain, to apply a classical PSO, and then to demap the result. Such a mapping can be very simple (for example by just using rounded values) or more sophisticated.

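The simplest such mapping is rounding. A sketch of lifting a discrete objective so a classical real-valued PSO can optimize it (the helper names are illustrative, not standard API):

```python
def discretize(x):
    # Simplest demapping: round each real coordinate to the nearest integer.
    return [int(round(xi)) for xi in x]

def lift(f_discrete):
    # Wrap a discrete objective so a classical real-valued PSO can drive it:
    # the swarm moves in R^n, but every evaluation happens on the integer grid.
    return lambda x: f_discrete(discretize(x))

# A discrete cost over integer triples, lifted for a continuous optimizer:
f = lift(lambda k: abs(k[0] - 3) + abs(k[1] - 7) + abs(k[2]))
f([2.6, 7.2, -0.4])   # → 0
```

Rounding makes the lifted objective piecewise constant, so many distinct continuous positions share one cost value; more sophisticated mappings try to avoid the resulting plateaus.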


However, it can be noted that the equations of movement make use of operators that perform four actions:


  • computing the difference of two positions. The result is a velocity (more precisely a displacement)

  • multiplying a velocity by a numerical coefficient

  • adding two velocities

  • applying a velocity to a position


Usually a position and a velocity are represented by n real numbers, and these operators are simply -, *, +, and again +. But all these mathematical objects can be defined in a completely different way, in order to cope with binary problems (or more generally discrete ones), or even combinatorial ones. One approach is to redefine the operators based on sets.


See also




Artificial bee colony algorithm

Bees algorithm

Derivative-free optimization

Multi-swarm optimization

Particle filter

Swarm intelligence

Fish School Search

Dispersive flies optimisation

References





External links




  • A brief video of particle swarms optimizing three benchmark functions.
  • Liu, Yang (2009). "Automatic calibration of a rainfall–runoff model using a fast and elitist multi-objective particle swarm algorithm". Expert Systems with Applications. 36 (5): 9533–9538. doi:10.1016/j.eswa.2008.10.086.


Category:Metaheuristics


Category:Evolutionary algorithms



This page was moved from wikipedia:en:Particle swarm optimization. Its edit history can be viewed at 粒子群优化算法/edithistory