== Variants ==
 
<!-- Please provide complete references to journal / conference papers, tech reports, msc/phd theses, etc. when adding PSO variants. Please only include major research contributions and keep the descriptions short - there are probably hundreds of PSO variants in existence and Wikipedia is not the proper place to list them all. -->
      
Numerous variants of even a basic PSO algorithm are possible. For example, the particles and velocities can be initialized in different ways (e.g. starting with zero velocities), the velocity can be dampened, or '''p'''<sub>i</sub> and '''g''' can be updated only after the entire swarm has been updated. Some of these choices and their possible performance impact have been discussed in the literature.<ref name=carlisle01offtheshelf/>
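For illustration, the sketch below shows how some of these choices might appear in a basic PSO loop in Python. The function name <code>pso</code>, the parameter defaults, the damping factor <code>damp</code>, and the sphere test function are illustrative assumptions, not part of any standard implementation; the swarm starts with zero velocities and updates '''p'''<sub>i</sub> and '''g''' only after the whole swarm has moved.

<syntaxhighlight lang="python">
import random

def pso(f, dim, bounds, n_particles=30, iters=200,
        w=0.729, c1=1.49445, c2=1.49445, damp=1.0):
    """Illustrative basic PSO loop; parameter defaults are common choices, not a standard."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]   # variant choice: start with zero velocities
    p = [xi[:] for xi in x]                         # personal best positions p_i
    p_val = [f(xi) for xi in x]
    g_idx = min(range(n_particles), key=lambda i: p_val[i])
    g, g_val = p[g_idx][:], p_val[g_idx]            # global best g

    for _ in range(iters):
        for i in range(n_particles):                # move the whole swarm first ...
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = damp * (w * v[i][d]
                                  + c1 * r1 * (p[i][d] - x[i][d])
                                  + c2 * r2 * (g[d] - x[i][d]))  # variant choice: dampen the velocity
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
        for i in range(n_particles):                # ... then update p_i and g (synchronous variant)
            val = f(x[i])
            if val < p_val[i]:
                p[i], p_val[i] = x[i][:], val
                if val < g_val:
                    g, g_val = x[i][:], val
    return g, g_val

# Toy usage on the sphere function.
best_pos, best_val = pso(lambda z: sum(zi * zi for zi in z), dim=5, bounds=(-10.0, 10.0))
print(best_val)
</syntaxhighlight>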
      
A series of standard implementations have been created by leading researchers, "intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community. Having a well-known, strictly-defined standard algorithm provides a valuable point of comparison which can be used throughout the field of research to better test new advances."<ref name=bratton2007/> The latest is Standard PSO 2011 (SPSO-2011).<ref name=Zambrano-Bigiarini2013/>
      
=== Hybridization ===
 
New and more sophisticated PSO variants are also continually being introduced in an attempt to improve optimization performance. There are certain trends in that research; one is to make a hybrid optimization method using PSO combined with other optimizers,<ref name=lovbjerg02lifecycle/><ref name=niknam10efficient/><ref name=zx03depso/> e.g., combined PSO with biogeography-based optimization,<ref>{{cite journal|last1=Zhang|first1=Y.|last2=Wang|first2=S.|title=Pathological Brain Detection in Magnetic Resonance Imaging Scanning by Wavelet Entropy and Hybridization of Biogeography-based Optimization and Particle Swarm Optimization|journal=Progress in Electromagnetics Research – Pier|date=2015|volume=152|pages=41–58|doi=10.2528/pier15040602|doi-access=free}}</ref> and the incorporation of an effective learning method.<ref name=zhan10OLPSO/>
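As a hedged illustration of the general hybrid pattern (not of any specific published method), the sketch below runs a compact PSO as the global phase and then refines the best position found with a classical local optimizer (Nelder-Mead from SciPy). The helper names, parameter values, and the sphere test function are assumptions made for the example.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    return float(np.dot(x, x))

def pso(f, dim, lo, hi, n=30, iters=100, w=0.73, c1=1.5, c2=1.5, seed=0):
    """Compact PSO used as the global phase of the hybrid (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    p = x.copy()
    p_val = np.array([f(xi) for xi in x])
    g = p[p_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(xi) for xi in x])
        improved = val < p_val
        p[improved], p_val[improved] = x[improved], val[improved]
        g = p[p_val.argmin()].copy()
    return g

# Hybrid scheme: PSO explores globally, then a local optimizer polishes its best point.
coarse = pso(sphere, dim=5, lo=-10.0, hi=10.0)
polished = minimize(sphere, coarse, method="Nelder-Mead")
print(polished.x, polished.fun)
</syntaxhighlight>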
      
=== Alleviate premature convergence ===
 
Another research trend is to try and alleviate premature convergence (that is, optimization stagnation), e.g. by reversing or perturbing the movement of the PSO particles.<ref name=evers09thesis/><ref name=lovbjerg02extending/><ref name=xinchao10perturbed/><ref name=xzy02dpso/> Another approach to dealing with premature convergence is the use of multiple swarms<ref>Cheung, N. J., Ding, X.-M., & Shen, H.-B. (2013). OptiFel: A Convergent Heterogeneous Particle Swarm Optimization Algorithm for Takagi-Sugeno Fuzzy Modeling, IEEE Transactions on Fuzzy Systems, {{DOI|10.1109/TFUZZ.2013.2278972}}</ref> ([[multi-swarm optimization]]). The multi-swarm approach can also be used to implement multi-objective optimization.<ref name=nobile2012 /> Finally, there are developments in adapting the behavioural parameters of PSO during optimization.<ref name=zhan09adaptive/><ref name=nobile2017/>
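One simple way to realise such a perturbation is sketched below: if the global best has not improved for a number of iterations, random noise is added to every velocity component. The function name, the stall threshold, and the noise scale are illustrative assumptions; the function would be called inside a PSO loop such as the one sketched earlier.

<syntaxhighlight lang="python">
import random

def perturb_if_stagnant(v, bounds, stalled_iters, stall_limit=20, scale=0.1):
    """Re-energise a stagnating swarm by adding random noise to every velocity component.
    `stalled_iters` counts iterations without improvement of the global best;
    `stall_limit` and `scale` are arbitrary illustrative values."""
    if stalled_iters < stall_limit:
        return v
    lo, hi = bounds
    span = scale * (hi - lo)
    return [[vd + random.uniform(-span, span) for vd in vi] for vi in v]

# Toy usage: a 3-particle, 2-dimensional swarm that has stalled for 25 iterations.
velocities = [[0.0, 0.1], [0.2, -0.1], [-0.3, 0.0]]
print(perturb_if_stagnant(velocities, bounds=(-10, 10), stalled_iters=25))
</syntaxhighlight>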
      
=== Simplifications ===
 
Another school of thought is that PSO should be simplified as much as possible without impairing its performance, a general concept often referred to as [[Occam's razor]]. Simplifying PSO was originally suggested by Kennedy<ref name=kennedy97particle/> and has been studied more extensively,<ref name=bratton08simplified/><ref name=pedersen08thesis/><ref name=pedersen08simplifying/><ref name=yang08nature/> where it appeared that optimization performance improved, the parameters became easier to tune, and results were more consistent across different optimization problems.
      
Another argument in favour of simplifying PSO is that [[metaheuristic]]s can only have their efficacy demonstrated [[empirical]]ly by doing computational experiments on a finite number of optimization problems. This means a metaheuristic such as PSO cannot be [[Program correctness|proven correct]], and this increases the risk of making errors in its description and implementation. A good example of this<ref name=tu04robust/> is a study that presented a promising variant of a [[genetic algorithm]] (another popular metaheuristic), which was later found to be defective: its search was strongly biased towards similar values for different dimensions in the search space, which happened to be the optimum of the benchmark problems considered. The bias was caused by a programming error and has since been fixed.<ref name=tu04corrections/>
      
Initialization of velocities may require extra inputs. The Bare Bones PSO variant<ref>{{Cite journal|last=Kennedy|first=James|date=2003|title=Bare Bones Particle Swarms|journal=Proceedings of the 2003 IEEE Swarm Intelligence Symposium|pages=80–87|doi=10.1109/SIS.2003.1202251|isbn=0-7803-7914-4}}</ref> was proposed in 2003 by James Kennedy and does not use velocity at all.
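A minimal sketch of a Bare-Bones-style update is given below, assuming the usual description of the variant: each coordinate of a particle's new position is drawn from a Gaussian centred midway between its personal best and the global best, with standard deviation equal to their distance in that coordinate, and no velocities are kept. The function name and the toy setup are illustrative assumptions.

<syntaxhighlight lang="python">
import random

def bare_bones_step(p, p_val, g, f):
    """One iteration of a Bare-Bones-style update (illustrative sketch): each coordinate
    of a particle's new position is sampled from a Gaussian centred between its personal
    best p[i] and the global best g, with standard deviation |p[i][d] - g[d]|.
    No velocities are stored or updated."""
    for i in range(len(p)):
        x = [random.gauss((pd + gd) / 2.0, abs(pd - gd)) for pd, gd in zip(p[i], g)]
        val = f(x)
        if val < p_val[i]:
            p[i], p_val[i] = x, val
    best = min(range(len(p)), key=lambda i: p_val[i])
    return p[best][:]

# Toy usage on the sphere function with three particles in two dimensions.
f = lambda z: sum(zi * zi for zi in z)
p = [[1.0, -2.0], [0.5, 0.5], [-1.5, 1.0]]
p_val = [f(pi) for pi in p]
g = p[min(range(len(p)), key=lambda i: p_val[i])][:]
for _ in range(100):
    g = bare_bones_step(p, p_val, g, f)
print(g, f(g))
</syntaxhighlight>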
      
Another simpler variant is the accelerated particle swarm optimization (APSO),<ref>X. S. Yang, S. Deb and S. Fong, [https://arxiv.org/pdf/1203.6577 Accelerated particle swarm optimization and support vector machine for business optimization and applications], NDT 2011, Springer CCIS 136, pp. 53-66 (2011).</ref> which also does not need to use velocity and can speed up the convergence in many applications. A simple demo code of APSO is available.<ref>{{Cite web | url=http://www.mathworks.com/matlabcentral/fileexchange/?term=APSO | title=Search Results: APSO - File Exchange - MATLAB Central}}</ref>
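The sketch below follows the form commonly given for APSO: particles keep no velocity, each position is pulled directly toward the global best and perturbed by random noise whose amplitude decays over the iterations. The function name, parameter values, and noise scaling are illustrative assumptions rather than a reference implementation.

<syntaxhighlight lang="python">
import random

def apso(f, dim, bounds, n=20, iters=200, alpha=0.2, beta=0.5, gamma=0.97):
    """Sketch of an accelerated-PSO-style loop: no velocities; each particle is pulled
    toward the global best and perturbed by Gaussian noise whose amplitude alpha decays
    by the factor gamma each iteration. Parameter values are illustrative, not prescribed."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    g = min(x, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            x[i] = [min(max((1 - beta) * xd + beta * gd
                            + alpha * (hi - lo) * random.gauss(0, 1), lo), hi)
                    for xd, gd in zip(x[i], g)]
        g = min(x + [g], key=f)[:]   # keep the best point seen so far
        alpha *= gamma               # shrink the noise amplitude
    return g

# Toy usage on the sphere function.
g = apso(lambda z: sum(zi * zi for zi in z), dim=5, bounds=(-10.0, 10.0))
print(g)
</syntaxhighlight>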
      
===Multi-objective optimization===
 
PSO has also been applied to [[multi-objective optimization|multi-objective problems]],<ref name=parsopoulos02particle/><ref name=coellocoello02MOPSO/><ref name=MASON2017188/> in which the objective function comparison takes [[pareto efficiency|Pareto dominance]] into account when moving the PSO particles, and non-dominated solutions are stored so as to approximate the Pareto front.
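A minimal sketch of the two ingredients mentioned above, a Pareto-dominance test and an external archive of non-dominated solutions, is given below; the function names and the two toy objectives are illustrative assumptions.

<syntaxhighlight lang="python">
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def update_archive(archive, candidate):
    """Maintain an external archive of non-dominated (position, objectives) pairs,
    as a multi-objective PSO would do to approximate the Pareto front."""
    _, c_obj = candidate
    if any(dominates(a_obj, c_obj) for _, a_obj in archive):
        return archive                               # candidate is dominated: discard it
    archive = [(a_pos, a_obj) for a_pos, a_obj in archive if not dominates(c_obj, a_obj)]
    archive.append(candidate)
    return archive

# Toy usage with two objectives f1 = x^2 and f2 = (x - 2)^2 evaluated at a few positions.
archive = []
for pos in ([0.0], [1.0], [2.0], [3.0]):
    objs = (pos[0] ** 2, (pos[0] - 2) ** 2)
    archive = update_archive(archive, (pos, objs))
print(archive)   # only mutually non-dominated entries remain
</syntaxhighlight>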
      
===Binary, discrete, and combinatorial===
 
As the PSO equations given above work on real numbers, a commonly used method to solve discrete problems is to map the discrete search space to a continuous domain, to apply a classical PSO, and then to demap the result.  Such a mapping can be very simple (for example by just using rounded values) or more sophisticated.<ref>Roy, R., Dehuri, S., & Cho, S. B. (2012). [http://sclab.yonsei.ac.kr/publications/Papers/IJ/A%20Novel%20Particle%20Swarm%20Optimization%20Algorithm%20for%20Multi-Objective%20Combinatorial%20Optimization%20Problem.pdf A Novel Particle Swarm Optimization Algorithm for Multi-Objective Combinatorial Optimization Problem]. 'International Journal of Applied Metaheuristic Computing (IJAMC)', 2(4), 41-57</ref>
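A sketch of the simplest such mapping, rounding a continuous particle position to the nearest integer point before evaluating the discrete objective, is given below; the function names and the toy objective are illustrative assumptions.

<syntaxhighlight lang="python">
def evaluate_discrete(f_discrete, x_continuous):
    """Map a continuous PSO position to the discrete search space by rounding,
    evaluate the discrete objective there, and return both the value and the
    decoded point. Rounding is the simplest mapping mentioned in the text."""
    x_discrete = [int(round(xd)) for xd in x_continuous]
    return f_discrete(x_discrete), x_discrete

# Toy usage: an integer objective evaluated at a continuous particle position.
f = lambda x: sum((xi - 3) ** 2 for xi in x)       # discrete objective defined on integers
value, decoded = evaluate_discrete(f, [2.6, 3.4, 1.2])
print(decoded, value)                              # [3, 3, 1] -> 4
</syntaxhighlight>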
      
However, it can be noted that the equations of movement make use of operators that perform four actions:
      
* computing the difference of two positions (the result is a velocity, or more precisely a displacement)
* multiplying a velocity by a numeric coefficient
* adding two velocities
* applying a velocity to a position
 
Usually a position and a velocity are represented by ''n'' real numbers, and  these operators are simply -, *, +, and again +. But all these mathematical objects can be defined in a completely different way, in order to cope with binary problems (or more generally discrete ones), or even combinatorial ones.<ref>Kennedy, J. & Eberhart, R. C. (1997). [http://ahmetcevahircinar.com.tr/wp-content/uploads/2017/02/A_discrete_binary_version_of_the_particle_swarm_algorithm.pdf A discrete binary version of the particle swarm algorithm], Conference on Systems, Man, and Cybernetics, Piscataway, NJ: IEEE Service Center, pp. 4104-4109</ref><ref>Clerc, M. (2004). [https://link.springer.com/chapter/10.1007/978-3-540-39930-8_8 Discrete Particle Swarm Optimization, illustrated by the Traveling Salesman Problem], New Optimization Techniques in Engineering, Springer, pp. 219-239</ref><ref>Clerc, M. (2005). Binary Particle Swarm Optimisers: toolbox, derivations, and mathematical insights, [http://hal.archives-ouvertes.fr/hal-00122809/en/ Open Archive HAL]</ref><ref>{{cite journal|doi=10.1016/j.amc.2007.04.096|title=A combinatorial particle swarm optimization for solving multi-mode resource-constrained project scheduling problems|journal=Applied Mathematics and Computation|volume=195|pages=299–308|year=2008|last1=Jarboui|first1=B.|last2=Damak|first2=N.|last3=Siarry|first3=P.|last4=Rebai|first4=A.}}</ref> One approach is to redefine the operators based on sets.<ref name=Chen10SPSO />
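As one hedged example of such a redefinition, the sketch below follows the sigmoid-based binary scheme of the Kennedy and Eberhart reference above: velocities are updated as in the continuous algorithm, but each position bit is resampled with probability given by a sigmoid of the corresponding velocity. The function name, parameter values, and the toy objective are illustrative assumptions.

<syntaxhighlight lang="python">
import math
import random

def binary_pso_step(x, v, p, p_val, g, f, w=0.9, c1=2.0, c2=2.0, v_max=4.0):
    """One iteration of a binary-PSO-style update (sketch): velocities are updated as in
    the continuous algorithm, but each position bit is resampled with probability
    sigmoid(v). Parameter values are illustrative."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    for i in range(len(x)):
        for d in range(len(x[i])):
            r1, r2 = random.random(), random.random()
            v[i][d] = w * v[i][d] + c1 * r1 * (p[i][d] - x[i][d]) + c2 * r2 * (g[d] - x[i][d])
            v[i][d] = max(-v_max, min(v_max, v[i][d]))   # clamp to keep probabilities away from 0/1
            x[i][d] = 1 if random.random() < sigmoid(v[i][d]) else 0
        val = f(x[i])
        if val < p_val[i]:
            p[i], p_val[i] = x[i][:], val
    best = min(range(len(p)), key=lambda i: p_val[i])
    return p[best][:]

# Toy usage: maximise the number of ones in an 8-bit string (phrased as minimisation).
f = lambda bits: -sum(bits)
x = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
v = [[0.0] * 8 for _ in range(6)]
p = [xi[:] for xi in x]
p_val = [f(pi) for pi in p]
g = p[min(range(6), key=lambda i: p_val[i])][:]
for _ in range(50):
    g = binary_pso_step(x, v, p, p_val, g, f)
print(g)
</syntaxhighlight>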
      
== See also ==
 