[[Image:dna.jpg|thumb|200px|right]]
<strong>Genetic algorithms</strong> are based on the classic view of a chromosome as a string of genes.  R.A. Fisher used this view to found mathematical genetics, providing mathematical formulas specifying the rate at which particular genes would spread through a population [[#fisher1958|(Fisher, 1958)]].  Key elements of Fisher’s formulation are:
* a specified set of alternatives (alleles) for each gene, thereby specifying the allowable strings of genes (the possible chromosomes),
* a generation-by-generation view of evolution where, at each stage, a population of individuals produces a set of offspring that constitutes the next generation,
* a fitness function that assigns to each string of alleles the number of offspring the individual carrying that chromosome will contribute to the next generation, and
* a set of genetic operators, particularly mutation in Fisher’s formulation, that modify the offspring of an individual so that the next generation differs from the current generation.
Natural selection, in this formulation, can be thought of as a procedure for searching through the set of possible individuals, the search space, to find individuals of progressively higher fitness.  Even when a natural population consists of a single species – say the current generation of humans – there is considerable variation within that population.  These variants constitute samples of the search space.
== Definition ==
A genetic algorithm (GA) is a generalized, computer-executable version of Fisher’s formulation [[#holland1995|(Holland, 1995)]]. The generalizations consist of:
* concern with interaction of genes on a chromosome, rather than assuming alleles act independently of each other, and
* enlargement of the set of genetic operators to include other well-known genetic operators such as crossing-over (recombination) and inversion.
Under the first generalization, the fitness function becomes a complex nonlinear function that usually cannot be usefully approximated by summing up the effects of individual genes.  The second generalization puts emphasis on genetic mechanisms, such as crossover, that operate regularly on chromosomes.  Crossover takes place in every mating whereas mutation of a given gene typically occurs in less than 1 in a million individuals.  Crossover is the central reason that mammals produce offspring exhibiting a mix of their parents’ characteristics – consider the human population for a vivid, familiar example – and crossover makes possible the artificial cross-breeding of selected animals and plants to produce superior varieties.  Clearly crossover’s high frequency gives it an important role in evolution, and it has an essential role in the operation of GAs, as outlined below.
Crossover in its simplest form (one-point crossover) is easily defined: Two chromosomes are lined up, a point along the chromosome is randomly selected, then the pieces to the left of that point are exchanged between the chromosomes, producing a pair of offspring chromosomes.
[[Image:CrossOver.png]]
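One-point crossover is easily sketched in code. The following is a minimal illustration (the function name and the use of Python strings as chromosomes are choices made here, not part of the original formulation):

```python
import random

def one_point_crossover(parent_a, parent_b, rng=random):
    """Exchange the pieces to the left of a random cut point
    between two equal-length chromosomes."""
    assert len(parent_a) == len(parent_b)
    # Choose a cut point strictly inside the string.
    point = rng.randrange(1, len(parent_a))
    child_a = parent_b[:point] + parent_a[point:]
    child_b = parent_a[:point] + parent_b[point:]
    return child_a, child_b
```

For example, crossing "11111" with "00000" at cut point 2 yields the offspring "00111" and "11000".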
This simple version of crossover is a good approximation to what actually takes place in mating organisms.  Under one-point crossover, alleles that are close together on the chromosome are likely to be passed as a unit to one of the offspring, while alleles that are far apart are likely to be separated by crossover, with one allele appearing in one offspring and the other allele appearing in the other offspring.  For example, given a chromosome with 1001 genes, there is only one chance in a thousand that an adjacent pair of alleles will be separated in the offspring, while alleles at opposite ends of the string will always be separated.  In standard genetic terminology, this phenomenon is called linkage.  By observing the frequency of separation of allele-determined characteristics in successive generations, linkage can be determined experimentally.  Indeed, assuming one-point crossover made gene mapping possible long before we had any knowledge of DNA – in 1944, using experimentally determined linkage, the Carnegie Institution of Washington published a large volume recording the order of genes in the fruit fly [[#lindsley1944|(Lindsley & Grell, 1944)]].
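The linkage figures above can be checked directly. Under one-point crossover with a uniformly chosen cut point, the probability that two loci end up in different offspring is simply the fraction of cut points that fall between them (a small sketch; the function name is illustrative):

```python
def separation_probability(n_genes, locus_i, locus_j):
    """Probability that one-point crossover separates the alleles at
    positions locus_i < locus_j (0-indexed) on a chromosome of n_genes
    genes, with the cut point uniform over the n_genes - 1 gaps."""
    cut_points = n_genes - 1
    separating_cuts = locus_j - locus_i  # cuts falling between the two loci
    return separating_cuts / cut_points
```

With 1001 genes, adjacent loci are separated with probability 1/1000, while the two end loci are always separated, matching the figures in the text.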
The genetic algorithm, following Fisher’s formulation, uses the differing fitness of variants in the current generation to propagate improvements to the next generation, but the GA places strong emphasis on the variants produced by crossover.  The basic GA subroutine, which produces the next generation from the current one, executes the following steps (where, for simplicity, it is assumed that each individual is specified by a single chromosome):
# Start with a population of <math>N</math> individual strings of alleles (perhaps generated at random).
# Select two individuals at random from the current population, biasing selection toward individuals with higher fitness.
# Use crossover (with occasional use of mutation) to produce two new individuals, which are assigned to the next generation.
# Repeat steps (2) and (3) <math>N/2</math> times to produce a new generation of <math>N</math> individuals.
# Return to step (2) to produce the next generation.
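The steps above can be sketched as a short subroutine. This is a minimal illustration rather than a canonical implementation: binary strings stand in for chromosomes, fitness-proportionate sampling (with replacement) stands in for the fitness-biased selection step, and the mutation rate is an arbitrary small value:

```python
import random

def next_generation(population, fitness, rng, p_mut=0.001):
    """Produce the next generation from the current one."""
    weights = [fitness(ind) for ind in population]
    new_pop = []
    for _ in range(len(population) // 2):
        # Select two parents, biased toward higher fitness.
        parent_a, parent_b = rng.choices(population, weights=weights, k=2)
        # One-point crossover ...
        point = rng.randrange(1, len(parent_a))
        child_a = parent_b[:point] + parent_a[point:]
        child_b = parent_a[:point] + parent_b[point:]
        # ... with occasional mutation of individual alleles.
        mutate = lambda s: "".join(
            rng.choice("01") if rng.random() < p_mut else c for c in s)
        new_pop.extend([mutate(child_a), mutate(child_b)])
    return new_pop
```

Iterating this subroutine, with each output population fed back in as the next input, realizes the generation-by-generation loop of steps (2)–(5).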
Of course, there are many ways to modify these steps, but most characteristics of a GA are already exhibited by this basic program.
== Crossover ==
Crossover introduces a considerable complication in the study of successive generations:  highly effective individuals in the parent generation (say, an “Einstein”) will not reappear in the next generation, because only a subset of the parent’s alleles is passed on to any given offspring.  This raises an important question:  what is passed from generation to generation if the parents’ specific chromosomes are never passed on?  An answer to this question requires a prediction of the generation-by-generation spread of ''clusters'' of alleles, requiring a substantial generalization of Fisher’s fundamental theorem.  The schema theorem is one such generalization; it deals with arbitrary allele clusters called ''schemas''.  A ''schema'' is specified using the symbol * (‘don’t care’) to mark places along the chromosome not belonging to the cluster.  For example, if there are two distinct alleles for each position, call them 1 and 0, then the cluster consisting of allele 1 at position 2, allele 0 at position 4, and allele 0 at position 5 is designated by the string *1*00**...*.  Let <math>N(s,t)</math> be the number of chromosomes carrying schema <math>s</math> in generation <math>t</math>.  The schema theorem specifies the (expected) number <math>N(s,t+1)</math> of chromosomes carrying schema <math>s</math> in the next generation.  A simplified version has the following form:
<math>N(s,t+1) = u(s,t)[1-e]N(s,t)</math>
where <math>u(s,t)</math> is the average fitness of the chromosomes carrying schema <math>s</math> at time <math>t</math> (the observed average fitness), and <math>e</math> is the overall probability (usually quite small) that the cluster <math>s</math> will be destroyed (or created) by mutation or crossover.  (Note that <math>e</math> does become large if the alleles in the cluster are spread over a considerable part of the chromosome.)  This formula for <math>N(s,t+1)</math> can be restated in terms of probabilities (proportions) <math>P(s,t)</math>, a more typical form for mathematical genetics, by noting that <math>P(s,t) = N(s,t)/N(t)</math>, where <math>N(t)</math> is the size of the population at time <math>t</math>.
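The quantity <math>N(s,t)</math> is straightforward to compute for a concrete population. The sketch below uses binary strings and * as the don’t-care symbol, as in the example above (function names are illustrative):

```python
def carries(chromosome, schema):
    """True if the chromosome is an instance of the schema
    ('*' matches any allele)."""
    return len(chromosome) == len(schema) and all(
        s == "*" or s == c for s, c in zip(schema, chromosome))

def schema_count(population, schema):
    """N(s, t): the number of chromosomes in the population carrying s."""
    return sum(carries(ind, schema) for ind in population)
```

Tracking `schema_count` across generations of a running GA gives the observed trajectory that the schema theorem predicts in expectation.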
The schema theorem shows that ''every'' cluster of alleles present in the population increases or decreases its proportion in the population at a rate determined by the observed average fitness of its carriers.  In particular, schemas consistently associated with strings of above-average fitness spread rapidly through the population.  As a result, crossover regularly tries out new combinations of these above-average schemas; they become building blocks for further attempts at solution. Though a GA only directly processes <math>N(t)</math> strings in a generation, it effectively processes the much larger number of schemas carried in the population.  For example, the number of schemas carried by a single string of length 40 is <math>2^{40} \approx 10^{12}\ ,</math> so a GA processing a few dozen strings of length 40 effectively processes trillions of schemas.  For problems in which schemas capture regularities in the search space, this is a tremendous speedup.
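The count of schemas carried by a single string is easy to verify for small lengths: each position either keeps its allele or is replaced by *, giving <math>2^L</math> schemas for a string of length <math>L</math>. A small sketch (the function name is illustrative):

```python
from itertools import product

def schemas_of(chromosome):
    """All schemas a single chromosome carries: each position is either
    its actual allele or the don't-care symbol '*'."""
    return {"".join(s) for s in product(*[(c, "*") for c in chromosome])}
```

A string of length 4 carries <math>2^4 = 16</math> schemas; at length 40 the count is <math>2^{40}</math>, about <math>10^{12}</math>.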
== Uses ==
Genetic algorithms are routinely used to find good solutions for problems that do not yield to standard techniques such as gradient ascent (“hill-climbing”) or additive approximations (“the whole equals the sum of the parts”) [[#mitchell2009|(Mitchell, 2009)]]. Some typical problems to which the GA has been applied are control of pipelines, jet engine design, scheduling, protein folding, machine learning, modeling language acquisition and evolution, and modeling complex adaptive systems (such as markets and ecosystems).  To use the GA, the search space must be represented as strings over some fixed alphabet, much as biological chromosomes are represented by strings over 4 nucleotides.  The strings can represent anything from biological organisms and rules for processing signals to agents in a complex adaptive system.  The GA is initialized with a population of these strings, which may simply be selected at random from the search space, or the initial population may be “salted” with strings picked out using prior knowledge of the problem.  The GA then processes the strings, and successive generations uncover schemas of above-average observed fitness.  When above-average, well-linked schemas are regularly associated with improvements, the GA rapidly exploits those improvements.
== References ==
*{{Bibitem article 1|The Genetical Theory of Natural Selection|Oxford University Press||1958||R.A.|Fisher|label=fisher1958}}
*{{Bibitem book 1|Hidden Order: How Adaptation Builds Complexity|Addison-Wesley|Redwood City, CA.|1995|J. H.|Holland|label=holland1995}}
*{{Bibitem article 2|Genetic Variations of Drosophila Melanogaster|Carnegie Institution of Washington.||1944||D.L.|Lindsley|E.H.|Grell|label=lindsley1944}}
*{{Bibitem book 1|Complexity: A Guided Tour.|Oxford University Press|New York, NY|2009|M.|Mitchell|label=mitchell2009}}
[[Category:Computational Intelligence]]
[[Category:Evolutionary Computation]]
进化通常从一个随机生成的群体开始,是一个迭代过程,每次迭代中的群体称为一代。在每一代中,每一个个体的适合度都会被评估;'''<font color="#ff8000">适合 fitness</font>'''度通常是解决最佳化问题中目标函数的值。适合度更高的个体会被随机地从当前的种群中选出,每个个体的基因组被修改(重组和可能的随机突变) ,形成新一代。然后在算法的下一次迭代中使用新一代候选解。一般情况下,当迭代次数达到最大的预设值,或者群体达到了令人满意的适应水平时,算法就会终止。
  −
 
  −
 
  −
 
  −
A typical genetic algorithm requires:
  −
 
  −
A typical genetic algorithm requires:
  −
 
  −
一个典型的遗传算法需要:
  −
 
  −
# a [[genetic representation]] of the solution domain,
  −
# a [[fitness function]] to evaluate the solution domain.
  −
 
  −
# 解域的遗传表示,
  −
# 一个适应度函数来评估解域。
  −
 
  −
A standard representation of each candidate solution is as an [[bit array|array of bits]].{{sfn|Whitley|1994|p=66}} Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple [[Crossover (genetic algorithm)|crossover]] operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in [[genetic programming]] and graph-form representations are explored in [[evolutionary programming]]; a mix of both linear chromosomes and trees is explored in [[gene expression programming]].
  −
 
  −
A standard representation of each candidate solution is as an array of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.
  −
 
  −
每个候选解的标准表示形式是一组二进制位,即'0101'这样的字符串。这种遗传表示方法最方便的主要特性是,由于大小固定,这些解的各个部分很容易对齐,这有利于简单的'''交叉 crossover'''操作。可变长度的表示方方法也可以使用,但交叉实现在这种情况下更复杂。在遗传编程中探讨了树形表示,在演化编程中探讨了图形表示,在基因表达式编程中探讨了线性染色体和树的混合。
  −
 
  −
 
  −
 
  −
Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.
  −
 
  −
Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.
  −
 
  −
一旦遗传表示和适应度函数被定义,一个遗传算法就会初始化一个解的种群,然后通过变异、交叉、反演和选择等操作的迭代地改进这些候选解。
  −
 
  −
 
  −
 
  −
==== 初始化====
  −
 
  −
The population size depends on the nature of the problem, but typically contains several hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the [[Feasible region|search space]]). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.
  −
 
  −
The population size depends on the nature of the problem, but typically contains several hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.
  −
 
  −
种群大小取决于问题的性质,但通常包含几百个或几千个可能的解。通常,初始种群是随机生成的,允许所有可能的解(搜索空间)。有时候,解可能会在有可能找到最佳解决方案的地区被“播种”。
  −
 
  −
 
  −
 
  −
==== 选择 ====
  −
 
  −
{{Main|Selection (genetic algorithm)}}
  −
 
  −
During each successive generation, a portion of the existing population is [[selection (genetic algorithm)|selected]] to breed a new generation. Individual solutions are selected through a ''fitness-based'' process, where [[Fitness (biology)|fitter]] solutions (as measured by a [[fitness function]]) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.
  −
 
  −
During each successive generation, a portion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.
  −
 
  −
在每一个代中,现有种群的一部分 个体被选中来繁殖新的一代。个体的解是通过基于适应度的过程来选择的,在这个过程中,通常更有可能选择适应度较高的解(用适应度函数来衡量)。某些选择方法评价每个解的适合度,并优先选择最佳解。因为这个过程可能非常耗时,所以还有其他方法,只对总体的随机样本进行评分,。
  −
 
  −
 
  −
 
  −
The fitness function is defined over the genetic representation and measures the ''quality'' of the represented solution. The fitness function is always problem dependent. For instance, in the [[knapsack problem]] one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The ''fitness'' of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise.
  −
 
  −
The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise.
  −
 
  −
适应度函数的定义要基于遗传表示的方法,并能够度量解的质量。适应度函数总是依赖于问题。例如,在背包问题中,人们希望最大化可以放入某个固定容量背包的物品的总价值。解决方案的表示形式可以是一个二进制串,其中每个位表示一个不同的对象,位(0或1)的值表示对象是否在背包中。当然并非所有这样的个体都是有效的,因为对象的大小可能超过背包的容量。解的适应性是背包中所有对象的值之和。如果表示无有效(即个体表示的解对应的重量超过了背包最大承载),则为0。
  −
 
  −
 
  −
 
  −
In some problems, it is hard or even impossible to define the fitness expression; in these cases, a [[Computer simulation|simulation]] may be used to determine the fitness function value of a [[phenotype]] (e.g. [[computational fluid dynamics]] is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even [[Interactive evolutionary computation|interactive genetic algorithms]] are used.
  −
 
  −
In some problems, it is hard or even impossible to define the fitness expression; in these cases, a simulation may be used to determine the fitness function value of a phenotype (e.g. computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms are used.
  −
 
  −
在某些问题中,适应度函数很难甚至不可能定义,在这些情况下,可以用模拟来确定个体的适应度(例如,计算流体力学来确定车辆的空气阻力,被编码的是车辆的形状) ,甚至交互式遗传算法也会被用上。
  −
 
  −
 
  −
 
  −
==== 遗传算子 ====
  −
 
  −
{{Main|Crossover (genetic algorithm)|Mutation (genetic algorithm)}}
  −
 
  −
 
  −
 
  −
The next step is to generate a second generation population of solutions from those selected through a combination of [[genetic operator]]s: [[crossover (genetic algorithm)|crossover]] (also called recombination), and [[mutation (genetic algorithm)|mutation]].
  −
 
  −
The next step is to generate a second generation population of solutions from those selected through a combination of genetic operators: crossover (also called recombination), and mutation.
  −
 
  −
下一步是通过遗传操作的组合: 交叉(也称为重组)和变异,从那些被选择出来的解中生成第二代种群。
  −
 
  −
 
  −
 
  −
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated.
  −
 
  −
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated.
  −
 
  −
对于生产的每一个新的个体,都会从以前选择出来的个体中选择一对“亲本”进行配对。通过使用上述交叉和变异方法生成“子”个体,创建了一个新的候选解,该解通常具有其“父母”的许多特征。不选选择旧个体配对生成新个体,这一过程将继续下去,直到生成具有适当规模的新群体为止。
  −
 
  −
 
  −
Although reproduction methods that are based on the use of two parents are more "biology inspired", some research<ref>Eiben, A. E. et al (1994). "Genetic algorithms with multi-parent recombination". PPSN III: Proceedings of the International Conference on Evolutionary Computation. The Third Conference on Parallel Problem Solving from Nature: 78&ndash;87. {{ISBN|3-540-58484-6}}.</ref><ref>Ting, Chuan-Kang (2005). "On the Mean Convergence Time of Multi-parent Genetic Algorithms Without Selection". Advances in Artificial Life: 403&ndash;412. {{ISBN|978-3-540-28848-0}}.</ref> suggests that more than two "parents" generate higher quality chromosomes.
  −
 
  −
Although reproduction methods that are based on the use of two parents are more "biology inspired", some research suggests that more than two "parents" generate higher quality chromosomes.
  −
 
  −
虽然基于双亲的繁殖方法更具有“生物学启发性” ,但一些研究表明<ref>Eiben, A. E. et al (1994). "Genetic algorithms with multi-parent recombination". PPSN III: Proceedings of the International Conference on Evolutionary Computation. The Third Conference on Parallel Problem Solving from Nature: 78&ndash;87. {{ISBN|3-540-58484-6}}.</ref><ref>Ting, Chuan-Kang (2005). "On the Mean Convergence Time of Multi-parent Genetic Algorithms Without Selection". Advances in Artificial Life: 403&ndash;412. {{ISBN|978-3-540-28848-0}}.</ref> ,超过两个的“双亲”可以产生更高质量的染色体。
  −
 
  −
 
  −
 
  −
These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.
  −
 
  −
These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.
  −
 
  −
这些过程最终导致下一代染色体群体不同于最初的一代。一般来说,这个过程会增加种群的平均适应度,因为只有第一代中最好的生物体被选中进行繁殖,同时还有一小部分不太适应的解。这些不太适合的解确保了父母基因库中的遗传多样性,从而确保了下一代子女的遗传多样性。
  −
 
  −
 
  −
 
  −
Opinion is divided over the importance of crossover versus mutation. There are many references in [[David B. Fogel|Fogel]] (2006) that support the importance of mutation-based search.
  −
 
  −
Opinion is divided over the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based search.
  −
 
  −
关于交叉与变异的重要性,人们意见不一。Fogel (2006)中有许多参考文献支持基于变异的搜索的重要性。
  −
 
  −
 
  −
 
  −
Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.{{citation needed|date=November 2019}}
  −
 
  −
Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.
  −
 
  −
虽然交叉和变异算子是遗传算法的主要算子,但是在遗传算法中也可以使用其他算子,如重新组合、定殖-灭绝或迁移。
  −
 
  −
 
  −
 
  −
It is worth tuning parameters such as the [[Mutation (genetic algorithm)|mutation]] probability, [[Crossover (genetic algorithm)|crossover]] probability and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead to [[genetic drift]] (which is non-[[Ergodicity|ergodic]] in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless [[#Elitism|elitist selection]] is employed. An adequate population size ensures sufficient genetic diversity for the problem at hand, but can lead to a waste of computational resources if set to a value larger than required.
  −
 
  −
It is worth tuning parameters such as the mutation probability, crossover probability and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless elitist selection is employed. An adequate population size ensures sufficient genetic diversity for the problem at hand, but can lead to a waste of computational resources if set to a value larger than required.
  −
 
  −
遗传算法种有一些需要调优的参数,如变异概率,交叉概率和种群大小,以找到适合特定问题类的合理设置。一个非常小的突变率可能会导致遗传漂变(在自然界中是非遍历性的)。过高的重组率可能导致遗传算法的早熟收敛。如果突变率过高,可能会导致失去好的解决方案,除非每一代保留最优的个体,不对其使用变异算子。适当的种群规模可以确保目前问题的足够遗传多样性,但如果设置的值大于所需值,则可能导致计算资源的浪费。
  −
 
  −
 
  −
 
  −
==== 启发式 ====
  −
 
  −
In addition to the main operators above, other [[heuristic]]s may be employed to make the calculation faster or more robust. The ''speciation'' heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature [[convergence (evolutionary computing)|convergence]] to a less optimal solution.<ref>{{Cite book|title=Handbook of Evolutionary Computation|last1=Deb|first1=Kalyanmoy|last2=Spears|first2=William M.|s2cid=3547258|publisher=Institute of Physics Publishing|year=1997|chapter=C6.2: Speciation methods}}</ref><ref>{{Cite book|title=Handbook of Natural Computing|last=Shir|first=Ofer M.|date=2012|publisher=Springer Berlin Heidelberg|isbn=9783540929093|editor-last=Rozenberg|editor-first=Grzegorz|pages=1035–1069|language=en|chapter=Niching in Evolutionary Algorithms|doi=10.1007/978-3-540-92910-9_32|editor-last2=Bäck|editor-first2=Thomas|editor-last3=Kok|editor-first3=Joost N.}}</ref>
==== Termination ====
This generational process is repeated until a termination condition has been reached. Common terminating conditions are:
* A solution is found that satisfies minimum criteria
* Fixed number of generations reached
* Allocated budget (computation time/money) reached
* The highest-ranking solution's fitness has reached a plateau, such that successive iterations no longer produce better results
* Manual inspection
* Combinations of the above
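
Taken together, these conditions amount to one check per generation. The sketch below is a generic Python loop over hypothetical `evolve` and `evaluate_best` callables; both names and the default thresholds are placeholders, not part of any standard API.

```python
import time

def run_ga(evolve, evaluate_best, max_generations=200,
           time_budget_s=60.0, target_fitness=None, stall_limit=25):
    # Generic generational loop illustrating the termination
    # conditions listed above.
    start, best, stall = time.time(), float("-inf"), 0
    for generation in range(max_generations):
        evolve()                          # advance one generation
        fitness = evaluate_best()
        if target_fitness is not None and fitness >= target_fitness:
            return "minimum criteria met", generation
        if time.time() - start > time_budget_s:
            return "budget exhausted", generation
        stall = stall + 1 if fitness <= best else 0   # plateau detection
        best = max(best, fitness)
        if stall >= stall_limit:
            return "plateau", generation
    return "generation limit", max_generations        # fixed generation count
```

Manual inspection is the one condition that cannot be automated; in practice it is approximated by logging the best individual each generation.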
== The building block hypothesis ==
Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:
# A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low order, low defining-length [[Schema (genetic algorithms)|schemata]] with above average fitness.
# A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.
Goldberg describes the heuristic as follows:
:"Short, low order, and highly fit schemata are sampled, [[crossover (genetic algorithm)|recombined]] [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings.
:"Because highly fit schemata of low defining length and low order play such an important role in the action of genetic algorithms, we have already given them a special name: building blocks. Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seek near optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks."{{sfn|Goldberg|1989|p=41}}
Despite the lack of consensus regarding the validity of the building-block hypothesis, it has been consistently evaluated and used as reference throughout the years. Many [[estimation of distribution algorithm]]s, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold.<ref>{{cite book|last1=Harik|first1=Georges R.|last2=Lobo|first2=Fernando G.|last3=Sastry|first3=Kumara|title=Linkage Learning via Probabilistic Modeling in the Extended Compact Genetic Algorithm (ECGA)|journal=Scalable Optimization Via Probabilistic Modeling|volume=33|date=1 January 2006|pages=39–61|doi=10.1007/978-3-540-34954-9_3|language=en|series=Studies in Computational Intelligence|isbn=978-3-540-34953-2}}</ref><ref>{{cite book|last1=Pelikan|first1=Martin|last2=Goldberg|first2=David E.|last3=Cantú-Paz|first3=Erick|title=BOA: The Bayesian Optimization Algorithm|journal=Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation - Volume 1|date=1 January 1999|pages=525–532|url=http://dl.acm.org/citation.cfm?id=2933973|isbn=9781558606111|series=Gecco'99}}</ref> Although good results have been reported for some classes of problems, skepticism concerning the generality and/or practicality of the building-block hypothesis as an explanation for GAs efficiency still remains. 
Indeed, there is a reasonable amount of work that attempts to understand its limitations from the perspective of estimation of distribution algorithms.<ref>{{cite book|last1=Coffin|first1=David|last2=Smith|first2=Robert E.|title=Linkage Learning in Estimation of Distribution Algorithms|journal=Linkage in Evolutionary Computation|volume=157|date=1 January 2008|pages=141–156|doi=10.1007/978-3-540-85068-7_7|language=en|series=Studies in Computational Intelligence|isbn=978-3-540-85067-0}}</ref><ref>{{cite journal|last1=Echegoyen|first1=Carlos|last2=Mendiburu|first2=Alexander|last3=Santana|first3=Roberto|last4=Lozano|first4=Jose A.|title=On the Taxonomy of Optimization Problems Under Estimation of Distribution Algorithms|journal=Evolutionary Computation|date=8 November 2012|volume=21|issue=3|pages=471–495|doi=10.1162/EVCO_a_00095|pmid=23136917|s2cid=26585053|issn=1063-6560}}</ref><ref>{{cite book|last1=Sadowski|first1=Krzysztof L.|last2=Bosman|first2=Peter A.N.|last3=Thierens|first3=Dirk|title=On the Usefulness of Linkage Processing for Solving MAX-SAT|journal=Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation|date=1 January 2013|pages=853–860|doi=10.1145/2463372.2463474|isbn=9781450319638|series=Gecco '13|hdl=1874/290291|s2cid=9986768}}</ref>
== Limitations ==
There are limitations of the use of a genetic algorithm compared to alternative optimization algorithms:
* Repeated [[fitness function]] evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive [[fitness function]] evaluations. In real world problems such as structural optimization problems, a single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such types of problem. In this case, it may be necessary to forgo an exact evaluation and use an [[fitness approximation|approximated fitness]] that is computationally efficient. It is apparent that amalgamation of [[fitness approximation|approximate models]] may be one of the most promising approaches to convincingly use GA to solve complex real life problems.
* Genetic algorithms do not scale well with complexity. That is, where the number of elements which are exposed to mutation is large there is often an exponential increase in search space size. This makes it extremely difficult to use the technique on problems such as designing an engine, a house or a plane. In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts.
* The "better" solution is only in comparison to other solutions. As a result, the stop criterion is not clear in every problem.
* In many problems, GAs have a tendency to converge towards [[local optimum|local optima]] or even arbitrary points rather than the [[global optimum]] of the problem. This means that it does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the [[fitness landscape]]: certain problems may provide an easy ascent towards a global optimum, others may make it easier for the function to find the local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or by using selection techniques that maintain a diverse population of solutions,<ref>{{cite journal|last1=Taherdangkoo|first1=Mohammad|last2=Paziresh |first2=Mahsa |last3=Yazdi |first3=Mehran |last4= Bagheri |first4=Mohammad Hadi |title=An efficient algorithm for function optimization: modified stem cells algorithm|journal=Central European Journal of Engineering|date=19 November 2012|volume=3|issue=1|pages=36–50|doi=10.2478/s13531-012-0047-8|doi-access=free}}</ref> although the [[No free lunch in search and optimization|No Free Lunch theorem]]<ref>Wolpert, D.H., Macready, W.G., 1995. No Free Lunch Theorems for Optimisation. Santa Fe Institute, SFI-TR-05-010, Santa Fe.</ref> proves that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty", wherein any group of individuals of sufficient similarity (niche radius) have a penalty added, which will reduce the representation of that group in subsequent generations, permitting other (less similar) individuals to be maintained in the population. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique would be to simply replace part of the population with randomly generated individuals, when most of the population is too similar to each other. Diversity is important in genetic algorithms (and [[genetic programming]]) because crossing over a homogeneous population does not yield new solutions. In [[Evolution strategy|evolution strategies]] and [[evolutionary programming]], diversity is not essential because of a greater reliance on mutation.
* Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity somehow and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called ''triggered hypermutation''), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called ''random immigrants''). Again, [[Evolution strategy|evolution strategies]] and [[evolutionary programming]] can be implemented with a so-called "comma strategy" in which parents are not maintained and new parents are selected only from offspring. This can be more effective on dynamic problems.
* GAs cannot effectively solve problems in which the only fitness measure is a single right/wrong measure (like [[decision problem]]s), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure.
* For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include [[Evolution strategy|evolution strategies]], [[evolutionary programming]], [[simulated annealing]], [[Gaussian adaptation]], [[hill climbing]], and [[swarm intelligence]] (e.g.: [[ant colony optimization]], [[particle swarm optimization]]) and methods based on [[integer linear programming]]. The suitability of genetic algorithms is dependent on the amount of knowledge of the problem; well known problems often have better, more specialized approaches.
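
One of the diversity-restoring remedies mentioned above, replacing part of the population with randomly generated individuals ("random immigrants"), can be sketched as follows. The bit-string encoding, the replacement fraction, and the best-first ordering of the population are illustrative assumptions:

```python
import random

def inject_random_immigrants(population, fraction=0.1, length=16):
    # Replace the worst-ranked fraction of a best-first-sorted
    # population with fresh random bit strings to restore diversity.
    n = max(1, int(len(population) * fraction))
    population[-n:] = [[random.randint(0, 1) for _ in range(length)]
                       for _ in range(n)]
    return population
```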
== Variants ==
=== Chromosome representation ===
{{main | genetic representation }}
The simplest algorithm represents each chromosome as a [[Bit array|bit string]]. Typically, numeric parameters can be represented by [[integer]]s, though it is possible to use [[floating point]] representations. The floating point representation is natural to [[Evolution strategy|evolution strategies]] and [[evolutionary programming]]. The notion of real-valued genetic algorithms has been offered but is really a misnomer because it does not really represent the building block theory that was proposed by [[John Henry Holland]] in the 1970s. This theory is not without support though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a [[linked list]], [[associative array|hashes]], [[object (computer science)|objects]], or any other imaginable [[data structure]]. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.
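
For the bit-string case, the two basic bit-level operators can be sketched in a few lines of Python; the mutation rate and the use of Python lists for chromosomes are illustrative choices:

```python
import random

def one_point_crossover(p1, p2):
    # Cut both parents at the same random point and swap the tails,
    # so each child gets a prefix from one parent and a suffix from the other.
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def bit_flip_mutation(chromosome, pm=0.01):
    # Flip each bit independently with probability pm.
    return [1 - g if random.random() < pm else g for g in chromosome]
```

For richer chromosome types (linked lists, objects, and so on), the same two operators are redefined so that cuts and flips only happen at data-element boundaries.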
When bit-string representations of integers are used, [[Gray coding]] is often employed. In this way, small changes in the integer can be readily affected through mutations or crossovers. This has been found to help prevent premature convergence at so-called ''Hamming walls'', in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
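
The conversion between ordinary binary and Gray code is a pair of short bit tricks; consecutive integers then differ in exactly one bit, which is what avoids the Hamming walls described above:

```python
def binary_to_gray(n):
    # Adjacent integers map to codewords differing in exactly one bit.
    return n ^ (n >> 1)

def gray_to_binary(g):
    # Invert the encoding by folding the higher bits back down.
    n = g
    while g:
        g >>= 1
        n ^= g
    return n
```

Under Gray coding, a single bit-flip mutation always moves the encoded integer to a neighbouring codeword, so small numeric changes never require many simultaneous flips.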
Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes. This was explained as the set of real values in a finite population of chromosomes as forming a ''virtual alphabet'' (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating point representation.<ref name=Goldberg1991>{{cite book|last=Goldberg|first=David E.|title=Parallel Problem Solving from Nature|chapter=The theory of virtual alphabets|journal=Parallel Problem Solving from Nature, Lecture Notes in Computer Science|year=1991|volume=496|pages=13–22|doi=10.1007/BFb0029726|series=Lecture Notes in Computer Science|isbn=978-3-540-54148-6}}</ref><ref name=Janikow1991>{{cite journal|last1=Janikow|first1=C. Z.|first2=Z. |last2=Michalewicz |title=An Experimental Comparison of Binary and Floating Point Representations in Genetic Algorithms|journal=Proceedings of the Fourth International Conference on Genetic Algorithms|year=1991|pages=31–36|url=http://www.cs.umsl.edu/~janikow/publications/1991/GAbin/text.pdf|accessdate=2 July 2013}}</ref>
An expansion of the Genetic Algorithm accessible problem domain can be obtained through more complex encoding of the solution pools by concatenating several types of heterogenously encoded genes into one chromosome.<ref name=Patrascu2014>{{cite journal|last1=Patrascu|first1=M.|last2=Stancu|first2=A.F.|last3=Pop|first3=F.|title=HELGA: a heterogeneous encoding lifelike genetic algorithm for population evolution modeling and simulation|journal=Soft Computing|year=2014|volume=18|issue=12|pages=2565–2576|doi=10.1007/s00500-014-1401-y|s2cid=29821873}}</ref> This particular approach allows for solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes.
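
A crossover mechanism of the kind described, recombining the chromosome by section, might look like the following sketch: sections are swapped whole, so each heterogeneously encoded segment stays internally intact. The section layout and the 50/50 swap rule are illustrative assumptions:

```python
import random

def sectioned_crossover(p1, p2, sections):
    # Recombine two concatenated chromosomes section by section.
    # `sections` lists each section's length; a whole section is
    # inherited from one parent or the other, never split.
    child1, child2, start = [], [], 0
    for length in sections:
        end = start + length
        a, b = p1[start:end], p2[start:end]
        if random.random() < 0.5:
            a, b = b, a          # swap this section between the children
        child1 += a
        child2 += b
        start = end
    return child1, child2
```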
=== Elitism ===
A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy is known as ''elitist selection'' and guarantees that the solution quality obtained by the GA will not decrease from one generation to the next.<ref>{{cite conference |last1=Baluja |first1=Shumeet |first2=Rich |last2=Caruana |title=Removing the genetics from the standard genetic algorithm |conference=[[International Conference on Machine Learning|ICML]] |year=1995 |url=http://www.ri.cmu.edu/pub_files/pub2/baluja_shumeet_1995_1/baluja_shumeet_1995_1.pdf}}</ref>
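
A sketch of elitist survivor selection in Python; `breed` stands for whatever selection/crossover/mutation pipeline produces one offspring, and the elite count of two is an illustrative choice:

```python
def next_generation_with_elitism(population, fitness, breed, elite_count=2):
    # Carry the elite_count fittest individuals over unchanged, then
    # fill the rest of the new population with bred offspring.
    ranked = sorted(population, key=fitness, reverse=True)
    offspring = [breed() for _ in range(len(population) - elite_count)]
    return ranked[:elite_count] + offspring
```

Because the current best individuals always survive, the best fitness in the population is monotonically non-decreasing across generations.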
=== Parallel implementations ===
[[parallel algorithm|Parallel]] implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction.
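
The coarse-grained (island-model) migration step can be sketched as follows. For brevity individuals are represented by their fitness values only; ring topology with best-replaces-worst migration is one common choice among many, not a fixed rule:

```python
def migrate(islands, migrants=1):
    # Ring migration for a coarse-grained parallel GA: each island
    # sends its best `migrants` individuals to the next island,
    # where they replace that island's worst individuals.
    emigrants = [sorted(isl, reverse=True)[:migrants] for isl in islands]
    for i, isl in enumerate(islands):
        incoming = emigrants[(i - 1) % len(islands)]
        isl.sort()               # ascending, so the worst come first
        isl[:migrants] = incoming
    return islands
```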
Other variants, like genetic algorithms for [[online optimization]] problems, introduce time-dependence or noise in the fitness function.
=== Adaptive GAs ===
Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) is another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Instead of using fixed values of ''pc'' and ''pm'', AGAs utilize the population information in each generation and adaptively adjust the ''pc'' and ''pm'' in order to maintain the population diversity as well as to sustain the convergence capacity. In AGA (adaptive genetic algorithm),<ref>{{Cite journal |last1=Srinivas |first1=M. |last2=Patnaik |first2=L. |title=Adaptive probabilities of crossover and mutation in genetic algorithms |journal=IEEE Transactions on System, Man and Cybernetics |volume=24 |issue=4 |pages=656–667 |year=1994 |doi=10.1109/21.286385 |url=http://eprints.iisc.ac.in/6971/2/adaptive.pdf }}</ref> the adjustment of ''pc'' and ''pm'' depends on the fitness values of the solutions. In ''CAGA'' (clustering-based adaptive genetic algorithm),<ref>{{cite journal |last1=Zhang |first1=J. |last2=Chung |first2=H. |last3=Lo, W. L. |title=Clustering-Based Adaptive Crossover and Mutation Probabilities for Genetic Algorithms |journal=IEEE Transactions on Evolutionary Computation |volume=11 |issue=3 |pages=326&ndash;335 |year=2007 |doi=10.1109/TEVC.2006.880727 |s2cid=2625150 }}</ref> through the use of clustering analysis to judge the optimization states of the population, the adjustment of ''pc'' and ''pm'' depends on these optimization states.
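
The fitness-dependent adjustment of ''pc'' and ''pm'' can be sketched in the spirit of the Srinivas–Patnaik scheme: above-average individuals get rates that shrink as they approach the best fitness, while below-average individuals keep the full constants. The constants k1–k4 below are illustrative, not the values from the paper:

```python
def adaptive_rates(f, f_max, f_avg, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    # Return (pc, pm) for an individual of fitness f, given the
    # population's best (f_max) and average (f_avg) fitness.
    if f_max == f_avg:           # degenerate population: use full rates
        return k3, k4
    if f >= f_avg:               # good individuals are disrupted less
        pc = k1 * (f_max - f) / (f_max - f_avg)
        pm = k2 * (f_max - f) / (f_max - f_avg)
    else:                        # poor individuals are disrupted fully
        pc, pm = k3, k4
    return pc, pm
```

Note that the best individual receives pc = pm = 0 and is effectively protected, which is what sustains convergence while the rest of the population keeps exploring.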
It can be quite effective to combine GA with other optimization methods. GA tends to be quite good at finding generally good global solutions, but quite inefficient at finding the last few mutations to find the absolute optimum. Other techniques (such as [[Hill climbing|simple hill climbing]]) are quite efficient at finding absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of GA {{Citation needed|date=July 2016}} while overcoming the lack of robustness of hill climbing.
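
This hybrid (often called a memetic algorithm) can be sketched as a local-search refinement applied to the fittest individuals after each generation. The bit-flip neighbourhood, the number of refined individuals, and the step budget are illustrative assumptions:

```python
import random

def hill_climb(solution, fitness, steps=50):
    # Greedy bit-flip local search applied to one GA individual.
    best, best_f = solution[:], fitness(solution)
    for _ in range(steps):
        i = random.randrange(len(best))
        candidate = best[:]
        candidate[i] = 1 - candidate[i]   # flip one bit
        f = fitness(candidate)
        if f > best_f:                    # keep only improving moves
            best, best_f = candidate, f
    return best

def memetic_step(population, fitness, refine_top=3):
    # Refine the fittest few individuals with local search, leaving
    # the rest of the population to the usual genetic operators.
    population.sort(key=fitness, reverse=True)
    for i in range(refine_top):
        population[i] = hill_climb(population[i], fitness)
    return population
```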
This means that the rules of genetic variation may have a different meaning in the natural case. For instance &ndash; provided that steps are stored in consecutive order &ndash; crossing over may sum a number of steps from maternal DNA adding a number of steps from paternal DNA and so on. This is like adding vectors that more probably may follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the [[Chromosomal inversion|inversion operator]] has the opportunity to place steps in consecutive order or any other suitable order in favour of survival or efficiency.<ref>See for instance [http://www.thisurlisfalse.com/evolution-in-a-nutshell/ Evolution-in-a-nutshell] {{Webarchive|url=https://web.archive.org/web/20160415193505/http://www.thisurlisfalse.com/evolution-in-a-nutshell/ |date=15 April 2016 }} or example in [[travelling salesman problem]], in particular the use of an [[edge recombination operator]].</ref>
 
A variation, where the population as a whole is evolved rather than its individual members, is known as gene pool recombination.
 
A number of variations have been developed to attempt to improve performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the Building Block Hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA,<ref>{{cite journal |url=http://www.complex-systems.com/issues/03-5.html |first1=D. E. |last1=Goldberg |first2=B. |last2=Korb |first3=K. |last3=Deb |title=Messy Genetic Algorithms : Motivation Analysis, and First Results |journal=Complex Systems |volume=5 |issue=3 |pages=493–530 |year=1989 }}</ref> GEMGA<ref>[https://www.osti.gov/servlets/purl/524858 Gene expression: The missing link in evolutionary computation]</ref> and LLGA.<ref>{{cite thesis |last=Harik |first=G. |date=1997 |title=Learning linkage to efficiently solve problems of bounded difficulty using genetic algorithms |type=PhD |publisher=Dept. Computer Science, University of Michigan, Ann Arbour |docket= |url=http://portal.acm.org/citation.cfm?id=269517 }}</ref>
 
== Problem domains ==
Problems which appear to be particularly appropriate for solution by genetic algorithms include [[Timeline|timetabling]] and scheduling problems, and many scheduling software packages are based on GAs{{Citation needed|date=December 2011}}. GAs have also been applied to [[engineering]]<ref>Tomoiagă B, Chindriş M, Sumper A, Sudria-Andreu A, Villafafila-Robles R. [http://www.mdpi.com/1996-1073/6/3/1439/pdf Pareto Optimal Reconfiguration of Power Distribution Systems Using a Genetic Algorithm Based on NSGA-II. ] Energies. 2013; 6(3):1439-1455.</ref> and to reducing the length of psychological questionnaires.<ref name=Noetel2019>{{cite journal |doi= 10.1016/j.psychsport.2019.101545 |title=Using genetic algorithms to abbreviate the Mindfulness Inventory for Sport: A substantive-methodological synthesis |journal= Psychology of Sport and Exercise |volume=45 |issue=101545|year=2019 |last1=Noetel |first1= Michael |last2=Ciarrochi |first2=Joseph|last3=Sahdra |first3=Baljinder|last4=Lonsdale |first4=Chris|page=101545 |url=http://psyarxiv.com/ykqbp/ | name-list-style = vanc}}</ref> Genetic algorithms are often applied as an approach to solve [[global optimization]] problems.
 
As a general rule of thumb genetic algorithms might be useful in problem domains that have a complex [[fitness landscape]] as mixing, i.e., [[Mutation (genetic algorithm)|mutation]] in combination with [[Crossover (genetic algorithm)|crossover]], is designed to move the population away from [[local optima]] that a traditional [[hill climbing]] algorithm might get stuck in. Observe that commonly used crossover operators cannot change any uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a [[Markov chain]]).
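The point about uniform populations can be seen directly: crossover of two identical parents can only reproduce the parent, whereas bit-flip mutation can eventually reach any string. A small illustration with invented bit strings:

```python
import random

random.seed(0)

def one_point_crossover(a, b):
    # Cut both parents at the same point and splice the tails.
    p = random.randint(1, len(a) - 1)
    return a[:p] + b[p:]

def bit_flip_mutation(s, rate=0.2):
    # Flip each bit independently with probability `rate`.
    return ''.join(b if random.random() > rate else '10'[int(b)] for b in s)

uniform_population = ['1010'] * 6
child = one_point_crossover(uniform_population[0], uniform_population[1])
print(child == '1010')  # True: crossover alone cannot leave a uniform population

mutant = bit_flip_mutation('1010')  # mutation can produce genuinely new strings
```

Because every string has nonzero probability of being reached by repeated bit flips, mutation alone makes the process (viewed as a Markov chain over populations) ergodic; crossover alone does not.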
 
Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector,<ref>{{cite web|last=Gross|first=Bill|title=A solar energy system that tracks the sun|url=http://www.ted.com/talks/bill_gross_on_new_energy.html|work=TED|accessdate=20 November 2013}}</ref> antennae designed to pick up radio signals in space,<ref>{{citation |first1=G. S. |last1=Hornby |first2=D. S. |last2=Linden |first3=J. D. |last3=Lohn |url=http://ti.arc.nasa.gov/m/pub-archive/1244h/1244%20(Hornby).pdf |title=Automated Antenna Design with Evolutionary Algorithms}}</ref> walking methods for computer figures,<ref>{{Cite web | url=http://goatstream.com/research/papers/SA2013/index.html | title=Flexible Muscle-Based Locomotion for Bipedal Creatures}}</ref> and optimal design of aerodynamic bodies in complex flowfields.<ref>{{Cite journal|last1=Evans|first1=B.|last2=Walton|first2=S.P.|date=December 2017|title=Aerodynamic optimisation of a hypersonic reentry vehicle based on solution of the Boltzmann–BGK equation and evolutionary optimisation|journal=Applied Mathematical Modelling|volume=52|pages=215–240|doi=10.1016/j.apm.2017.07.024|issn=0307-904X|url=https://cronfa.swan.ac.uk/Record/cronfa34688}}</ref>
 
In his ''Algorithm Design Manual'', [[Steven Skiena|Skiena]] advises against genetic algorithms for any task:
 
{{blockquote|[I]t is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems. [...] [T]he analogy with evolution—where significant progress require [sic] millions of years—can be quite appropriate.

[...]

I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to [[simulated annealing]] for your heuristic search voodoo needs.|Steven Skiena<ref>{{cite book |last=Skiena |first=Steven |authorlink=Steven Skiena |title = The Algorithm Design Manual |publisher=[[Springer Science+Business Media]] |edition=2nd |year = 2010 |isbn=978-1-849-96720-4}}</ref>{{rp|267}}}}
== History ==
In 1950, [[Alan Turing]] proposed a "learning machine" which would parallel the principles of evolution.<ref name="mind.oxfordjournals.org">{{cite journal|last1=Turing|first1=Alan M.|title=Computing machinery and intelligence|journal=Mind|volume=LIX|issue=238|pages=433–460|doi=10.1093/mind/LIX.236.433|url=http://mind.oxfordjournals.org/content/LIX/236/433|date=October 1950}}</ref> Computer simulation of evolution started as early as in 1954 with the work of [[Nils Aall Barricelli]], who was using the computer at the [[Institute for Advanced Study]] in [[Princeton, New Jersey]].<ref name="Barricelli 1954 45–68">{{cite journal|last=Barricelli|first=Nils Aall|year=1954|authorlink=Nils Aall Barricelli|title=Esempi numerici di processi di evoluzione|journal=Methodos|pages=45–68}}</ref><ref name="Barricelli 1957 143–182">{{cite journal|last=Barricelli|first=Nils Aall|year=1957|authorlink=Nils Aall Barricelli|title=Symbiogenetic evolution processes realized by artificial methods|journal=Methodos|pages=143–182}}</ref> His 1954 publication was not widely noticed. Starting in 1957,<ref name="Fraser 1957 484–491">{{cite journal|last=Fraser|first=Alex|authorlink=Alex Fraser (scientist)|year=1957|title=Simulation of genetic systems by automatic digital computers. I. Introduction|journal=Aust. J. Biol. Sci.|volume=10|issue=4|pages=484–491|doi=10.1071/BI9570484|doi-access=free}}</ref> the Australian quantitative geneticist [[Alex Fraser (scientist)|Alex Fraser]] published a series of papers on simulation of [[artificial selection]] of organisms with multiple loci controlling a measurable trait. 
From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)<ref name="Fraser 1970">{{cite book|last1=Fraser|first1=Alex|authorlink=Alex Fraser (scientist)|first2=Donald |last2=Burnell|year=1970|title=Computer Models in Genetics|publisher=McGraw-Hill|location=New York|isbn=978-0-07-021904-5}}</ref> and Crosby (1973).<ref name="Crosby 1973">{{cite book|last=Crosby|first=Jack L.|year=1973|title=Computer Simulation in Genetics|publisher=John Wiley & Sons|location=London|isbn=978-0-471-18880-3}}</ref> Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, [[Hans-Joachim Bremermann]] published a series of papers in the 1960s that also adopted a population of solution to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms.<ref>[http://berkeley.edu/news/media/releases/96legacy/releases.96/14319.html 02.27.96 - UC Berkeley's Hans Bremermann, professor emeritus and pioneer in mathematical biology, has died at 69]</ref> Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by [[David B. Fogel|Fogel]] (1998).<ref>{{cite book|last=Fogel|first=David B. (editor)|year=1998|title=Evolutionary Computation: The Fossil Record|publisher=IEEE Press|location=New York|isbn=978-0-7803-3481-6}}</ref>
 
Although Barricelli, in work he reported in 1963, had simulated the evolution of ability to play a simple game,<ref>{{cite journal|last=Barricelli|first=Nils Aall|year=1963|title=Numerical testing of evolution theories. Part II. Preliminary tests of performance, symbiogenesis and terrestrial life|journal=Acta Biotheoretica|volume=16|issue=3–4|pages=99–126|doi=10.1007/BF01556602|s2cid=86717105}}</ref> [[artificial evolution]] only became a widely recognized optimization method as a result of the work of [[Ingo Rechenberg]] and [[Hans-Paul Schwefel]] in the 1960s and early 1970s &ndash; Rechenberg's group was able to solve complex engineering problems through [[Evolution strategy|evolution strategies]].<ref>{{cite book|last=Rechenberg|first=Ingo|year=1973|title=Evolutionsstrategie|place=Stuttgart|publisher=Holzmann-Froboog|isbn=978-3-7728-0373-4}}</ref><ref>{{cite book|last=Schwefel|first=Hans-Paul|year=1974|title=Numerische Optimierung von Computer-Modellen (PhD thesis)}}</ref><ref>{{cite book|last=Schwefel|first=Hans-Paul|year=1977|title=Numerische Optimierung von Computor-Modellen mittels der Evolutionsstrategie : mit einer vergleichenden Einführung in die Hill-Climbing- und Zufallsstrategie|place=Basel; Stuttgart | publisher=Birkhäuser| isbn=978-3-7643-0876-6}}</ref><ref>{{cite book|last=Schwefel|first=Hans-Paul|year=1981|title=Numerical optimization of computer models (Translation of 1977 Numerische Optimierung von Computor-Modellen mittels der Evolutionsstrategie|place=Chichester ; New York|publisher=Wiley|isbn=978-0-471-09988-8}}</ref> Another approach was the evolutionary programming technique of [[Lawrence J. Fogel]], which was proposed for generating artificial intelligence. [[Evolutionary programming]] originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. 
Genetic algorithms in particular became popular through the work of [[John Henry Holland|John Holland]] in the early 1970s, and particularly his book ''Adaptation in Natural and Artificial Systems'' (1975). His work originated with studies of [[cellular automata]], conducted by [[John Henry Holland|Holland]] and his students at the [[University of Michigan]]. Holland introduced a formalized framework for predicting the quality of the next generation, known as [[Holland's Schema Theorem]]. Research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held in [[Pittsburgh, Pennsylvania]].
===Commercial products===
In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes.<ref>{{Cite book|url=https://books.google.com/books?id=-MszVdu_PAMC&q=general+electric+genetic+algorithm+mainframe|title=An Approach to Designing an Unmanned Helicopter Autopilot Using Genetic Algorithms and Simulated Annealing|last=Aldawoodi|first=Namir|year=2008|isbn=978-0549773498|pages=99|via=Google Books}}</ref>
 
In 1989, Axcelis, Inc. released [[Evolver (software)|Evolver]], the world's first commercial GA product for desktop computers. [[The New York Times]] technology writer [[John Markoff]] wrote<ref>{{cite news|last=Markoff|first=John|title=What's the Best Answer? It's Survival of the Fittest|newspaper=New York Times|url=https://www.nytimes.com/1990/08/29/business/business-technology-what-s-the-best-answer-it-s-survival-of-the-fittest.html|accessdate=2016-07-13|date=29 August 1990}}</ref> about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995.<ref>Ruggiero, Murray A.. (1 August 2009) [http://www.futuresmag.com/2009/08/01/fifteen-years-and-counting?t=technology&page=2 Fifteen years and counting] {{Webarchive|url=https://web.archive.org/web/20160130054823/http://www.futuresmag.com/2009/08/01/fifteen-years-and-counting?t=technology&page=2 |date=30 January 2016 }}. Futuresmag.com. Retrieved on 2013-08-07.</ref> Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its 6th version.<ref>[http://www.palisade.com/evolver/ Evolver: Sophisticated Optimization for Spreadsheets]. Palisade. Retrieved on 2013-08-07.</ref> Since the 1990s, [[MATLAB]] has built in three [[derivative-free optimization]] heuristic algorithms (simulated annealing, particle swarm optimization, genetic algorithm) and two direct search algorithms (simplex search, pattern search).<ref>[https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8736798&tag=1 Benchmarks for Evaluating Optimization Algorithms and Benchmarking MATLAB Derivative-Free Optimizers for Practitioners’Rapid Access], IEEE Access, vol.7, 2019.</ref>
== Related techniques ==
{{See also|List of genetic algorithm applications}}
===Parent fields===
Genetic algorithms are a sub-field of:
*[[Evolutionary algorithms]]
*[[Evolutionary computing]]
*[[Metaheuristic]]s
*[[Stochastic optimization]]
*[[Optimization (mathematics)|Optimization]]
===Related fields===
====Evolutionary algorithms====
{{More citations needed section|date=May 2011}}
{{main|Evolutionary algorithm}}
Evolutionary algorithms are a sub-field of [[Evolutionary Computation|evolutionary computing]].
* [[Evolution strategy|Evolution strategies]] (ES, see Rechenberg, 1994) evolve individuals by means of mutation and intermediate or discrete recombination. ES algorithms are designed particularly to solve problems in the real-value domain.<ref>{{cite book|last=Cohoon|first=J|display-authors=etal|title=Evolutionary algorithms for the physical design of VLSI circuits|url= https://www.ifte.de/mitarbeiter/lienig/cohoon.pdf|journal=Advances in Evolutionary Computing: Theory and Applications|publisher= Springer, pp. 683-712, 2003|isbn=978-3-540-43330-9|year=2002}}</ref> They use self-adaptation to adjust control parameters of the search. De-randomization of self-adaptation has led to the contemporary Covariance Matrix Adaptation Evolution Strategy ([[CMA-ES]]).
* [[Evolutionary programming]] (EP) involves populations of solutions with primarily mutation and selection and arbitrary representations. They use self-adaptation to adjust parameters, and can include other variation operations such as combining information from multiple parents.
* [[Estimation of Distribution Algorithm]] (EDA) substitutes traditional reproduction operators by model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as Probabilistic Graphical Models, from which new solutions can be sampled<ref>{{cite book|last1=Pelikan|first1=Martin|last2=Goldberg|first2=David E.|last3=Cantú-Paz|first3=Erick|title=BOA: The Bayesian Optimization Algorithm|journal=Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation - Volume 1|date=1 January 1999|pages=525–532|url=http://dl.acm.org/citation.cfm?id=2933973|isbn=9781558606111|series=Gecco'99}}</ref><ref>{{cite book|last1=Pelikan|first1=Martin|title=Hierarchical Bayesian optimization algorithm : toward a new generation of evolutionary algorithms|date=2005|publisher=Springer|location=Berlin [u.a.]|isbn=978-3-540-23774-7|edition=1st}}</ref> or generated from guided-crossover.<ref>{{cite book|last1=Thierens|first1=Dirk|chapter=The Linkage Tree Genetic Algorithm|journal=Parallel Problem Solving from Nature, PPSN XI|date=11 September 2010|pages=264–273|doi=10.1007/978-3-642-15844-5_27|language=en|isbn=978-3-642-15843-8}}</ref>
* [[Gene expression programming]] (GEP) also uses populations of computer programs. These complex computer programs are encoded in simpler linear chromosomes of fixed length, which are afterwards expressed as expression trees. Expression trees or computer programs evolve because the chromosomes undergo mutation and recombination in a manner similar to the canonical GA. But thanks to the special organization of GEP chromosomes, these genetic modifications always result in valid computer programs.<ref>{{cite web|last=Ferreira|first=C|title=Gene Expression Programming: A New Adaptive Algorithm for Solving Problems|url= http://www.gene-expression-programming.com/webpapers/GEP.pdf|publisher= Complex Systems, Vol. 13, issue 2: 87-129.}}</ref>
* [[Genetic programming]] (GP) is a related technique popularized by [[John Koza]] in which computer programs, rather than function parameters, are optimized. Genetic programming often uses [[Tree (data structure)|tree-based]] internal [[data structure]]s to represent the computer programs for adaptation instead of the [[List (computing)|list]] structures typical of genetic algorithms.
* [[Grouping genetic algorithm]] (GGA) is an evolution of the GA where the focus is shifted from individual items, like in classical GAs, to groups or subset of items.<ref name="Falkenauer">{{cite book|last=Falkenauer|first=Emanuel|authorlink=Emanuel Falkenauer|year=1997|title=Genetic Algorithms and Grouping Problems|publisher=John Wiley & Sons Ltd|location=Chichester, England|isbn=978-0-471-97150-4}}</ref> The idea behind this GA evolution proposed by [[Emanuel Falkenauer]] is that solving some complex problems, a.k.a. ''clustering'' or ''partitioning'' problems where a set of items must be split into disjoint group of items in an optimal way, would better be achieved by making characteristics of the groups of items equivalent to genes. These kind of problems include [[bin packing problem|bin packing]], line balancing, [[cluster analysis|clustering]] with respect to a distance measure, equal piles, etc., on which classic GAs proved to perform poorly. Making genes equivalent to groups implies chromosomes that are in general of variable length, and special genetic operators that manipulate whole groups of items. For bin packing in particular, a GGA hybridized with the Dominance Criterion of Martello and Toth, is arguably the best technique to date.
* [[Interactive evolutionary algorithm]]s are evolutionary algorithms that use human evaluation. They are usually applied to domains where it is hard to design a computational fitness function, for example, evolving images, music, artistic designs and forms to fit users' aesthetic preference.
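To make the evolution-strategy entry above concrete, here is a minimal (1+1)-ES with the classic 1/5 success rule for self-adapting the mutation step size. The sphere objective, dimension, and schedule constants are arbitrary choices for the sketch, not prescribed by any source:

```python
import random

random.seed(3)

def sphere(x):
    # Simple real-valued objective to minimise.
    return sum(v * v for v in x)

x = [random.uniform(-5, 5) for _ in range(3)]
sigma = 1.0                          # mutation step size, self-adapted below
successes = 0
for t in range(1, 301):
    y = [v + random.gauss(0, sigma) for v in x]
    if sphere(y) < sphere(x):        # (1+1) selection: keep the better point
        x, successes = y, successes + 1
    if t % 20 == 0:                  # 1/5 success rule: widen or shrink sigma
        rate = successes / 20
        sigma *= 1.5 if rate > 0.2 else 0.6
        successes = 0
print(sphere(x))
```

The step-size adaptation is what distinguishes ES from a plain GA here: the search adjusts its own control parameter rather than using a fixed mutation scale.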
====Swarm intelligence====
{{main|Swarm intelligence}}
Swarm intelligence is a sub-field of [[Evolutionary Computation|evolutionary computing]].
* [[Ant colony optimization]] ('''ACO''') uses many ants (or agents) equipped with a pheromone model to traverse the solution space and find locally productive areas. It is sometimes considered an [[Estimation of distribution algorithm]].<ref>{{cite journal|last1=Zlochin|first1=Mark|last2=Birattari|first2=Mauro|last3=Meuleau|first3=Nicolas|last4=Dorigo|first4=Marco|title=Model-Based Search for Combinatorial Optimization: A Critical Survey|journal=Annals of Operations Research|date=1 October 2004|volume=131|issue=1–4|pages=373–395|doi=10.1023/B:ANOR.0000039526.52305.af|language=en|issn=0254-5330|citeseerx=10.1.1.3.427|s2cid=63137}}</ref>
* [[Particle swarm optimization]] (PSO) is a computational method for multi-parameter optimization which also uses a population-based approach. A population (swarm) of candidate solutions (particles) moves in the search space, and the movement of the particles is influenced both by their own best known position and the swarm's global best known position. Like genetic algorithms, the PSO method depends on information sharing among population members. In some problems the PSO is often more computationally efficient than GAs, especially in unconstrained problems with continuous variables.<ref>Rania Hassan, Babak Cohanim, Olivier de Weck, Gerhard Venter (2005) [https://www.mit.edu/~deweck/PDF_archive/3%20Refereed%20Conference/3_50_AIAA-2005-1897.pdf A comparison of particle swarm optimization and the genetic algorithm]</ref>
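The PSO update just described can be sketched in a few lines. The sphere objective, swarm size, and coefficient values (inertia ''w'', attraction weights ''c1'', ''c2'') are illustrative choices, not taken from the cited comparison:

```python
import random

random.seed(2)

def f(p):
    # Sphere function to minimise.
    return sum(x * x for x in p)

dim, n = 2, 15
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]          # each particle's best known position
gbest = min(pbest, key=f)[:]         # swarm's global best known position

w, c1, c2 = 0.7, 1.5, 1.5            # inertia and attraction weights
for _ in range(100):
    for i in range(n):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):  # information sharing: update the bests
            pbest[i] = pos[i][:]
            if f(pos[i]) < f(gbest):
                gbest = pos[i][:]
print(f(gbest))
```

Note there is no crossover or mutation: each particle is pulled toward its own best and the swarm's best, which is the sense in which PSO shares information among population members.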
====Other evolutionary computing algorithms====
Evolutionary computation is a sub-field of [[metaheuristic]] methods.
* [[Electimize algorithm]] is an evolutionary algorithm that simulates the phenomenon of electron flow and electrical conductivity. Some current research showed Electimize to be more efficient in solving NP-hard optimization problems than traditional evolutionary algorithms. The algorithm provides higher capacity in searching the solution space extensively, and identifying global optimal alternatives. Unlike other evolutionary algorithms, Electimize evaluates the quality of the values in the solution string independently.<ref>{{Cite journal|last1=Khalafallah Ahmed|last2=Abdel-Raheem Mohamed|date=2011-05-01|title=Electimize: New Evolutionary Algorithm for Optimization with Application in Construction Engineering|journal=Journal of Computing in Civil Engineering|volume=25|issue=3|pages=192–201|doi=10.1061/(ASCE)CP.1943-5487.0000080}}</ref>
* [[Memetic algorithm]] (MA), often called ''hybrid genetic algorithm'' among others, is a population-based method in which solutions are also subject to local improvement phases. The idea of memetic algorithms comes from [[meme]]s, which unlike genes, can adapt themselves. In some problem areas they are shown to be more efficient than traditional evolutionary algorithms.
* [[Bacteriologic algorithm]]s (BA) are inspired by [[evolutionary ecology]] and, more particularly, bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their environment, with the aim of discovering how they adapt. Its basic concept is that in a heterogeneous environment, there is not one individual that fits the whole environment. So, one needs to reason at the population level. It is also believed BAs could be successfully applied to complex positioning problems (antennas for cell phones, urban planning, and so on) or data mining.<ref>{{cite journal|url=http://www.irisa.fr/triskell/publis/2005/Baudry05d.pdf|first=Benoit|last=Baudry |author2=Franck Fleurey |author3-link=Jean-Marc Jézéquel|author3=Jean-Marc Jézéquel |author4=Yves Le Traon|title=Automatic Test Case Optimization: A Bacteriologic Algorithm|date=March–April 2005|pages=76–82|journal=IEEE Software|issue=2|doi=10.1109/MS.2005.30|volume=22|s2cid=3559602|accessdate=9 August 2009}}</ref>
* [[Cultural algorithm]] (CA) consists of a population component almost identical to that of the genetic algorithm and, in addition, a knowledge component called the belief space.
* [[Differential search algorithm]] (DS) is inspired by the migration of [[superorganism]]s.<ref>{{cite journal|last=Civicioglu|first=P.|title=Transforming Geocentric Cartesian Coordinates to Geodetic Coordinates by Using Differential Search Algorithm|journal=Computers & Geosciences|year=2012|volume=46|pages=229–247|doi=10.1016/j.cageo.2011.12.011|bibcode=2012CG.....46..229C}}</ref>
* [[Gaussian adaptation]] (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is intended for the maximisation of manufacturing yield of signal processing systems. It may also be used for ordinary parametric optimisation. It relies on a certain theorem valid for all regions of acceptability and all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of efficiency. Its efficiency is defined as information divided by the work needed to get the information.<ref>{{cite journal|last=Kjellström|first=G.|title= On the Efficiency of Gaussian Adaptation|journal=Journal of Optimization Theory and Applications|volume=71|issue=3|pages=589–597|date=December 1991|doi= 10.1007/BF00941405|s2cid=116847975}}</ref> Because NA maximises mean fitness rather than the fitness of the individual, the landscape is smoothed such that valleys between peaks may disappear. Therefore it has a certain "ambition" to avoid local peaks in the fitness landscape. NA is also good at climbing sharp crests by adaptation of the moment matrix, because NA may maximise the disorder ([[average information]]) of the Gaussian while simultaneously keeping the [[mean fitness]] constant.
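The local-improvement ("meme") phase that distinguishes a memetic algorithm from a plain GA can be sketched as follows. This is a minimal illustration, not any published implementation: the OneMax bit-string fitness, tournament size, crossover operator, and rates are all assumptions chosen for brevity.

```python
import random

def fitness(bits):
    # Toy objective (assumption): count of 1-bits ("OneMax").
    return sum(bits)

def local_improve(bits):
    # Greedy hill climb: keep any single-bit flip that raises fitness.
    best = fitness(bits)
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            bits[i] ^= 1                  # try flipping bit i
            f = fitness(bits)
            if f > best:
                best, improved = f, True  # keep the flip
            else:
                bits[i] ^= 1              # revert
    return bits

def memetic_generation(population, p_cross=0.9, p_mut=0.01):
    # A standard GA step (tournament selection, one-point crossover,
    # bit-flip mutation) followed by local improvement of each child.
    def select():
        return max(random.sample(population, 2), key=fitness)[:]
    children = []
    while len(children) < len(population):
        a, b = select(), select()
        if random.random() < p_cross and len(a) > 1:
            cut = random.randrange(1, len(a))
            a[cut:], b[cut:] = b[cut:], a[cut:]   # swap tails
        for child in (a, b):
            for i in range(len(child)):
                if random.random() < p_mut:
                    child[i] ^= 1
            children.append(local_improve(child))
    return children[:len(population)]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(10)]
pop = memetic_generation(pop)
```

On OneMax the hill climb alone already reaches the optimum, so every child is all-ones after a single generation; with a harder fitness function the interplay between the global GA search and the local phase becomes meaningful.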

====Other metaheuristic methods====

Metaheuristic methods broadly fall within [[Stochastic optimization|stochastic]] optimisation methods.
* [[Simulated annealing]] (SA) is a related global optimization technique that traverses the search space by testing random mutations on an individual solution. A mutation that increases fitness is always accepted. A mutation that lowers fitness is accepted probabilistically based on the difference in fitness and a decreasing temperature parameter. In SA parlance, one speaks of seeking the lowest energy instead of the maximum fitness. SA can also be used within a standard GA by starting with a relatively high rate of mutation and decreasing it over time along a given schedule.
* [[Tabu search]] (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest energy of those generated. In order to prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space.
* [[Extremal optimization]] (EO). Unlike GAs, which work with a population of candidate solutions, EO evolves a single solution and makes [[local search (optimization)|local]] modifications to the worst components. This requires that a suitable representation be selected which permits individual solution components to be assigned a quality measure ("fitness"). The governing principle behind this algorithm is that of ''emergent'' improvement through selectively removing low-quality components and replacing them with a randomly selected component. This is decidedly at odds with a GA that selects good solutions in an attempt to make better solutions.
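The accept/reject rule of simulated annealing described above can be sketched as follows. The one-dimensional energy landscape, neighbour move, and geometric cooling schedule are illustrative assumptions, not a definitive implementation.

```python
import math
import random

def simulated_annealing(energy, neighbour, state, t0=1.0, cooling=0.999, steps=5000):
    """Minimise `energy`: downhill moves are always accepted, uphill moves
    are accepted with probability exp(-dE/T), and T decreases geometrically."""
    t, e = t0, energy(state)
    best, best_e = state, e
    for _ in range(steps):
        cand = neighbour(state)
        ce = energy(cand)
        # Metropolis criterion: always accept improvements, sometimes accept
        # uphill moves so the search can escape local minima while T is high.
        if ce <= e or random.random() < math.exp((e - ce) / t):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        t *= cooling
    return best, best_e

random.seed(1)
energy = lambda x: x * x + 10 * math.cos(3 * x)     # toy multimodal landscape
neighbour = lambda x: x + random.uniform(-0.5, 0.5)
best, best_e = simulated_annealing(energy, neighbour, state=4.0)
```

To obtain the SA-within-GA variant mentioned above, the same temperature variable would instead drive a decaying mutation rate inside a standard GA loop.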

====Other stochastic optimisation methods====
* The [[Cross-entropy method|cross-entropy (CE) method]] generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration.
* Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for Reactive Search include machine learning and statistics, in particular [[reinforcement learning]], [[Active learning (machine learning)|active or query learning]], [[neural networks]], and [[metaheuristics]].
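The parameterized-distribution update of the cross-entropy method can be sketched for a one-dimensional Gaussian. The objective, sample sizes, iteration count, and variance floor are illustrative assumptions.

```python
import random
import statistics

def cross_entropy_minimise(f, mu=0.0, sigma=5.0, n_samples=50, n_elite=10, iters=40):
    """Draw candidates from N(mu, sigma), then refit mu and sigma to the
    n_elite best samples; repeating concentrates the distribution on a minimiser."""
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for _ in range(n_samples)]
        elite = sorted(xs, key=f)[:n_elite]     # lowest-f samples
        mu = statistics.mean(elite)             # cross-entropy update for a
        sigma = statistics.stdev(elite) + 1e-9  # Gaussian is just refitting;
        # the tiny floor keeps sigma from collapsing to exactly zero.
    return mu

random.seed(0)
x_best = cross_entropy_minimise(lambda x: (x - 3.0) ** 2)
```

In practice the new parameters are often mixed with the old ones (smoothed updates) to reduce the risk of premature convergence on multimodal objectives.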

== See also ==
* [[Genetic programming]]
* [[List of genetic algorithm applications]]
* [[particle filter|Genetic algorithms in signal processing (a.k.a. particle filters)]]
* [[Propagation of schema]]
* [[Universal Darwinism]]
* [[Metaheuristics]]
* [[Learning classifier system]]
* [[Rule-based machine learning]]

== References ==
{{Reflist|30em}}

== Bibliography ==
{{Refbegin}}
* {{Cite book | title=Genetic Programming &ndash; An Introduction | last1=Banzhaf | first1=Wolfgang | last2=Nordin | first2=Peter | last3=Keller | first3=Robert | last4=Francone | first4=Frank | year=1998 | isbn=978-1558605107 | publisher=Morgan Kaufmann | location=San Francisco, CA | ref=harv | url-access=registration | url=https://archive.org/details/geneticprogrammi00wolf }}
* {{Cite journal|last1=Bies|first1=Robert R. |last2=Muldoon |first2=Matthew F. |last3=Pollock |first3=Bruce G. |last4=Manuck |first4=Steven |last5=Smith |first5=Gwenn |last6=Sale |first6=Mark E. |year=2006|title=A Genetic Algorithm-Based, Hybrid Machine Learning Approach to Model Selection|journal=Journal of Pharmacokinetics and Pharmacodynamics|volume=33 |issue=2 |pages=196–221 |ref=harv|pmid=16565924 |doi=10.1007/s10928-006-9004-6 |s2cid=39571129 }}
* {{Cite journal|last1=Cha|first1=Sung-Hyuk|last2=Tappert |first2=Charles C. |year=2009|title=A Genetic Algorithm for Constructing Compact Binary Decision Trees|journal=Journal of Pattern Recognition Research |volume=4|issue=1|pages=1–13|doi=10.13176/11.44|citeseerx=10.1.1.154.8314 |ref=harv}}
* {{Cite journal|last=Fraser|first=Alex S.|year=1957|title=Simulation of Genetic Systems by Automatic Digital Computers. I. Introduction|journal=Australian Journal of Biological Sciences|volume=10|issue=4|pages=484–491|doi=10.1071/BI9570484 |ref=harv|doi-access=free}}
* {{Cite book| last=Goldberg | first=David | year=1989 | title=Genetic Algorithms in Search, Optimization and Machine Learning | isbn=978-0201157673 | publisher=Addison-Wesley Professional | location=Reading, MA | ref=harv}}
* {{Cite book| last=Goldberg | first=David | year=2002 | title=The Design of Innovation: Lessons from and for Competent Genetic Algorithms | publisher=Kluwer Academic Publishers | location=Norwell, MA | isbn=978-1402070983 | ref=harv}}
* {{Cite book| last=Fogel | first=David | title=Evolutionary Computation: Toward a New Philosophy of Machine Intelligence | publisher=IEEE Press | location=Piscataway, NJ | edition=3rd | isbn=978-0471669517 | year=2006 |ref=harv}}
* {{Cite book | last=Holland | first=John | title=Adaptation in Natural and Artificial Systems | publisher=MIT Press | location=Cambridge, MA | year=1992 | isbn=978-0262581110 | ref=harv | url-access=registration | url=https://archive.org/details/adaptationinnatu00holl }}
* {{Cite book | last=Koza | first=John | title=Genetic Programming: On the Programming of Computers by Means of Natural Selection | year = 1992 | publisher=MIT Press | location=Cambridge, MA | isbn=978-0262111706 | ref=harv}}
* {{Cite book | last=Michalewicz | first=Zbigniew | year=1996 | title=Genetic Algorithms + Data Structures = Evolution Programs | publisher=Springer-Verlag | isbn=978-3540606765 | ref=harv}}
* {{Cite book | last=Mitchell | first=Melanie | title=An Introduction to Genetic Algorithms | year=1996 | publisher=MIT Press | location=Cambridge, MA | isbn = 9780585030944 | ref=harv}}
* {{Cite book |last1=Poli |first1=R. |last2=Langdon |first2=W. B. |last3=McPhee |first3=N. F. |year=2008 |title=A Field Guide to Genetic Programming | publisher=Lulu.com, freely available from the internet | isbn = 978-1-4092-0073-4 |ref=harv}}{{self-published inline|date=February 2020}}
* Rechenberg, Ingo (1994): Evolutionsstrategie '94, Stuttgart: Fromman-Holzboog.
* {{cite journal |last1=Schmitt |first1=Lothar M. |last2=Nehaniv |first2=Chrystopher L. |last3=Fujii |first3=Robert H. |date=1998 |url=https://www.sciencedirect.com/science/article/pii/S0304397598000048/pdf?md5=28a658a4dc5aef635bbf3c8560129925&pid=1-s2.0-S0304397598000048-main.pdf&_valck=1 |title=Linear analysis of genetic algorithms |journal=Theoretical Computer Science |volume=208 |pages=111&ndash;148 |ref=harv}}
* {{cite journal |last1=Schmitt |first1=Lothar M. |date=2001 |title=Theory of Genetic Algorithms |journal=Theoretical Computer Science |volume=259 |issue=1–2 |pages=1&ndash;61 |ref=harv|doi=10.1016/S0304-3975(00)00406-0 }}
* {{cite journal |last1=Schmitt |first1=Lothar M. |date=2004 |title=Theory of Genetic Algorithms II: models for genetic operators over the string-tensor representation of populations and convergence to global optima for arbitrary fitness function under scaling |journal=Theoretical Computer Science |volume=310 |issue=1–3 |pages=181&ndash;231 |ref=harv|doi=10.1016/S0304-3975(03)00393-1 }}
* Schwefel, Hans-Paul (1974): Numerische Optimierung von Computer-Modellen (PhD thesis). Reprinted by Birkhäuser (1977).
* {{Cite book | last=Vose | first=Michael | year=1999 | title=The Simple Genetic Algorithm: Foundations and Theory | publisher=MIT Press | location=Cambridge, MA | isbn=978-0262220583 | ref=harv | url-access=registration | url=https://archive.org/details/TheSimpleG_00_Vose }}
* {{Cite journal | last=Whitley | first=Darrell | title=A genetic algorithm tutorial | journal=Statistics and Computing | doi=10.1007/BF00175354 | volume=4 | issue=2 | pages=65–85 | year=1994 | ref=harv | url=http://cobweb.cs.uga.edu/~potter/CompIntell/ga_tutorial.pdf | citeseerx=10.1.1.184.3999 | s2cid=3447126 }}<!--| accessdate=5 January 2013-->
* {{Cite book | last1=Hingston | first1=Philip | last2=Barone | first2=Luigi | last3=Michalewicz | first3=Zbigniew | title=Design by Evolution: Advances in Evolutionary Design | year=2008 | publisher=Springer | isbn=978-3540741091 | ref=harv}}
* {{Cite book | last1=Eiben | first1=Agoston | last2=Smith | first2=James | year=2003 | title=Introduction to Evolutionary Computing | publisher=Springer | isbn=978-3540401841 | ref=harv}}
{{Refend}}

== External links ==

=== Resources ===
* [https://web.archive.org/web/20160303215222/http://www.geneticprogramming.com/ga/index.htm A list of genetic algorithm resources]
* [https://www.staracle.com/general/evolutionaryAlgorithms.php An overview of the history of evolutionary algorithms]

=== Tutorials ===
* [https://www2.econ.iastate.edu/tesfatsi/holland.gaintro.htm Genetic Algorithms - Computer programs that "evolve" in ways that resemble natural selection can solve complex problems even their creators do not fully understand] An excellent introduction to GA by John Holland and with an application to the Prisoner's Dilemma
* [http://www.i4ai.org/EA-demo/ An online interactive Genetic Algorithm tutorial for a reader to practise or learn how a GA works]: Learn step by step or watch global convergence in batch, change the population size, crossover rates/bounds, mutation rates/bounds and selection mechanisms, and add constraints.
* [https://web.archive.org/web/20130615042000/http://samizdat.mines.edu/ga_tutorial/ga_tutorial.ps A Genetic Algorithm Tutorial by Darrell Whitley Computer Science Department Colorado State University] An excellent tutorial with much theory
* [http://cs.gmu.edu/~sean/book/metaheuristics/ "Essentials of Metaheuristics"], 2009 (225 p). Free open text by Sean Luke.
* [http://www.it-weise.de/projects/book.pdf Global Optimization Algorithms &ndash; Theory and Application]
* [https://mpatacchiola.github.io/blog/2017/03/14/dissecting-reinforcement-learning-5.html Genetic Algorithms in Python] Tutorial with the intuition behind GAs and Python implementation.
* [http://www-personal.umich.edu/~axe/research/Evolving.pdf Genetic Algorithms evolves to solve the prisoner's dilemma.] Written by Robert Axelrod.
{{Authority control}}
{{DEFAULTSORT:Genetic Algorithm}}
[[Category:Genetic algorithms| ]]
[[Category:Evolutionary algorithms]]
[[Category:Search algorithms]]
[[Category:Cybernetics]]
[[Category:Digital organisms]]
[[Category:Machine learning]]

[[sv:Genetisk programmering#Genetisk algoritm]]

<noinclude>
<small>This page was moved from [[wikipedia:en:Genetic algorithm]]. Its edit history can be viewed at [[遗传算法/edithistory]]</small></noinclude>
[[Category:待整理页面]]
 