{{#seo:
|keywords=analysis of algorithms,computational complexity,time complexity
|description=the process of finding the computational complexity of algorithms, i.e. the amount of time, storage, or other resources needed to execute them
}}
{{more footnotes|date=March 2010}}

[[File:Binary search vs Linear search example svg.svg|thumb|For looking up a given entry in a given ordered list, both the [[binary search algorithm|binary]] and the [[linear search]] algorithm (which ignores ordering) can be used. The analysis of the former and the latter algorithm shows that it takes at most log<sub>2</sub>(''n'') and ''n'' check steps, respectively, for a list of length ''n''. In the depicted example list of length 33, searching for ''"Morin, Arthur"'' takes 5 and 28 steps with binary (shown in {{color|#008080|cyan}}) and linear ({{color|#800080|magenta}}) search, respectively.]]

[[File:comparison_computational_complexity.svg|thumb|Graphs of functions commonly used in the analysis of algorithms, showing the number of operations ''N'' versus input size ''n'' for each function]]
      
In [[computer science]], the '''analysis of algorithms''' is the process of finding the [[computational complexity]] of algorithms – the amount of time, storage, or other resources needed to [[computation|execute them]]. Usually, this involves determining a [[Function (mathematics)|function]] that relates the length of an algorithm's input to the number of steps it takes (its [[time complexity]]) or the number of storage locations it uses (its [[space complexity]]). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same length may cause the algorithm to have different behavior, so [[best, worst and average case]] descriptions might all be of practical interest.  When not otherwise specified, the function describing the performance of an algorithm is usually an [[upper bound]], determined from the worst case inputs to the algorithm.
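The best-, worst- and average-case distinction can be made concrete with a short sketch. This is an illustrative Python fragment (the helper name and data are ours, not from the article) counting the comparisons a linear search performs on inputs of identical length:

```python
def linear_search_steps(items, target):
    """Count the comparisons a linear search makes before stopping."""
    steps = 0
    for value in items:
        steps += 1
        if value == target:
            break
    return steps  # if target is absent, all n elements were checked

data = list(range(10))  # every search below runs on the same length, n = 10

best = linear_search_steps(data, 0)     # target is first: 1 comparison
worst = linear_search_steps(data, 9)    # target is last: n comparisons
average = sum(linear_search_steps(data, t) for t in data) / len(data)
print(best, worst, average)  # 1 10 5.5
```

Three searches over inputs of the same length cost 1, 10, and on average 5.5 comparisons, which is why the three case descriptions are reported separately.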
 
In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. [[Big O notation]], [[Big-omega notation]] and [[Big-theta notation]] are used to this end. For instance, [[binary search]] is said to run in a number of steps proportional to the logarithm of the length of the sorted list being searched, or in O(log(n)), colloquially "in [[logarithmic time]]". Usually [[Asymptotic analysis|asymptotic]] estimates are used because different [[implementation]]s of the same algorithm may differ in efficiency. However the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor  called a ''hidden constant''.
 
Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called [[model of computation]]. A model of computation may be defined in terms of an [[abstract machine|abstract computer]], e.g., [[Turing machine]], and/or by postulating that certain operations are executed in unit time.
 
For example, if the sorted list to which we apply binary search has ''n'' elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log<sub>2</sub> ''n'' + 1 time units are needed to return an answer.
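As a sanity check of that log<sub>2</sub>(''n'') + 1 bound, here is a minimal Python sketch (our own function name; the article itself only assumes unit-time lookups) that counts the lookups binary search performs on a sorted 33-element list:

```python
import math

def binary_search_steps(sorted_items, target):
    """Return the number of element lookups binary search performs."""
    lo, hi = 0, len(sorted_items) - 1
    steps = 0
    while lo <= hi:
        steps += 1                      # one unit-time lookup of the middle
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            break
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

items = list(range(33))                 # a sorted list of length n = 33
bound = math.log2(len(items)) + 1       # about 6.04 time units
worst = max(binary_search_steps(items, t) for t in items)
print(worst, worst <= bound)  # 6 True
```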
 
Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are additional significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms.
 
Assuming the run-time follows a power rule, the coefficient ''a'' can be found by taking empirical measurements of the run-times <math>\{t_1, t_2\}</math> at some problem-size points <math>\{n_1, n_2\}</math>, calculating <math>t_2/t_1 = (n_2/n_1)^a</math>, and hence <math>a = \log(t_2/t_1) / \log(n_2/n_1)</math>.

In other words, this measures the slope of the empirical line on the log–log plot of run-time versus problem size, at some size point. If the order of growth indeed follows the power rule (and so the line on the log–log plot is indeed a straight line), the empirical value of ''a'' will stay constant at different ranges, and if not, it will change (and the line is a curve), but it can still serve to compare any two given algorithms as to their empirical local orders of growth. Applied to the table below:
{| class="wikitable"
|-
! ''n'' (list size)
! Computer A run-time<br />(in [[nanosecond]]s)
! Local order of growth<br />(n^_)
! Computer B run-time<br />(in [[nanosecond]]s)
! Local order of growth<br />(n^_)
|-
| 15
| 7
| 
| 100,000
| 
|-
| 65
| 32
| 1.04
| 150,000
| 0.28
|-
| 250
| 125
| 1.01
| 200,000
| 0.21
|-
| 1,000
| 500
| 1.00
| 250,000
| 0.16
|-
| ...
| ...
| 
| ...
| 
|-
| 1,000,000
| 500,000
| 1.00
| 500,000
| 0.10
|-
| 4,000,000
| 2,000,000
| 1.00
| 550,000
| 0.07
|-
| 16,000,000
| 8,000,000
| 1.00
| 600,000
| 0.06
|-
| ...
| ...
| 
| ...
| 
|}
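The local orders of growth in the table can be reproduced in a few lines of Python (run-times are the table's values in nanoseconds; the function name is ours):

```python
import math

def local_order(n1, t1, n2, t2):
    """Empirical exponent a such that t2/t1 = (n2/n1)^a."""
    return math.log(t2 / t1) / math.log(n2 / n1)

# First two rows of the table.
a_computer_a = local_order(15, 7, 65, 32)             # linear growth
a_computer_b = local_order(15, 100_000, 65, 150_000)  # strongly sub-linear
print(round(a_computer_a, 2), round(a_computer_b, 2))  # 1.04 0.28
```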
It is clearly seen that the first algorithm exhibits a linear order of growth indeed following the power rule. The empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any case has much lower local orders of growth (and improving further still), empirically, than the first one.
=== Evaluating run-time complexity ===

The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions.  Consider the following [[pseudocode]]:

 1    ''get a positive integer n from input''
 2    '''if''' n > 10
 3        '''print''' "This might take a while..."
 4    '''for''' i = 1 '''to''' n
 5        '''for''' j = 1 '''to''' i
 6            '''print''' i * j
 7    '''print''' "Done!"

A given computer will take a [[DTIME|discrete amount of time]] to execute each of the [[Instruction (computer science)|instructions]] involved with carrying out this algorithm.  The specific amount of time to carry out a given instruction will vary depending on which instruction is being executed and which computer is executing it, but on a conventional computer, this amount will be [[Deterministic system (mathematics)|deterministic]].<ref>However, this is not the case with a [[quantum computer]]</ref>  Say that the actions carried out in step 1 are considered to consume time ''T''<sub>1</sub>, step 2 uses time ''T''<sub>2</sub>, and so forth.
In the algorithm above, steps 1, 2 and 7 will only be run once.  For a worst-case evaluation, it should be assumed that step 3 will be run as well.  Thus the total amount of time to run steps 1-3 and step 7 is:

:<math>T_1 + T_2 + T_3 + T_7</math>
The [[program loop|loops]] in steps 4, 5 and 6 are trickier to evaluate.  The outer loop test in step 4 will execute ( ''n'' + 1 )times (note that an extra step is required to terminate the for loop, hence n + 1 and not n executions), which will consume ''T''<sub>4</sub>( ''n'' + 1 ) time.  The inner loop, on the other hand, is governed by the value of j, which [[iteration|iterates]] from 1 to ''i''.  On the first pass through the outer loop, j iterates from 1 to 1:  The inner loop makes one pass, so running the inner loop body (step 6) consumes ''T''<sub>6</sub> time, and the inner loop test (step 5) consumes 2''T''<sub>5</sub> time.  During the next pass through the outer loop, j iterates from 1 to 2:  the inner loop makes two passes, so running the inner loop body (step 6) consumes 2''T''<sub>6</sub> time, and the inner loop test (step 5) consumes 3''T''<sub>5</sub> time.
Altogether, the total time required to run the inner loop body can be expressed as an [[arithmetic progression]]:
:<math>T_6 + 2T_6 + 3T_6 + \cdots + (n-1) T_6 + n T_6</math>
 
which can be [[factorization|factored]]<ref>It can be proven by [[Mathematical induction|induction]] that <math>1 + 2 + 3 + \cdots + (n-1) + n = \frac{n(n+1)}{2}</math></ref> as
:<math>T_6 \left[ 1 + 2 + 3 + \cdots + (n-1) + n \right] = T_6 \left[ \frac{1}{2} (n^2 + n) \right] </math>
 
The total time required to run the outer loop test can be evaluated similarly:
:<math>\begin{align}
& 2T_5 + 3T_5 + 4T_5 + \cdots + (n-1) T_5 + n T_5 + (n + 1) T_5\\
=\ &T_5 + 2T_5 + 3T_5 + 4T_5 + \cdots + (n-1)T_5 + nT_5 + (n+1)T_5 - T_5
\end{align}</math>
which can be factored as

:<math>\begin{align}
& T_5 \left[ 1 + 2 + 3 + \cdots + (n-1) + n + (n+1) \right] - T_5\\
=\ &T_5 \left[ \frac{1}{2} (n^2 + 3n) \right]
\end{align}</math>
 
Therefore, the total running time for this algorithm is:
:<math>f(n) = T_1 + T_2 + T_3 + T_7 + (n + 1)T_4 + \left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2+3n) \right] T_5 </math>
which [[reduction (mathematics)|reduces]] to
:<math>f(n) = \left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7</math>
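The coefficients of T<sub>4</sub>, T<sub>5</sub> and T<sub>6</sub> in this expression can be double-checked by instrumenting the loops; the Python sketch below (counter names are ours) mirrors steps 4–6 of the pseudocode and counts how often each step runs:

```python
def step_counts(n):
    """Count executions of the outer test, inner test and inner body."""
    outer_tests = inner_tests = inner_bodies = 0
    i = 1
    while True:
        outer_tests += 1          # step 4: outer loop test
        if i > n:
            break
        j = 1
        while True:
            inner_tests += 1      # step 5: inner loop test
            if j > i:
                break
            inner_bodies += 1     # step 6: inner loop body
            j += 1
        i += 1
    return outer_tests, inner_tests, inner_bodies

n = 100
t4, t5, t6 = step_counts(n)
assert t4 == n + 1                 # matches (n + 1) T_4
assert t5 == (n * n + 3 * n) // 2  # matches [ (n^2 + 3n) / 2 ] T_5
assert t6 == (n * n + n) // 2      # matches [ (n^2 + n) / 2 ] T_6
```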
   −
As a [[rule-of-thumb]], one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order.  In this example, n<sup>2</sup> is the highest-order term, so one can conclude that f(n) = O(n<sup>2</sup>).  Formally this can be proven as follows:

Prove that <math>\left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 \le cn^2,\ n \ge n_0</math>
<br /><br />
<math>\begin{align}
&\left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7\\
\le &( n^2 + n )T_6 + ( n^2 + 3n )T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 \ (\text{for } n \ge 0 )
\end{align}</math>
<br /><br />
Let ''k'' be a constant greater than or equal to each of [''T''<sub>1</sub>..''T''<sub>7</sub>]:
<br /><br />
<math>\begin{align}
&T_6( n^2 + n ) + T_5( n^2 + 3n ) + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 \le k( n^2 + n ) + k( n^2 + 3n ) + kn + 5k\\
= &2kn^2 + 5kn + 5k \le 2kn^2 + 5kn^2 + 5kn^2 \ (\text{for } n \ge 1) = 12kn^2
\end{align}</math>
<br /><br />
Therefore <math>\left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 \le cn^2, n \ge n_0 \text{ for } c = 12k, n_0 = 1</math>
A more [[elegance|elegant]] approach to analyzing this algorithm would be to declare that [''T''<sub>1</sub>..''T''<sub>7</sub>] are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps.  This would mean that the algorithm's running time breaks down as follows:<ref>This approach, unlike the above approach, neglects the constant time consumed by the loop tests which terminate their respective loops, but it is [[Trivial (mathematics)|trivial]] to prove that such omission does not affect the final result</ref>
<math>4+\sum_{i=1}^n i\leq 4+\sum_{i=1}^n n=4+n^2\leq5n^2 \ (\text{for } n \ge 1) =O(n^2).</math>
 
===Growth rate analysis of other resources===
     −
The methodology of run-time analysis can also be utilized for predicting other growth rates, such as consumption of [[DSPACE|memory space]].  As an example, consider the following pseudocode which manages and reallocates memory usage by a program based on the size of a [[computer file|file]] which that program manages:
 '''while''' file is still open:
     '''let''' n = size of file
     '''for''' every 100,000 [[kilobyte]]s of increase in file size
         ''double the amount of memory reserved''

In this instance, as the file size n increases, memory will be consumed at an [[exponential growth]] rate, which is order O(2<sup>n</sup>). This is an extremely rapid and most likely unmanageable growth rate for consumption of memory [[Resource (computer science)|resources]].
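A tiny simulation of the pseudocode above makes the doubling explicit (the initial reservation of 100 KB is an assumed value, not from the article):

```python
def memory_reserved(file_size_kb, initial_kb=100, step_kb=100_000):
    """Reserved memory after doubling once per step_kb of file growth."""
    reserved = initial_kb
    for _ in range(file_size_kb // step_kb):
        reserved *= 2             # double the amount of memory reserved
    return reserved

# One doubling per 100,000 KB of file size: exponential growth, O(2^n).
print([memory_reserved(s) for s in (0, 100_000, 200_000, 300_000)])
# [100, 200, 400, 800]
```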
==Relevance==
     −
Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless.
==Constant factors==
     −
Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines 2<sup>32</sup> = 4 GiB (greater if [[segmented memory]] is used) and on 64-bit machines 2<sup>64</sup> = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are O(1) for a large enough constant, or for small enough data.
+
==常数因子==
   −
Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines 2<sup>32</sup> = 4 GiB (greater if segmented memory is used) and on 64-bit machines 2<sup>64</sup> = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are O(1) for a large enough constant, or for small enough data.
+
算法的分析通常侧重于算法的渐近性能,尤其是在初级阶段,但在实际应用中,常系数非常重要,但实际数据的大小往往是有限的。该限制通常表现为是可寻址内存的大小,因此在32位计算机上2<sup>32</sup> = 4 GiB(如果使用分段内存,则更大),在64位计算机上2<sup>64</sup> = 16 EiB。因此,给定一个有限的大小,一个增长的顺序(时间或空间)可以被一个常数因子代替,在这个意义上,所有实际的算法对于足够大的常数或足够小的数据都是O(1)
   −
算法的分析通常侧重于算法的渐近性能,尤其是在初级阶段,但在实际应用中,常系数非常重要,但实际数据的大小往往是有限的。该限制通常表现为是可寻址内存的大小,因此在32位计算机上232=4 GiB(如果使用分段内存,则更大),在64位计算机上264=16 EiB。因此,给定一个有限的大小,一个增长的顺序(时间或空间)可以被一个常数因子代替,在这个意义上,所有实际的算法对于足够大的常数或足够小的数据都是O(1)。
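作为一个数值上的直观示例（编者补充的演示草稿，并非原文内容）：在可寻址内存存在上界的前提下，即使是O(log n)的算法，其步数也被一个常数封顶。下面的Python片段估算二分查找在给定规模下最多需要的比较次数。

```python
import math

def binary_search_steps(n):
    """长度为 n 的有序列表上，二分查找最多需要的比较次数。"""
    return max(1, math.ceil(math.log2(n + 1)))

# 取 32 位可寻址内存的上限 n = 2**32，至多约 33 步：
# 在“数据规模有限”的意义下，二分查找可视为 O(1)，常数约为 33。
print(binary_search_steps(2**32))
```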
      +
这种解释主要适用于增长极慢的函数：(二进制)迭代对数(log<sup>*</sup>)对所有实际数据(2<sup>65536</sup>位)都小于5；(二进制)对数的对数(log log ''n'')对几乎所有实际数据(2<sup>64</sup>位)都小于6；二进制对数(log ''n'')对几乎所有实际数据(2<sup>64</sup>位)都小于64。尽管如此，如果常数时间算法的开销带来了更大的常数因子，那么在实际数据上，非常数复杂度的算法仍可能比常数复杂度的算法更高效：例如，只要<math>K/k > 6</math>且<math>n < 2^{2^6} = 2^{64}</math>，就可能有<math>K > k \log \log n</math>。
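上面关于增长极慢函数的数值可以直接验算。下面这段Python示例（编者补充的演示，并非原文内容）计算二进制迭代对数与“对数的对数”在这些实际数据规模边界处的取值。

```python
import math

def iterated_log2(x):
    """二进制迭代对数 log*(x)：反复取 log2 直到结果不大于 1 所需的次数。"""
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

# 即使 n 大到 2**65536，log*(n) 也只有 5
print(iterated_log2(2**65536))
# 当 n = 2**64 时，log2(log2(n)) = 6，log2(n) = 64
print(math.log2(math.log2(2**64)), math.log2(2**64))
```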
+
对于大数据，线性或二次的因子不可忽略；但对于小数据，渐近低效的算法反而可能更高效。这一点尤其体现在混合算法中，例如Timsort：它使用渐近高效的算法（这里指归并排序，时间复杂度<math>n \log n</math>），但对小数据切换为渐近低效的算法（这里指插入排序，时间复杂度<math>n^2</math>），因为更简单的算法在小数据上更快。
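这种“小数据换用简单算法”的策略可以用一段极简的Python草稿来示意（编者假设的演示代码，阈值等参数均为假设值，并非Timsort的真实实现）：

```python
THRESHOLD = 32  # 假设的切换阈值；真实的 Timsort 采用更复杂的 minrun 策略

def insertion_sort(a):
    """插入排序：O(n^2)，但常数因子小，适合小数据。"""
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def hybrid_sort(a):
    """混合排序：大数据用归并排序 O(n log n)，小数据退化为插入排序。"""
    if len(a) <= THRESHOLD:
        return insertion_sort(a)
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
    merged, i, j = [], 0, 0  # 合并两个已排序的子列表
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```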
==参见==
* [[平摊分析]]
* [[并行算法分析]]
* [[渐近计算复杂性]]
* [[最佳、最差和平均情况]]
* [[大O符号]]
* [[计算复杂性理论]]
* [[主定理(算法分析)]]
* [[NP-完全]]
* [[数值分析]]
* [[多项式时间]]
* [[程序优化]]
* [[仿形(计算机编程)]]
* [[可扩展性]]
* [[平滑分析]]
* [[终止分析]]：检查程序是否会终止这一子问题
* [[时间复杂度]]：包括常用算法增长量级的表格
* [[基于信息的复杂性]]
      +
==参考文献==
{{reflist}}

*{{Cite book |first=Thomas H. |last=Cormen |first2=Charles E. |last2=Leiserson |first3=Ronald L. |last3=Rivest |lastauthoramp=yes |authorlink4=Clifford Stein |first4=Clifford |last4=Stein |title=Introduction to Algorithms |edition=Second |publisher=MIT Press and McGraw-Hill |location=Cambridge, MA |year=2001 |isbn=0-262-03293-7 |others=Chapter 1: Foundations |pages=3–122 }}

*{{Cite book |title=Algorithms in C, Parts 1-4: Fundamentals, Data Structures, Sorting, Searching |edition=3rd |first=Robert |last=Sedgewick |location=Reading, MA |publisher=Addison-Wesley Professional |year=1998 |isbn=978-0-201-31452-6 |url-access=registration |url=https://archive.org/details/algorithmsinc00sedg }}

*{{Cite book |title=The Art of Computer Programming |first=Donald |last=Knuth |publisher=Addison-Wesley }}
      
*{{Cite book |first=Daniel A. |last=Greene |first2=Donald E. |last2=Knuth |title=Mathematics for the Analysis of Algorithms |edition=Second |location= |publisher=Birkhäuser |year=1982 |isbn=3-7643-3102-X }}
 
 
==编者推荐==

===集智课程===

====[]====
----
本中文词条由[[用户:薄荷|薄荷]]编辑,如有问题,欢迎在讨论页面留言。
   −
'''本词条内容源自wikipedia及公开资料,遵守 CC3.0协议。'''
   −
<noinclude>
     −
<small>This page was moved from [[wikipedia:en:Analysis of algorithms]]. Its edit history can be viewed at [[算法分析/edithistory]]</small></noinclude>
[[Category:算法分析]]
   −
[[Category:待整理页面]]
[[Category:计算复杂性理论]]