Analysis of algorithms
For looking up a given entry in a given ordered list, both the binary and the linear search algorithm (which ignores ordering) can be used. The analysis of the former and the latter algorithm shows that it takes at most log_{2}(n) and n check steps, respectively, for a list of length n. In the depicted example list of length 33, searching for "Morin, Arthur" takes 5 and 28 steps with binary and linear search, respectively.
Graphs of functions commonly used in the analysis of algorithms, showing the number of operations N versus input size n for each function
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms – the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the length of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same length may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm.
The term "analysis of algorithms" was coined by Donald Knuth.^{[1]} Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.
In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the length of the sorted list being searched, or in O(log(n)), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.
Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called model of computation. A model of computation may be defined in terms of an abstract computer, e.g., Turing machine, and/or by postulating that certain operations are executed in unit time.
For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log_{2} n + 1 time units are needed to return an answer.
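The log_{2} n + 1 bound can be checked with a small sketch that counts comparison steps for both search strategies; the list contents below are an arbitrary stand-in for a sorted list of entries.

```python
import math

def linear_search_steps(items, target):
    # Linear search: one check per element examined, left to right.
    steps = 0
    for value in items:
        steps += 1
        if value == target:
            break
    return steps

def binary_search_steps(items, target):
    # Binary search on a sorted list: one check per probe of the midpoint.
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

n = 33
entries = list(range(n))   # stand-in for a sorted list of 33 names
bound = math.floor(math.log2(n)) + 1
for target in entries:
    assert binary_search_steps(entries, target) <= bound   # at most log2(n)+1 probes
    assert linear_search_steps(entries, target) <= n       # at most n checks
```

Each lookup is treated as taking unit time, matching the assumption in the text; only the number of checks is counted.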
Cost models
Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the actual execution time, the time required to perform a step must be guaranteed to be bounded above by a constant. One must be careful here; for instance, some analyses count an addition of two numbers as one step. This assumption may not be warranted in certain contexts. For example, if the numbers involved in a computation may be arbitrarily large, the time required by a single addition can no longer be assumed to be constant.
Two cost models are generally used:^{[2]}^{[3]}^{[4]}^{[5]}^{[6]}
- the uniform cost model, also called uniform-cost measurement (and similar variations), assigns a constant cost to every machine operation, regardless of the size of the numbers involved
- the logarithmic cost model, also called logarithmic-cost measurement (and similar variations), assigns a cost to every machine operation proportional to the number of bits involved
The latter is more cumbersome to use, so it's only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography.
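The difference between the two cost models can be sketched as follows; the cost functions here are illustrative stand-ins, not a standard API, and the charge of one unit per bit in the logarithmic model is a simplifying assumption.

```python
def uniform_cost(a, b):
    # Uniform model: a single machine operation costs 1,
    # regardless of the size of the numbers involved.
    return 1

def logarithmic_cost(a, b):
    # Logarithmic model: cost proportional to the number of bits involved.
    return max(a.bit_length(), b.bit_length())

# For word-sized operands the two models agree up to a constant factor:
assert uniform_cost(7, 9) == 1
assert logarithmic_cost(7, 9) == 4          # 9 needs 4 bits

# For a 1000-bit operand, the logarithmic model charges ~1000 units,
# which is why it matters for arbitrary-precision arithmetic:
big = 2 ** 999
assert uniform_cost(big, big) == 1
assert logarithmic_cost(big, big) == 1000
```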
A key point which is often overlooked is that published lower bounds for problems are often given for a model of computation that is more restricted than the set of operations that you could use in practice and therefore there are algorithms that are faster than what would naively be thought possible.^{[7]}
Run-time analysis
Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or run-time) of an algorithm as its input size (usually denoted as n) increases. Run-time efficiency is a topic of great interest in computer science: A program can take seconds, hours, or even years to finish executing, depending on which algorithm it implements. While software profiling techniques can be used to measure an algorithm's run-time in practice, they cannot provide timing data for all infinitely many possible inputs; the latter can only be achieved by the theoretical methods of run-time analysis.
Shortcomings of empirical metrics
Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are additional significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms.
Take as an example a program that looks up a specific entry in a sorted list of size n. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a linear search algorithm, and on Computer B, a much slower machine, using a binary search algorithm. Benchmark testing on the two computers running their respective programs might look something like the following:
| n (list size) | Computer A run-time (in nanoseconds) | Computer B run-time (in nanoseconds) |
|---|---|---|
| 16 | 8 | 100,000 |
| 63 | 32 | 150,000 |
| 250 | 125 | 200,000 |
| 1,000 | 500 | 250,000 |
Based on these metrics, it would be easy to jump to the conclusion that Computer A is running an algorithm that is far superior in efficiency to that of Computer B. However, if the size of the input-list is increased to a sufficient number, that conclusion is dramatically demonstrated to be in error:
| n (list size) | Computer A run-time (in nanoseconds) | Computer B run-time (in nanoseconds) |
|---|---|---|
| 16 | 8 | 100,000 |
| 63 | 32 | 150,000 |
| 250 | 125 | 200,000 |
| 1,000 | 500 | 250,000 |
| ... | ... | ... |
| 1,000,000 | 500,000 | 500,000 |
| 4,000,000 | 2,000,000 | 550,000 |
| 16,000,000 | 8,000,000 | 600,000 |
| ... | ... | ... |
| 63,072 × 10^{12} | 31,536 × 10^{12} ns, or 1 year | 1,375,000 ns, or 1.375 milliseconds |
Computer A, running the linear search program, exhibits a linear growth rate. The program's run-time is directly proportional to its input size. Doubling the input size doubles the run time, quadrupling the input size quadruples the run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a logarithmic growth rate. Quadrupling the input size only increases the run time by a constant amount (in this example, 50,000 ns). Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time because it's running an algorithm with a much slower growth rate.
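The crossover can be reproduced with simple closed forms fitted to the benchmark tables above; the formulas are illustrative fits to the tabulated values, not measurements.

```python
import math

def time_A(n):
    # Linear search on the fast machine: run-time proportional to n
    # (about 0.5 ns per element, fitted to the first table rows).
    return 0.5 * n

def time_B(n):
    # Binary search on the slow machine: each quadrupling of n adds
    # roughly 50,000 ns (25,000 ns per doubling), fitted likewise.
    return 100_000 + 25_000 * math.log2(n / 16)

# Small inputs favour Computer A; large inputs inevitably favour Computer B.
for n in (1_000, 1_000_000, 16_000_000):
    print(n, time_A(n), time_B(n))
```

For n = 1,000 the linear machine is hundreds of times faster; by n = 16,000,000 the logarithmic machine is already more than an order of magnitude ahead.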
Orders of growth
Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function if beyond a certain input size n, the function f(n) times a positive constant provides an upper bound or limit for the run-time of that algorithm. In other words, for a given input size n greater than some n_{0} and a constant c, the running time of that algorithm will never be larger than c × f(n). This concept is frequently expressed using Big O notation. For example, since the run-time of insertion sort grows quadratically as its input size increases, insertion sort can be said to be of order O(n^{2}).
Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can also be used to express the average-case — for example, the worst-case scenario for quicksort is O(n^{2}), but the average-case run-time is O(n log n).
Empirical orders of growth
Assuming the execution time follows power rule, t ≈ k n^{a}, the coefficient a can be found ^{[8]} by taking empirical measurements of run time [math]\displaystyle{ \{t_1, t_2\} }[/math] at some problem-size points [math]\displaystyle{ \{n_1, n_2\} }[/math], and calculating [math]\displaystyle{ t_2/t_1 = (n_2/n_1)^a }[/math] so that [math]\displaystyle{ a = \log(t_2/t_1) / \log(n_2/n_1) }[/math]. In other words, this measures the slope of the empirical line on the log–log plot of execution time vs. problem size, at some size point. If the order of growth indeed follows the power rule (and so the line on log–log plot is indeed a straight line), the empirical value of a will stay constant at different ranges, and if not, it will change (and the line is a curved line) - but still could serve for comparison of any two given algorithms as to their empirical local orders of growth behaviour. Applied to the above table:
| n (list size) | Computer A run-time (in nanoseconds) | Local order of growth (n^_) | Computer B run-time (in nanoseconds) | Local order of growth (n^_) |
|---|---|---|---|---|
| 15 | 7 | | 100,000 | |
| 65 | 32 | 1.04 | 150,000 | 0.28 |
| 250 | 125 | 1.01 | 200,000 | 0.21 |
| 1,000 | 500 | 1.00 | 250,000 | 0.16 |
| ... | ... | | ... | |
| 1,000,000 | 500,000 | 1.00 | 500,000 | 0.10 |
| 4,000,000 | 2,000,000 | 1.00 | 550,000 | 0.07 |
| 16,000,000 | 8,000,000 | 1.00 | 600,000 | 0.06 |
| ... | ... | | ... | |
It is clearly seen that the first algorithm exhibits a linear order of growth indeed following the power rule. The empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any case has much lower local orders of growth (and improving further still), empirically, than the first one.
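The local orders in the table can be recomputed directly from adjacent rows using the slope formula a = log(t2/t1) / log(n2/n1):

```python
import math

def local_order(n1, t1, n2, t2):
    # Slope of the segment between (n1, t1) and (n2, t2) on a log-log plot.
    return math.log(t2 / t1) / math.log(n2 / n1)

# Adjacent rows from the table above: (list size, run-time in ns).
rows_A = [(15, 7), (65, 32), (250, 125), (1_000, 500)]
rows_B = [(15, 100_000), (65, 150_000), (250, 200_000), (1_000, 250_000)]

for (n1, t1), (n2, t2) in zip(rows_A, rows_A[1:]):
    print(f"Computer A: n^{local_order(n1, t1, n2, t2):.2f}")
for (n1, t1), (n2, t2) in zip(rows_B, rows_B[1:]):
    print(f"Computer B: n^{local_order(n1, t1, n2, t2):.2f}")
```

The values printed for Computer A hover around 1.0 (a straight line of slope 1 on the log-log plot), while those for Computer B keep shrinking, reproducing the table's rightmost columns.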
Evaluating run-time complexity
The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions. Consider the following pseudocode:
1  get a positive integer n from input
2  if n > 10
3      print "This might take a while..."
4  for i = 1 to n
5      for j = 1 to i
6          print i * j
7  print "Done!"
A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out this algorithm. The specific amount of time to carry out a given instruction will vary depending on which instruction is being executed and which computer is executing it, but on a conventional computer, this amount will be deterministic.^{[9]} Say that the actions carried out in step 1 are considered to consume time T_{1}, step 2 uses time T_{2}, and so forth.
In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that step 3 will be run as well. Thus the total amount of time to run steps 1-3 and step 7 is:
- [math]\displaystyle{ T_1 + T_2 + T_3 + T_7. \, }[/math]
The loops in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will execute ( n + 1 )times (note that an extra step is required to terminate the for loop, hence n + 1 and not n executions), which will consume T_{4}( n + 1 ) time. The inner loop, on the other hand, is governed by the value of j, which iterates from 1 to i. On the first pass through the outer loop, j iterates from 1 to 1: The inner loop makes one pass, so running the inner loop body (step 6) consumes T_{6} time, and the inner loop test (step 5) consumes 2T_{5} time. During the next pass through the outer loop, j iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2T_{6} time, and the inner loop test (step 5) consumes 3T_{5} time.
Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression:
- [math]\displaystyle{ T_6 + 2T_6 + 3T_6 + \cdots + (n-1) T_6 + n T_6 }[/math]
which can be factored^{[10]} as
- [math]\displaystyle{ T_6 \left[ 1 + 2 + 3 + \cdots + (n-1) + n \right] = T_6 \left[ \frac{1}{2} (n^2 + n) \right] }[/math]
The total time required to run the outer loop test can be evaluated similarly:
- [math]\displaystyle{ \begin{align} & 2T_5 + 3T_5 + 4T_5 + \cdots + (n-1) T_5 + n T_5 + (n + 1) T_5\\ =\ &T_5 + 2T_5 + 3T_5 + 4T_5 + \cdots + (n-1)T_5 + nT_5 + (n+1)T_5 - T_5 \end{align} }[/math]
which can be factored as
- [math]\displaystyle{ \begin{align} & T_5 \left[ 1+2+3+\cdots + (n-1) + n + (n + 1) \right] - T_5 \\ =& \left[ \frac{1}{2} (n^2 + n) \right] T_5 + (n + 1)T_5 - T_5 \\ =& T_5 \left[ \frac{1}{2} (n^2 + n) \right] + n T_5 \\ =& \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 \end{align} }[/math]
Therefore, the total running time for this algorithm is:
- [math]\displaystyle{ f(n) = T_1 + T_2 + T_3 + T_7 + (n + 1)T_4 + \left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2+3n) \right] T_5 }[/math]
which reduces to
- [math]\displaystyle{ f(n) = \left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 }[/math]
As a rule-of-thumb, one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order. In this example, n^{2} is the highest-order term, so one can conclude that f(n) = O(n^{2}). Formally this can be proven as follows:
Prove that [math]\displaystyle{ \left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 \le cn^2,\ n \ge n_0 }[/math]
[math]\displaystyle{ \begin{align}
&\left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7\\
\le &( n^2 + n )T_6 + ( n^2 + 3n )T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 \ (\text{for } n \ge 0 )
\end{align} }[/math]
Let k be a constant greater than or equal to each of [T_{1}..T_{7}]:
[math]\displaystyle{ \begin{align}
&T_6( n^2 + n ) + T_5( n^2 + 3n ) + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 \le k( n^2 + n ) + k( n^2 + 3n ) + kn + 5k\\
= &2kn^2 + 5kn + 5k \le 2kn^2 + 5kn^2 + 5kn^2 \ (\text{for } n \ge 1) = 12kn^2
\end{align} }[/math]
Therefore [math]\displaystyle{ \left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 \le cn^2, n \ge n_0 \text{ for } c = 12k, n_0 = 1 }[/math]
A more elegant approach to analyzing this algorithm would be to declare that [T_{1}..T_{7}] are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps. This would mean that the algorithm's running time breaks down as follows:^{[11]}
[math]\displaystyle{ 4+\sum_{i=1}^n i\leq 4+\sum_{i=1}^n n=4+n^2\leq5n^2 \ (\text{for } n \ge 1) =O(n^2). }[/math]
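The closed forms derived above for steps 5 and 6 can be sanity-checked by instrumenting the two loops of the pseudocode directly:

```python
def count_steps(n):
    # Instrument the loops of the pseudocode above: count executions of the
    # inner loop body (step 6) and evaluations of the inner loop test (step 5).
    body = 0
    test = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            test += 1    # test succeeds, body runs
            body += 1
        test += 1        # final failing test that terminates the inner loop
    return body, test

n = 100
body, test = count_steps(n)
assert body == (n**2 + n) // 2       # coefficient of T6: (n^2 + n)/2
assert test == (n**2 + 3 * n) // 2   # coefficient of T5: (n^2 + 3n)/2
```

The counts match the arithmetic-progression sums exactly, confirming the derivation rather than just its asymptotic order.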
Growth rate analysis of other resources
The methodology of run-time analysis can also be utilized for predicting other growth rates, such as consumption of memory space. As an example, consider the following pseudocode which manages and reallocates memory usage by a program based on the size of a file which that program manages:
while file is still open:
    let n = size of file
    for every 100,000 kilobytes of increase in file size
        double the amount of memory reserved
In this instance, as the file size n increases, memory will be consumed at an exponential growth rate, which is order O(2^{n}). This is an extremely rapid and most likely unmanageable growth rate for consumption of memory resources.
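The doubling policy can be sketched as a simple function of the file size; the initial reservation of 100,000 KB is an assumed starting value, not something the pseudocode specifies.

```python
def reserved_memory_kb(file_size_kb, initial_kb=100_000):
    # Double the reservation once per full 100,000 KB of file size,
    # mirroring the pseudocode above; initial_kb is an assumed starting value.
    doublings = file_size_kb // 100_000
    return initial_kb * 2 ** doublings

# The reservation grows exponentially in the file size:
for size in (100_000, 500_000, 1_000_000):
    print(size, reserved_memory_kb(size))
```

A file that grows by a factor of 10 (from 100,000 KB to 1,000,000 KB) multiplies the reservation by 2^9, which is the exponential blow-up the text warns about.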
Relevance
Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless.
Constant factors
Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines 2^{32} = 4 GiB (greater if segmented memory is used) and on 64-bit machines 2^{64} = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are O(1) for a large enough constant, or for small enough data.
This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log^{*}) is less than 5 for all practical data (2^{65536} bits); (binary) log-log (log log n) is less than 6 for virtually all practical data (2^{64} bits); and binary log (log n) is less than 64 for virtually all practical data (2^{64} bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant time algorithm results in a larger constant factor, e.g., one may have [math]\displaystyle{ K \gt k \log \log n }[/math] so long as [math]\displaystyle{ K/k \gt 6 }[/math] and [math]\displaystyle{ n \lt 2^{2^6} = 2^{64} }[/math].
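These bounds are easy to check numerically. The sketch below uses an assumed helper, `iterated_log2`, to evaluate the (binary) iterated logarithm and the log-log of 2^64, the largest practical data size mentioned above:

```python
import math

def iterated_log2(n):
    """Iterated logarithm: how many times log2 must be applied
    before the result drops to 1 or below."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

# For 64-bit-sized data these functions are effectively constant:
print(iterated_log2(2**64))          # iterated log of 2^64 is 5
print(math.log2(math.log2(2**64)))   # log log of 2^64 is 6.0
print(math.log2(2**64))              # log of 2^64 is 64.0
```

This is what makes the "replace a slowly growing function by a constant" reading workable: over every input that fits in memory, log log n never exceeds 6 and log n never exceeds 64.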
For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity [math]\displaystyle{ n \log n }[/math]), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity [math]\displaystyle{ n^2 }[/math]) for small data, as the simpler algorithm is faster on small data.
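The switching idea behind such hybrids can be sketched as follows. This is a minimal illustration in the spirit of Timsort, not its actual implementation (Timsort additionally exploits natural runs and galloping); the cutoff of 32 is an assumed value chosen for illustration:

```python
CUTOFF = 32  # assumed threshold below which insertion sort wins in practice

def insertion_sort(a):
    """O(n^2) in general, but very fast on short lists."""
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def hybrid_sort(a):
    """Merge sort (O(n log n)) that falls back to insertion sort
    on small sublists, where the simpler algorithm is faster."""
    if len(a) <= CUTOFF:
        return insertion_sort(a)
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
    # Standard merge step: repeatedly take the smaller front element.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The asymptotic cost is still O(n log n), since the insertion-sort calls only ever see constant-size inputs; the cutoff trades a constant factor, exactly as discussed above.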
See also
- Amortized analysis
- Best, worst and average case
- Big O notation
- Computational complexity theory
- Master theorem (analysis of algorithms)
- NP-Complete
- Numerical analysis
- Polynomial time
- Program optimization
- Profiling (computer programming)
- Scalability
- Smoothed analysis
- Termination analysis — the subproblem of checking whether a program will terminate at all
- Time complexity — includes table of orders of growth for common algorithms
- Information-based complexity
Notes
- ↑ "Knuth: Recent News". web.archive.org. 28 August 2016.
- ↑ Alfred V. Aho; John E. Hopcroft; Jeffrey D. Ullman (1974). The design and analysis of computer algorithms. Addison-Wesley Pub. Co.. https://archive.org/details/designanalysisof00ahoarich., section 1.3
- ↑ Juraj Hromkovič (2004). Theoretical computer science: introduction to Automata, computability, complexity, algorithmics, randomization, communication, and cryptography. Springer. pp. 177–178. ISBN 978-3-540-14015-3. https://books.google.com/books?id=KpNet-n262QC&pg=PA177.
- ↑ Giorgio Ausiello (1999). Complexity and approximation: combinatorial optimization problems and their approximability properties. Springer. pp. 3–8. ISBN 978-3-540-65431-5. https://books.google.com/books?id=Yxxw90d9AuMC&pg=PA3.
- ↑ Wegener, Ingo (2005), Complexity theory: exploring the limits of efficient algorithms, Berlin, New York: Springer-Verlag, p. 20, ISBN 978-3-540-21045-0
- ↑ Robert Endre Tarjan (1983). Data structures and network algorithms. SIAM. pp. 3–7. ISBN 978-0-89871-187-5. https://books.google.com/books?id=JiC7mIqg-X4C&pg=PA3.
- ↑ Examples of the price of abstraction?, cstheory.stackexchange.com
- ↑ How To Avoid O-Abuse and Bribes, archived 2017-03-08 at the Internet Archive, at the blog "Gödel's Lost Letter and P=NP" by R. J. Lipton, professor of Computer Science at Georgia Tech, recounting an idea by Robert Sedgewick
- ↑ However, this is not the case with a quantum computer
- ↑ It can be proven by induction that [math]\displaystyle{ 1 + 2 + 3 + \cdots + (n-1) + n = \frac{n(n+1)}{2} }[/math]
- ↑ This approach, unlike the above approach, neglects the constant time consumed by the loop tests which terminate their respective loops, but it is trivial to prove that such omission does not affect the final result
References
- Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. & Stein, Clifford (2001). Introduction to Algorithms. Chapter 1: Foundations (Second ed.). Cambridge, MA: MIT Press and McGraw-Hill. pp. 3–122. ISBN 0-262-03293-7.
- Sedgewick, Robert (1998). Algorithms in C, Parts 1-4: Fundamentals, Data Structures, Sorting, Searching (3rd ed.). Reading, MA: Addison-Wesley Professional. ISBN 978-0-201-31452-6. https://archive.org/details/algorithmsinc00sedg.
- Knuth, Donald. The Art of Computer Programming. Addison-Wesley.
- Greene, Daniel A.; Knuth, Donald E. (1982). Mathematics for the Analysis of Algorithms (Second ed.). Birkhäuser. ISBN 3-7643-3102-X.
- Goldreich, Oded (2010). Computational Complexity: A Conceptual Perspective. Cambridge University Press. ISBN 978-0-521-88473-0.
Category:Computational complexity theory
This page was moved from wikipedia:en:Analysis of algorithms. Its edit history can be viewed at 算法分析/edithistory