获得复杂度下限的标准方法包括将一个问题转化为另一个问题。更准确地说,假设可以将一个大小为{{mvar|n}}的问题{{mvar|A}}编码成大小为{{math|''f''(''n'')}}的问题{{mvar|B}},且问题{{mvar|A}}的复杂度为<math>\Omega(g(n))</math>。不失一般性,我们可以假设函数{{mvar|f}}随着{{mvar|n}}的增加而增加,并且有一个'''<font color="#ff8000">反函数 inverse function</font>'''{{mvar|h}},那么问题{{mvar|B}}的复杂度就是<math>\Omega(g(h(n)))</math>。这个方法可以用来证明:若'''<font color="#ff8000">P ≠ NP</font>'''(一个未解决的猜想)成立,那么对于每个正整数{{mvar|k}},每个'''<font color="#ff8000">NP完全问题 NP-complete problem</font>'''的复杂度都是<math>\Omega(n^k)</math>。
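举一个纯粹示意性的数值例子(其中各函数的取法只是为了说明而做的假设):若编码后的问题规模为<math>f(n)=n^2</math>,则其反函数为<math>h(n)=\sqrt{n}</math>;若已知问题{{mvar|A}}的复杂度下限为<math>g(n)=2^n</math>,那么按上述方法,问题{{mvar|B}}的复杂度下限即为<math>\Omega(g(h(n)))=\Omega(2^{\sqrt{n}})</math>。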
==在算法设计中的应用==
Evaluating the complexity of an algorithm is an important part of algorithm design, as this gives useful information on the performance that may be expected.

评估算法的复杂度是'''<font color="#ff8000">算法设计 algorithm design</font>'''的一个重要组成部分,因为它给出了关于算法预期性能的有用信息。
It is a common misconception that the evaluation of the complexity of algorithms will become less important as a result of [[Moore's law]], which posits the [[exponential growth]] of the power of modern [[computer]]s. This is wrong because this power increase allows working with large input data ([[big data]]). For example, when one wants to sort alphabetically a list of a few hundred entries, such as the [[bibliography]] of a book, any algorithm should work well in less than a second. On the other hand, for a list of a million entries (the phone numbers of a large town, for example), the elementary algorithms that require <math>O(n^2)</math> comparisons would have to do a trillion comparisons, which would need around 28 hours at the speed of 10 million comparisons per second. On the other hand, [[quicksort]] and [[merge sort]] require only <math>n\log_2 n</math> comparisons (as average-case complexity for the former, as worst-case complexity for the latter). For {{math|1=''n'' = 1,000,000}}, this gives approximately 20,000,000 comparisons, which would only take 2 seconds at 10 million comparisons per second.

有一个常见的误解:由于'''<font color="#ff8000">摩尔定律 Moore's law</font>'''假定了现代计算机计算能力的'''<font color="#ff8000">指数增长 exponential growth</font>''',对算法复杂度的评估将变得不那么重要。这是错误的,因为计算能力的提升使得处理更大规模的输入数据('''<font color="#ff8000">大数据 big data</font>''')成为可能。例如,当人们想要按字母顺序对几百个条目的列表进行排序时,比如一本书的'''<font color="#ff8000">参考书目 bibliography</font>''',任何算法都应该能在不到一秒的时间内完成。另一方面,对于一个包含100万个条目的列表(例如一个大城镇的电话号码),需要<math>O(n^2)</math>次比较的基本算法须进行约1万亿次比较,以每秒1000万次的速度计算,大约需要28个小时。而'''<font color="#ff8000">快速排序 quicksort</font>'''和'''<font color="#ff8000">合并排序 merge sort</font>'''只需要<math>n\log_2 n</math>次比较(前者为平均情况复杂度,后者为最坏情况复杂度)。当{{math|1=''n'' = 1,000,000}}时,这大约是2000万次比较,以每秒1000万次的速度计算,只需要大约2秒钟。
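下面给出一个简短的 Python 脚本草稿(仅为示意,并非本条目原文内容):它在较小的规模上统计两类排序算法实际执行的比较次数,用来直观感受<math>n^2</math>与<math>n\log_2 n</math>的数量级差别。脚本中的函数名与所取的规模<math>n</math>都是为说明而设的假设。

<syntaxhighlight lang="python">
import math
import random


def insertion_sort_comparisons(values):
    """对 values 的副本做插入排序,返回元素之间比较的总次数。"""
    a = list(values)
    count = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            count += 1  # 每次判断 a[j] > key 记一次比较
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return count


def merge_sort_comparisons(values):
    """对 values 做归并排序,返回 (排好序的新列表, 比较次数)。"""
    if len(values) <= 1:
        return list(values), 0
    mid = len(values) // 2
    left, left_count = merge_sort_comparisons(values[:mid])
    right, right_count = merge_sort_comparisons(values[mid:])
    merged, i, j = [], 0, 0
    count = left_count + right_count
    while i < len(left) and j < len(right):
        count += 1  # 归并阶段每比较一对元素记一次
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, count


if __name__ == "__main__":
    n = 10_000  # 规模仅作示意;若取 n = 10**6,插入排序会慢到难以等待
    data = [random.random() for _ in range(n)]
    print("n^2                =", n * n)
    print("n*log2(n)          =", round(n * math.log2(n)))
    print("插入排序实际比较次数 =", insertion_sort_comparisons(data))
    print("归并排序实际比较次数 =", merge_sort_comparisons(data)[1])
</syntaxhighlight>

在<math>n = 10^4</math>的规模下即可看到,随机数据上插入排序的比较次数大致按<math>n^2</math>的比例增长(平均约为<math>n^2/4</math>),而归并排序的比较次数接近<math>n\log_2 n</math>;把<math>n</math>增大到<math>10^6</math>,就得到正文中估算的那种数量级差距。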
Thus the evaluation of the complexity may allow eliminating many inefficient algorithms before any implementation. This may also be used for tuning complex algorithms without testing all variants. By determining the most costly steps of a complex algorithm, the study of complexity also allows focusing the effort for improving the efficiency of an implementation on these steps.

因此,对复杂度的评估可以在编写任何实现之前就排除许多低效率的算法。这也可用于在不测试所有变体的情况下调优复杂的算法。通过确定复杂算法中代价最高的步骤,对复杂度的研究还可以把改进实现效率的工作重点放在这些步骤上。
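例如(一个假设的情形):若复杂度分析表明某个算法的总运行时间为<math>O(n^2) + O(n\log n)</math>,而其中<math>O(n^2)</math>的那一项来自单独某个子步骤,那么在输入规模较大时该子步骤将主导整体开销,改进实现时就应当优先优化这一步。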
==更多资料==

* [[Computational complexity of mathematical operations]] 数学运算的计算复杂性
==引用==

*{{Citation
 | last1=Arora | first1=Sanjeev | author-link1=Sanjeev Arora