It is a common misconception that the evaluation of the complexity of algorithms will become less important as a result of Moore's law, which posits the exponential growth of the power of modern computers. This is wrong because this power increase allows working with large input data (big data). For example, when one wants to sort alphabetically a list of a few hundred entries, such as the bibliography of a book, any algorithm should work well in less than a second. On the other hand, for a list of a million entries (the phone numbers of a large town, for example), the elementary algorithms that require <math>O(n^2)</math> comparisons would have to do a trillion comparisons, which would need around a day at a speed of 10 million comparisons per second. On the other hand, quicksort and merge sort require only <math>n\log_2 n</math> comparisons (as average-case complexity for the former, as worst-case complexity for the latter). For a list of a million entries, this gives approximately 20 million comparisons, which would take only about 2 seconds at 10 million comparisons per second.
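The arithmetic above can be checked with a short sketch; the list size of one million entries and the assumed speed of 10 million comparisons per second are taken from the example in the text:

```python
import math

n = 1_000_000        # list size from the example (a large town's phone numbers)
speed = 10_000_000   # assumed speed: 10 million comparisons per second

quadratic = n * n                 # elementary O(n^2) algorithms: 10^12 comparisons
linearithmic = n * math.log2(n)   # quicksort / merge sort: ~2 * 10^7 comparisons

print(f"O(n^2): {quadratic:,} comparisons, ~{quadratic / speed / 3600:.0f} hours")
print(f"n log2 n: {linearithmic:,.0f} comparisons, ~{linearithmic / speed:.0f} seconds")
# O(n^2): 1,000,000,000,000 comparisons, ~28 hours
# n log2 n: 19,931,569 comparisons, ~2 seconds
```

The gap only widens with larger inputs: doubling the list quadruples the quadratic cost but only slightly more than doubles the linearithmic one.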
Thus the evaluation of the complexity may allow eliminating many inefficient algorithms before any implementation. It may also be used for tuning complex algorithms without testing all variants. By determining the most costly steps of a complex algorithm, the study of complexity also allows focusing the effort of improving the efficiency of an implementation on these steps.
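In practice, the costly steps of an existing implementation can also be located empirically by profiling it before optimizing anything. A minimal sketch using Python's standard profiler, where the selection-sort function is just an illustrative stand-in for "a complex algorithm":

```python
import cProfile
import io
import pstats
import random

def naive_sort(items):
    """Elementary selection sort; the repeated min() scan makes it O(n^2) overall."""
    items = list(items)
    for i in range(len(items)):
        j = min(range(i, len(items)), key=items.__getitem__)  # costliest step
        items[i], items[j] = items[j], items[i]
    return items

data = [random.random() for _ in range(2000)]

profiler = cProfile.Profile()
result = profiler.runcall(naive_sort, data)
assert result == sorted(data)

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)  # the 2000 min() calls dominate the running time
```

The profile confirms what the complexity analysis predicts: nearly all the time is spent in the inner scan, so that is the step worth improving (or the algorithm worth replacing).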
==See also==
 