第655行: |
第655行: |
| This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log<sup>*</sup>) is less than 5 for all practical data (2<sup>65536</sup> bits); (binary) log-log (log log n) is less than 6 for virtually all practical data (2<sup>64</sup> bits); and binary log (log n) is less than 64 for virtually all practical data (2<sup>64</sup> bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant time algorithm results in a larger constant factor, e.g., one may have <math>K > k \log \log n</math> so long as <math>K/k > 6</math> and <math>n < 2^{2^6} = 2^{64}</math>. | | This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log<sup>*</sup>) is less than 5 for all practical data (2<sup>65536</sup> bits); (binary) log-log (log log n) is less than 6 for virtually all practical data (2<sup>64</sup> bits); and binary log (log n) is less than 64 for virtually all practical data (2<sup>64</sup> bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant time algorithm results in a larger constant factor, e.g., one may have <math>K > k \log \log n</math> so long as <math>K/k > 6</math> and <math>n < 2^{2^6} = 2^{64}</math>. |
| | | |
− | 这种解释主要适用于增长极其缓慢的函数:(二进制)迭代对数(log<sup>*</sup>)对所有实际数据(2<sup>65536</sup>位)小于5;(二进制)log-log(log log n)对几乎所有实际数据(2<sup>64</sup>位)小于6;二进制log(log n)对几乎所有实际数据(2<sup>64</sup>位)小于64。如果常量时间算法的开销导致更大的常量因子,那么非常量复杂度的算法在实际数据上可能比常量复杂度的算法更有效,例如,只要 <math>K/k > 6</math> 且 <math>n < 2^{2^6} = 2^{64}</math>,就可能有 <math>K > k \log \log n</math>。
| + | 这种解释主要适用于增长极慢的函数:(二进制)迭代对数(log<sup>*</sup>)对所有实际数据(2<sup>65536</sup>位)小于5;(二进制)对数对数(log log n)对于几乎所有实际数据(2<sup>64</sup>位)小于6;对于几乎所有实际数据(2<sup>64</sup>位),二进制对数(log n)小于64。然而,如果恒定时间算法的开销导致更大的常数因子,则具有非恒定复杂度的算法在实际数据上可能比具有恒定复杂度的算法更有效,例如,只要 <math>K/k > 6</math> 且 <math>n < 2^{2^6} = 2^{64}</math>,则可以有 <math>K > k \log \log n</math>。 |
− | | |
− | | |
| | | |
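The thresholds quoted above (log* below 5, log log n below 6, log n below 64 for practical input sizes) can be checked numerically. The following sketch is illustrative only and not part of the article; the constants `K = 700` and `k = 100` are hypothetical per-call costs chosen to show the <math>K/k > 6</math> case.

```python
import math

def iterated_log2(n):
    """Binary iterated logarithm log*(n): how many times log2 must be
    applied before the value drops to 1 or below."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

n = 2**64                               # roughly the largest practical input size
print(math.log2(math.log2(n)))          # log log n = 6.0
print(math.log2(n))                     # log n = 64.0
print(iterated_log2(2**65536))          # log* of an astronomically large value: 5

# A "constant time" algorithm costing K steps per call loses to a
# k * log log n algorithm whenever K > k * log log n, i.e. K/k > 6
# for all n < 2**64. K and k below are hypothetical constants.
K, k = 700, 100
print(K > k * math.log2(math.log2(n)))  # True: the log-log algorithm wins here
```

The point is that for every input that fits in real hardware, `log log n` is bounded by a small constant, so the asymptotic comparison is dominated by the constant factors.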
| For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in [[hybrid algorithm]]s, like [[Timsort]], which use an asymptotically efficient algorithm (here [[merge sort]], with time complexity <math>n \log n</math>), but switch to an asymptotically inefficient algorithm (here [[insertion sort]], with time complexity <math>n^2</math>) for small data, as the simpler algorithm is faster on small data. | | For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in [[hybrid algorithm]]s, like [[Timsort]], which use an asymptotically efficient algorithm (here [[merge sort]], with time complexity <math>n \log n</math>), but switch to an asymptotically inefficient algorithm (here [[insertion sort]], with time complexity <math>n^2</math>) for small data, as the simpler algorithm is faster on small data. |
第663行: |
第661行: |
| For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity <math>n \log n</math>), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity <math>n^2</math>) for small data, as the simpler algorithm is faster on small data. | | For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity <math>n \log n</math>), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity <math>n^2</math>) for small data, as the simpler algorithm is faster on small data. |
| | | |
− | 对于大数据,线性或二次因素不能忽略,但对于小数据,渐近低效的算法可能更有效。这尤其适用于混合算法,如Timsort,它使用渐近有效的算法(这里指归并排序,时间复杂度 <math>n \log n</math>),但对于小数据,转换为渐近低效的算法(这里是插入排序,时间复杂度为 <math>n^2</math>),因为更简单的算法在小数据上更快。
| + | 对于大数据来讲,线性或二次项系数是不能忽略的,但对于小数据,渐近低效的算法可能更有效。这尤其适用于混合算法,如Timsort,它使用渐近有效的算法(这里指归并排序,时间复杂度 <math>n \log n</math>),但对于小数据,转换为渐近低效的算法(这里是插入排序,时间复杂度为 <math>n^2</math>),因为简单算法在小数据上更快。 |
| | | |
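The hybrid-algorithm idea described above can be sketched as follows. This is a minimal illustration in the spirit of Timsort, not Timsort itself: a merge sort (<math>n \log n</math>) that falls back to insertion sort (<math>n^2</math>) below a cutoff, because the simpler algorithm is faster on small slices. The cutoff value 32 is a hypothetical choice for illustration.

```python
CUTOFF = 32  # hypothetical small-input threshold, not Timsort's actual value

def insertion_sort(a):
    """O(n^2) sort that is fast in practice on short lists."""
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def hybrid_sort(a):
    """Merge sort that switches to insertion sort for small inputs."""
    if len(a) <= CUTOFF:                 # asymptotically worse, but the
        return insertion_sort(list(a))   # low overhead wins on small data
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
    merged, i, j = [], 0, 0              # standard two-way merge
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Real Timsort additionally detects pre-sorted runs and merges them adaptively; the sketch only shows the size-based switch between the two component algorithms.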
| == 参见 See also== | | == 参见 See also== |