A nonlinear lower bound of <math>\Omega(n\log n)</math> is known for the number of comparisons needed by a sorting algorithm. Thus the best sorting algorithms are optimal, as their complexity is <math>O(n\log n).</math> This lower bound results from the fact that there are {{math|''n''!}} ways of ordering {{mvar|n}} objects. As each comparison splits this set of {{math|''n''!}} orders into two parts, the number {{mvar|N}} of comparisons needed for distinguishing all orders must satisfy <math>2^N>n!,</math> which implies <math>N=\Omega(n\log n)</math> by Stirling's formula.
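The last step can be spelled out as follows (a sketch; the lower-order terms of Stirling's approximation are absorbed into the <math>O(\log n)</math> term). Taking base-2 logarithms of <math>2^N>n!</math> gives

:<math>N > \log_2(n!) = n\log_2 n - n\log_2 e + O(\log n) = \Omega(n\log n).</math>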
A standard method for getting lower bounds of complexity consists of reducing a problem to another problem. More precisely, suppose that one may encode a problem {{mvar|A}} of size {{mvar|n}} into a subproblem of size {{math|''f''(''n'')}} of a problem {{mvar|B}}, and that the complexity of {{mvar|A}} is <math>\Omega(g(n)).</math> Without loss of generality, one may suppose that the function {{mvar|f}} increases with {{mvar|n}} and has an inverse function {{mvar|h}}. Then the complexity of the problem {{mvar|B}} is <math>\Omega(g(h(n))).</math> It is this method that is used for proving that, if P ≠ NP (an unsolved conjecture), the complexity of every NP-complete problem is <math>\Omega(n^k)</math> for every positive integer {{mvar|k}}.
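To see where <math>\Omega(g(h(n)))</math> comes from, let <math>T(m)</math> denote the running time of a hypothetical algorithm for {{mvar|B}} on inputs of size {{mvar|m}} (a sketch that assumes the cost of the encoding itself is negligible). Via the encoding, that algorithm solves {{mvar|A}} in time about <math>T(f(n))</math>, so the lower bound for {{mvar|A}} forces <math>T(f(n))=\Omega(g(n))</math>; substituting <math>m=f(n)</math>, i.e. <math>n=h(m)</math>, gives

:<math>T(m)=\Omega(g(h(m))).</math>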
    
==Use in algorithm design==