Some of the proponents of the singularity argue its inevitability through extrapolation of past trends, especially those pertaining to the shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

{{quote|One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.}}
Kurzweil claims that technological progress follows a pattern of [[exponential growth]], following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history". Kurzweil believes that the singularity will occur by 2045. His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.
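As a rough illustration of the kind of extrapolation behind such forecasts (a minimal sketch; the baseline, threshold, and two-year doubling period are arbitrary assumptions, not Kurzweil's data):

<syntaxhighlight lang="python">
import math

def years_to_threshold(baseline: float, threshold: float, doubling_years: float) -> float:
    """Years until a metric growing as baseline * 2**(t / doubling_years) reaches threshold."""
    return doubling_years * math.log2(threshold / baseline)

# A metric a million times below its target, doubling every two years,
# crosses the target in about forty years.
print(years_to_threshold(1.0, 1e6, 2.0))  # ~39.9
</syntaxhighlight>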
    
Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us".
    
=== Algorithm improvements ===
The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately.{{citation needed|date=July 2017}} An AI rewriting its own source code could do so while contained in an [[AI box]].
    
Second, as with [[Vernor Vinge]]’s conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. [[Eliezer Yudkowsky]] compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again.<ref name="yudkowsky">{{cite web|author=Eliezer S. Yudkowsky |url=http://yudkowsky.net/singularity/power |title=Power of Intelligence |publisher=Yudkowsky |accessdate=2011-09-09}}</ref>
    
There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might not be invariant under self-improvement, potentially causing the AI to optimise for something other than what was originally intended.<ref name="selfawaresystems">[http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008 ]</ref><ref name="kurzweilai">{{cite web|url=http://www.kurzweilai.net/artificial-general-intelligence-now-is-the-time |title=Artificial General Intelligence: Now Is the Time |publisher=KurzweilAI |accessdate=2011-09-09}}</ref> Secondly, AIs could compete for the same scarce resources mankind uses to survive.<ref name="selfawaresystems.com">[http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/ Omohundro, Stephen M., "The Nature of Self-Improving Artificial Intelligence." Self-Aware Systems. 21 Jan. 2008. Web. 07 Jan. 2010.]</ref><ref>{{cite book|last1=Barrat|first1=James|title=Our Final Invention|year=2013|publisher=St. Martin's Press|location=New York|isbn=978-0312622374|pages=78–98|edition=First|chapter=6, "Four Basic Drives"|title-link=Our Final Invention}}</ref>
    
While not actively malicious, there is no reason to think that AIs would actively promote human goals unless they could be programmed as such, and if not, might use the resources currently used to support mankind to promote its own goals, causing human extinction.<ref name="kurzweilai.net">{{cite web|url=http://www.kurzweilai.net/max-more-and-ray-kurzweil-on-the-singularity-2 |title=Max More and Ray Kurzweil on the Singularity |publisher=KurzweilAI |accessdate=2011-09-09}}</ref><ref name="ReferenceB">{{cite web|url=http://singinst.org/riskintro/index.html |title=Concise Summary &#124; Singularity Institute for Artificial Intelligence |publisher=Singinst.org |accessdate=2011-09-09}}</ref><ref name="nickbostrom7">[http://www.nickbostrom.com/fut/evolution.html Bostrom, Nick, The Future of Human Evolution, Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy, pp. 339–371, 2004, Ria University Press.]</ref>
[[Carl Shulman]] and [[Anders Sandberg]] suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.<ref name=ShulmanSandberg2010>{{cite journal|last=Shulman|first=Carl|author2=Anders Sandberg |title=Implications of a Software-Limited Singularity|journal=ECAP10: VIII European Conference on Computing and Philosophy|year=2010|url=http://intelligence.org/files/SoftwareLimited.pdf|accessdate=17 May 2014|editor1-first=Klaus|editor1-last=Mainzer}}</ref> An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang."<ref name="MuehlhauserSalamon2012">{{cite book|last=Muehlhauser|first=Luke|title=Singularity Hypotheses: A Scientific and Philosophical Assessment|year=2012|publisher=Springer|chapter-url=http://intelligence.org/files/IE-EI.pdf|author2=Anna Salamon |editor=Amnon Eden |editor2=Johnny Søraker |editor3=James H. Moor |editor4=Eric Steinhart|chapter=Intelligence Explosion: Evidence and Import}}</ref>
    
=== Criticisms ===
Some critics, like philosopher [[Hubert Dreyfus]], assert that computers or machines cannot achieve [[human intelligence]], while others, like physicist [[Stephen Hawking]], hold that the definition of intelligence is irrelevant if the net result is the same.<ref name="dreyfus"/>
    
An early description of the idea was made in John Wood Campbell Jr.'s 1932 short story "The last evolution".
    
Psychologist [[Steven Pinker]] stated in 2008:
    
{{quote|... There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. ...<ref name="spectrum.ieee.org"/>}}
[[University of California, Berkeley]], [[philosophy]] professor [[John Searle]] writes:
    
{{blockquote|[Computers] have, literally ..., no [[intelligence]], no [[motivation]], no [[autonomy]], and no agency.  We design them to behave as if they had certain sorts of [[psychology]], but there is no psychological reality to the corresponding processes or behavior. ...  [T]he machinery has no beliefs, desires, [or] motivations.<ref>[[John R. Searle]], “What Your Computer Can’t Know”, ''[[The New York Review of Books]]'', 9 October 2014, p. 54.</ref>}}
         
[[Martin Ford (author)|Martin Ford]] in ''The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future''<ref name="thelightsinthetunnel"/> postulates a "technology paradox" in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be "routine."<ref name="nytimes"/>
    
[[Theodore Modis]]<ref name="google13"/><ref name="Singularity Myth"/> and [[Jonathan Huebner]]<ref name="technological14"/> argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer [[clock rate]]s is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.<ref name="cnet"/> While Kurzweil used Modis' resources, and Modis' work was around accelerating change, Modis distanced himself from Kurzweil's thesis of a "technological singularity", claiming that it lacks scientific rigor.<ref name="Singularity Myth"/>
In a 2007 paper, Schmidhuber stated that the frequency of subjectively "notable events" appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.<ref>Schmidhuber, Jürgen. "New millennium AI and the convergence of history." Challenges for computational intelligence. Springer Berlin Heidelberg, 2007. 15–35.</ref>
    
Paul Allen argued the opposite of accelerating returns, the complexity brake; the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his ''The Collapse of Complex Societies'', a law of diminishing returns. The number of patents per thousand peaked in the period from 1850 to 1900, and has been declining since. The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse".
    
[[Jaron Lanier]] refutes the idea that the Singularity is inevitable. He states: "I do not think the technology is creating itself. It's not an autonomous process."<ref name="lanier">{{cite web |author=Jaron Lanier |title=Who Owns the Future? |work=New York: Simon & Schuster |date=2013 |url=http://www.epubbud.com/read.php?g=JCB8D9LA&tocp=59}}</ref> He goes on to assert: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on ''not'' emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics."<ref name="lanier" />
    
[[Economics|Economist]] [[Robert J. Gordon]], in ''The Rise and Fall of American Growth:  The U.S. Standard of Living Since the Civil War'' (2016), points out that measured economic growth has slowed around 1970 and slowed even further since the [[financial crisis of 2007–2008]], and argues that the economic data show no trace of a coming Singularity as imagined by mathematician [[I.J. Good]].<ref>[[William D. Nordhaus]], "Why Growth Will Fall" (a review of [[Robert J. Gordon]], ''The Rise and Fall of American Growth:  The U.S. Standard of Living Since the Civil War'', Princeton University Press, 2016, {{ISBN|978-0691147727}}, 762 pp., $39.95), ''[[The New York Review of Books]]'', vol. LXIII, no. 13 (August 18, 2016), p. 68.</ref>
    
In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a [[Log-log plot|log-log]] chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist [[PZ Myers]] points out that many of the early evolutionary "events" were picked arbitrarily.<ref name="PZMyers"/> Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on [[:File:ParadigmShiftsFrr15Events.svg|a log-log chart]]. ''[[The Economist]]'' mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.<ref name="moreblades"/>
    
== Potential impacts ==
Dramatic changes in the rate of economic growth have occurred in the past because of some technological advancement. Based on population growth, the economy doubled every 250,000 years from the [[Paleolithic]] era until the [[Neolithic Revolution]]. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.<ref name="Hanson">{{Citation |url=http://www.spectrum.ieee.org/robotics/robotics-software/economics-of-the-singularity |title=Economics Of The Singularity |author=Robin Hanson |work=IEEE Spectrum Special Report: The Singularity }} & [http://hanson.gmu.edu/longgrow.pdf Long-Term Growth As A Sequence of Exponential Modes]</ref>
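The doubling times quoted above can be converted into annual growth rates with a one-line formula (a simple arithmetic check of the figures in the text, not a calculation from Hanson's paper):

<syntaxhighlight lang="python">
def annual_growth(doubling_years: float) -> float:
    """Annual growth rate implied by a given doubling time in years."""
    return 2 ** (1 / doubling_years) - 1

print(f"{annual_growth(900):.3%}")  # agricultural era: ~0.077% per year
print(f"{annual_growth(15):.3%}")   # industrial era: ~4.729% per year
print(900 / 15)                     # industrial doubling is 60x more frequent, as stated
</syntaxhighlight>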
    
=== Uncertainty and risk ===
In addition, some argue that we are already in the midst of a [[The Major Transitions in Evolution|major evolutionary transition]] that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence.
       
A 2016 article in ''[[Trends in Ecology & Evolution]]'' argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust [[artificial intelligence]] with our lives through [[Anti-lock braking system|antilock braking in cars]] and [[autopilot]]s in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction".
In February 2009, under the auspices of the [[Association for the Advancement of Artificial Intelligence]] (AAAI), [[Eric Horvitz]] chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire [[autonomy]], and to what degree they could use such abilities to pose threats or hazards.<ref name="nytimes july09" />
Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to the scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist.<ref name="nytimes july09">[https://www.nytimes.com/2009/07/26/science/26robot.html?_r=1&ref=todayspaper Scientists Worry Machines May Outsmart Man] By JOHN MARKOFF, NY Times, July 26, 2009.</ref>
    
Frank S. Robinson predicts that once humans achieve a machine with the intelligence of a human, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability.<ref name=":0">{{cite magazine |last=Robinson |first=Frank S. |title=The Human Future: Upgrade or Replacement? |magazine=[[The Humanist]] |date=27 June 2013 |url=https://thehumanist.com/magazine/july-august-2013/features/the-human-future-upgrade-or-replacement}}</ref> Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion. One example of this is solar energy, where the Earth receives vastly more solar energy than humanity captures, so capturing more of that solar energy would hold vast promise for civilizational growth.
    
== Hard vs. soft takeoff ==
Ramez Naam argues against a hard takeoff. He points out that we already see recursive self-improvement by superintelligences, such as corporations. Intel, for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law.<ref name=Naam2014Further>{{cite web|last=Naam|first=Ramez|title=The Singularity Is Further Than It Appears|url=http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html|accessdate=16 May 2014|year=2014}}</ref> Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1."<ref name=Naam2014Ascend>{{cite web|last=Naam|first=Ramez|title=Why AIs Won't Ascend in the Blink of an Eye - Some Math|url=http://www.antipope.org/charlie/blog-static/2014/02/why-ais-wont-ascend-in-blink-of-an-eye.html|accessdate=16 May 2014|year=2014}}</ref>
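Naam's complexity point can be made concrete with a toy model (the quadratic difficulty and linear design speed below are illustrative assumptions, not Naam's numbers): if the cost of designing the next level of intelligence grows faster than the intelligence doing the designing, successive self-improvement steps take longer and longer rather than exploding.

<syntaxhighlight lang="python">
# Toy model: design cost grows quadratically with the target intelligence,
# while design speed grows only linearly with current intelligence.
intelligence = 1.0
elapsed = 0.0
for step in range(1, 6):
    target = intelligence + 1.0
    cost = target ** 2            # superlinear difficulty (assumed)
    speed = intelligence          # design speed ~ current intelligence (assumed)
    elapsed += cost / speed       # each step takes longer than the last
    intelligence = target
    print(f"step {step}: intelligence {intelligence:.0f}, elapsed {elapsed:.2f}")
</syntaxhighlight>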
    
[[J. Storrs Hall]] believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the ''starting point'' of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.<ref name=Hall2008>{{cite journal|last=Hall|first=J. Storrs|title=Engineering Utopia|journal=Artificial General Intelligence, 2008: Proceedings of the First AGI Conference|date=2008|pages=460–467|url=http://www.agiri.org/takeoff_hall.pdf|accessdate=16 May 2014}}</ref>
      第430行: 第428行:  
[[Max More]] disagrees, arguing that if there were only a few superfast human-level AIs, that they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."<ref name=More>{{cite web|last1=More|first1=Max|title=Singularity Meets Economy|url=http://hanson.gmu.edu/vc.html#more|accessdate=10 November 2014}}</ref>
    
== Immortality ==
In his 2005 book, ''[[The Singularity is Near]]'', [[Ray Kurzweil|Kurzweil]] suggests that medical advances would allow people to protect their bodies from the effects of aging, making the [[Life extension|life expectancy limitless]]. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.<ref>''The Singularity Is Near'', p.&nbsp;215.</ref> Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests [[somatic gene therapy]]; after synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.<ref>''The Singularity is Near'', p.&nbsp;216.</ref>
       
[[K. Eric Drexler]], one of the founders of [[nanotechnology]], postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical [[biological machine]]s, in his 1986 book ''[[Engines of Creation]]''.
      第449行: 第447行:       −
According to [[Richard Feynman]], it was his former graduate student and collaborator [[Albert Hibbs]] who originally suggested to him (circa 1959) the idea of a ''medical'' use for Feynman's theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "[[Molecular machine#Biological|swallow the doctor]]". The idea was incorporated into Feynman's 1959 essay ''There's Plenty of Room at the Bottom''.
      第455行: 第453行:  
Beyond merely extending the operational life of the physical body, [[Jaron Lanier]] argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious".<ref>{{cite book |title = You Are Not a Gadget: A Manifesto |last = Lanier |first = Jaron |author-link = Jaron Lanier |publisher = [[Alfred A. Knopf]] |year = 2010 |isbn = 978-0307269645 |location = New York, NY |page = [https://archive.org/details/isbn_9780307269645/page/26 26] |url-access = registration |url = https://archive.org/details/isbn_9780307269645 }}</ref>
 
In his 1958 obituary for [[John von Neumann]], Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."<ref name=mathematical/>
    
In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence.
 
In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher [[Ray Solomonoff]] articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.<ref name=chalmers /><ref name="std"/>
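
The arithmetic behind the "infinity point" is a convergent geometric series; as a rough gloss (ours, not Solomonoff's exact formulation), with doubling times of four years, then two, then one, and so on, the total time needed for unboundedly many doublings is finite:

<math>4 + 2 + 1 + \tfrac{1}{2} + \cdots = \sum_{n=0}^{\infty} 4 \cdot 2^{-n} = 8,</math>

so the community's speed would grow without bound within eight years of the first doubling.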
    
Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era",<ref name="vinge1993" /> spread widely on the internet and helped to popularize the idea.<ref name="google5"/> This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.<ref name="vinge1993" />

In 2005, Kurzweil published ''[[The Singularity is Near]]''. His publicity campaign included an appearance on ''[[The Daily Show with Jon Stewart]]''.<ref name="episode"/>
    
In 2007, [[Eliezer Yudkowsky]] suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting.<ref name="yudkowsky.net"/><ref>Sandberg, Anders. "An overview of models of technological singularity." Roadmaps to AGI and the Future of AGI Workshop, Lugano, Switzerland, March. Vol. 8. 2010.</ref> For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.<ref name="yudkowsky.net"/>
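
One schematic way to see this tension (our illustration, not Yudkowsky's notation): smooth accelerating change is naturally modeled by an exponential, which remains finite and extrapolable at every date, whereas a discontinuous intelligence explosion behaves more like a hyperbola that diverges at a finite time <math>t_0</math>:

<math>f_{\text{exp}}(t) = e^{kt} \qquad \text{vs.} \qquad f_{\text{hyp}}(t) = \frac{C}{t_0 - t}.</math>

The first supports Kurzweil-style extrapolation past any milestone; the second assigns no meaningful value at or beyond <math>t_0</math>, in line with Good's discontinuity and Vinge's unpredictability.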
    
In 2009, Kurzweil and [[X-Prize]] founder [[Peter Diamandis]] announced the establishment of [[Singularity University]], a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."<ref name="singularityu"/> Funded by [[Google]], [[Autodesk]], [[ePlanet Ventures]], and a group of [[High tech|technology industry]] leaders, Singularity University is based at [[NASA]]'s [[Ames Research Center]] in [[Mountain View, California|Mountain View]], [[California]]. The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.