The '''technological singularity'''—also, simply, '''the singularity'''<ref>Cadwalladr, Carole (2014). "[https://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence Are the robots about to rise? Google's new director of engineering thinks so…]" ''The Guardian''. Guardian News and Media Limited.</ref>—is a [[hypothetical]] point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.<ref>{{cite web |title=Collection of sources defining "singularity" |url=http://www.singularitysymposium.com/definition-of-singularity.html |website=singularitysymposium.com |accessdate=17 April 2019}}</ref><ref name="Singularity hypotheses">{{cite book |author1=Eden, Amnon H. |author2=Moor, James H. |title=Singularity hypotheses: A Scientific and Philosophical Assessment |date=2012 |publisher=Springer |location=Dordrecht |isbn=9783642325601 |pages=1–2}}</ref> According to the most popular version of the singularity hypothesis, called [[Technological singularity#Intelligence explosion|intelligence explosion]], an upgradable [[intelligent agent]] will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful [[superintelligence]] that qualitatively far surpasses all [[human intelligence]].
The first use of the concept of a "singularity" in the technological context was by [[John von Neumann]].<ref>''The Technological Singularity'' by Murray Shanahan, (MIT Press, 2015), page 233</ref> [[Stanislaw Ulam]] reports a discussion with von Neumann "centered on the [[Accelerating change|accelerating progress]] of technology and changes in the mode of human life, which gives the appearance of approaching some essential [[Wiktionary:singularity|singularity]] in the history of the race beyond which human affairs, as we know them, could not continue".<ref name="mathematical" /> Subsequent authors have echoed this viewpoint.<ref name="Singularity hypotheses" /><ref name="chalmers">{{Cite journal|last=Chalmers|first=David|date=2010|title=The singularity: a philosophical analysis|url=|journal=Journal of Consciousness Studies|volume=17|issue=9–10|pages=7–65|via=}}</ref>

[[I. J. Good]]'s "intelligence explosion" model predicts that a future superintelligence will trigger a singularity.<ref name="vinge1993">Vinge, Vernor. [http://mindstalk.net/vinge/vinge-sing.html "The Coming Technological Singularity: How to Survive in the Post-Human Era"], in ''Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace'', G. A. Landis, ed., NASA Publication CP-10129, pp. 11–22, 1993.</ref>

The concept and the term "singularity" were popularized by [[Vernor Vinge]] in his 1993 essay ''The Coming Technological Singularity'', in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.<ref name="vinge1993" />

Public figures such as [[Stephen Hawking]] and [[Elon Musk]] have expressed concern that full [[artificial intelligence]] (AI) could result in human extinction.<ref>{{cite news|last1=Sparkes|first1=Matthew|title=Top scientists call for caution over artificial intelligence|url=https://www.telegraph.co.uk/technology/news/11342200/Top-scientists-call-for-caution-over-artificial-intelligence.html|accessdate=24 April 2015|work=[[The Daily Telegraph|The Telegraph (UK)]]|date=13 January 2015}}</ref><ref>{{cite web|url=https://www.bbc.com/news/technology-30290540|title=Hawking: AI could end human race|date=2 December 2014|publisher=BBC|accessdate=11 November 2017}}</ref> The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.

Four polls of AI researchers, conducted in 2012 and 2013 by [[Nick Bostrom]] and [[Vincent C. Müller]], suggested a median probability estimate of 50% that [[artificial general intelligence]] (AGI) would be developed by 2040–2050.<ref name="newyorker">{{cite news|last1=Khatchadourian|first1=Raffi|title=The Doomsday Invention|url=https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=31 January 2018|work=The New Yorker|date=16 November 2015}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). "Future progress in artificial intelligence: A survey of expert opinion". In V. C. Müller (ed): ''Fundamental issues of artificial intelligence'' (pp. 555–572). Springer, Berlin. http://philpapers.org/rec/MLLFPI</ref>
==Background==
Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to [[Paul R. Ehrlich]], changed significantly for millennia.<ref name="Paul Ehrlich June 2008">Ehrlich, Paul. [http://www.longnow.org/seminars/02008/jun/27/dominant-animal-human-evolution-and-environment/ The Dominant Animal: Human Evolution and the Environment]</ref> However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans.<ref name="businessweek">[http://www.businessweek.com/1999/99_35/b3644021.htm Superbrains born of silicon will change everything.] {{webarchive |url=https://web.archive.org/web/20100801074729/http://www.businessweek.com/1999/99_35/b3644021.htm |date=August 1, 2010 }}</ref>

If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of. Such an AI is referred to as Seed AI because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware or design an even more capable machine. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.
==Intelligence explosion==
Intelligence explosion is a possible outcome of humanity building [[artificial general intelligence]] (AGI). AGI would be capable of recursive self-improvement, leading to the rapid emergence of [[Superintelligence|artificial superintelligence]] (ASI), the limits of which are unknown, shortly after technological singularity is achieved.

[[I. J. Good]] speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. He speculated on the effects of superhuman machines, should they ever be invented:<ref name="stat"/>

{{quote|Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.}}

Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (even more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.<ref name="stat"/>
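Good's runaway loop can be made concrete with a small toy model (an illustration only; the growth factor and design-time assumptions below are hypothetical, not taken from Good or any source cited here). If each machine is a fixed factor smarter than its designer, and a smarter designer finishes its successor proportionally faster, capability grows without bound while total elapsed time converges to a finite horizon—the "singularity" of the scenario:

<syntaxhighlight lang="python">
# Toy model of recursive self-improvement (illustrative assumptions only):
#   - each generation is `gain` times more capable than its designer
#   - a designer that is k times more capable works k times faster
def intelligence_explosion(gain=1.5, base_design_time=10.0, generations=25):
    capability, elapsed = 1.0, 0.0   # human-level baseline, no time spent yet
    for gen in range(1, generations + 1):
        elapsed += base_design_time / capability  # smarter designers finish sooner
        capability *= gain                        # successor exceeds its designer
        print(f"gen {gen:2d}: capability = {capability:12.2f}, t = {elapsed:7.3f}")
    # Elapsed time is a geometric series converging to a finite limit,
    # while capability diverges -- a finite-time "explosion".
    print("time horizon:", base_design_time * gain / (gain - 1))

intelligence_explosion()
</syntaxhighlight>

Under these assumptions capability grows geometrically while the clock only approaches t = 30 (in arbitrary units); relaxing either assumption—diminishing returns on intelligence, or design time that does not shrink—removes the finite horizon, which is one way of framing the sceptics' objections discussed below.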
==Other manifestations==
===Emergence of superintelligence===
{{Further|Superintelligence}}

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical [[intelligent agent|agent]] that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. [[John von Neumann]], [[Vernor Vinge]] and [[Ray Kurzweil]] define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.<ref name="vinge1993"/><ref name="singularity"/>

Technology forecasters and researchers disagree about if or when human intelligence is likely to be surpassed. Some argue that advances in [[artificial intelligence]] (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of [[futures studies]] scenarios combine elements from both of these possibilities, suggesting that humans are likely to [[brain–computer interface|interface with computers]], or [[mind uploading|upload their minds to computers]], in a way that enables substantial intelligence amplification.
===Non-AI singularity===
Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as [[molecular nanotechnology]],<ref name="hplusmagazine"/><ref name="yudkowsky.net"/><ref name="agi-conf"/> although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.<ref name="vinge1993" />
===Speed superintelligence===
A speed superintelligence describes an AI that can do everything that a human can do, where the only difference is that the machine runs faster.<ref>{{cite book |doi=10.1007/978-3-662-54033-6_2 |year=2017 |publisher=Springer Berlin Heidelberg |pages=11–23 |author=Kaj Sotala and Roman Yampolskiy |title=The Technological Singularity |chapter=Risks of the Journey to the Singularity |series=The Frontiers Collection |isbn=978-3-662-54031-2 |conference=The Frontiers Collection }}</ref> For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds.<ref name="singinst.org"/> Such a difference in information processing speed could drive the singularity.<ref>{{cite book |doi=10.1002/9781118922590.ch16 |year=2016 |publisher=John Wiley & Sons, Inc |pages=171–224 |author=David J. Chalmers |title=Science Fiction and Philosophy |chapter=The Singularity |isbn=9781118922590 |conference=Science Fiction and Philosophy }}</ref>
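As a quick check of the cited figure: a year is about 3.16 × 10<sup>7</sup> seconds, so at a million-fold speed-up a subjective year takes

<math>\frac{365.25 \times 24 \times 3600\ \mathrm{s}}{10^{6}} \approx 31.6\ \mathrm{s},</math>

roughly half a minute of physical time, which the sources round to 30 seconds.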
==Plausibility==
Many prominent technologists and academics dispute the plausibility of a technological singularity, including [[Paul Allen]], [[Jeff Hawkins]], [[John Henry Holland|John Holland]], [[Jaron Lanier]], and [[Gordon Moore]], whose [[Moore's law|law]] is often cited in support of the concept.<ref name="spectrum.ieee.org"/><ref name="ieee"/><ref name="Allen"/>
Most proposed methods for creating superhuman or [[transhuman]] minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The speculated ways to produce intelligence augmentation are many, and include [[bioengineering]], [[genetic engineering]], [[nootropic]] drugs, AI assistants, direct [[brain–computer interface]]s and [[mind uploading]]. Because multiple paths to an intelligence explosion are being explored, it makes a singularity more likely; for a singularity to not occur they would all have to fail.<ref name="singinst.org">{{cite web|url=http://singinst.org/overview/whatisthesingularity |title=What is the Singularity? | Singularity Institute for Artificial Intelligence |publisher=Singinst.org |accessdate=2011-09-09 |url-status=dead |archiveurl=https://web.archive.org/web/20110908014050/http://singinst.org/overview/whatisthesingularity/ |archivedate=2011-09-08 }}</ref>