更改

添加1,184字节 、 2020年12月21日 (一) 20:58
第533行: 第533行:  
{{Harvtxt|Berglas|2008}} claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators.<ref name="nickbostrom8">Nick Bostrom, [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"], in ''Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence'', Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17</ref><ref name="singinst">[[Eliezer Yudkowsky]]: [http://singinst.org/upload/artificial-intelligence-risk.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk] {{webarchive|url=https://web.archive.org/web/20120611190606/http://singinst.org/upload/artificial-intelligence-risk.pdf |date=2012-06-11 }}. Draft for a publication in ''Global Catastrophic Risk'' from August 31, 2006, retrieved July 18, 2011 (PDF file)</ref><ref name="singinst9">[http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/ The Stamp Collecting Device, Nick Hay]</ref> [[Anders Sandberg]] has also elaborated on this scenario, addressing various common counter-arguments.<ref name="aleph">[http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html 'Why we should fear the Paperclipper'], 2011-02-14 entry of Sandberg's blog 'Andart'</ref> AI researcher [[Hugo de Garis]] suggests that artificial intelligences may simply eliminate the human race [[instrumental convergence|for access to scarce resources]],<ref name="selfawaresystems.com" /><ref name="selfawaresystems10">[http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.]</ref> and humans would be powerless to stop them.<ref name="forbes">de Garis, Hugo. [https://www.forbes.com/2009/06/18/cosmist-terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html "The Coming Artilect War"], Forbes.com, 22 June 2009.</ref> Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.<ref name="nickbostrom7" />
 
{{Harvtxt|Berglas|2008}} claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators.<ref name="nickbostrom8">Nick Bostrom, [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"], in ''Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence'', Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17</ref><ref name="singinst">[[Eliezer Yudkowsky]]: [http://singinst.org/upload/artificial-intelligence-risk.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk] {{webarchive|url=https://web.archive.org/web/20120611190606/http://singinst.org/upload/artificial-intelligence-risk.pdf |date=2012-06-11 }}. Draft for a publication in ''Global Catastrophic Risk'' from August 31, 2006, retrieved July 18, 2011 (PDF file)</ref><ref name="singinst9">[http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/ The Stamp Collecting Device, Nick Hay]</ref> [[Anders Sandberg]] has also elaborated on this scenario, addressing various common counter-arguments.<ref name="aleph">[http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html 'Why we should fear the Paperclipper'], 2011-02-14 entry of Sandberg's blog 'Andart'</ref> AI researcher [[Hugo de Garis]] suggests that artificial intelligences may simply eliminate the human race [[instrumental convergence|for access to scarce resources]],<ref name="selfawaresystems.com" /><ref name="selfawaresystems10">[http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.]</ref> and humans would be powerless to stop them.<ref name="forbes">de Garis, Hugo. [https://www.forbes.com/2009/06/18/cosmist-terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html "The Coming Artilect War"], Forbes.com, 22 June 2009.</ref> Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.<ref name="nickbostrom7" />
   −
{{Harvtxt|Berglas|2008}}声称,没有直接的进化动机促使人工智能对人类友好。进化并不具有产生人类所重视结果的内在倾向,也没有理由期望一个任意的优化过程会促进人类所期望的结果,而不是无意中导致人工智能以并非其创造者本意的方式行事。<ref name="nickbostrom8">Nick Bostrom, [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"], in ''Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence'', Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17</ref><ref name="singinst">[[Eliezer Yudkowsky]]: [http://singinst.org/upload/artificial-intelligence-risk.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk] {{webarchive|url=https://web.archive.org/web/20120611190606/http://singinst.org/upload/artificial-intelligence-risk.pdf |date=2012-06-11 }}. Draft for a publication in ''Global Catastrophic Risk'' from August 31, 2006, retrieved July 18, 2011 (PDF file)</ref><ref name="singinst9">[http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/ The Stamp Collecting Device, Nick Hay]</ref>[[Anders Sandberg]]也详细阐述了这种情况,并讨论了各种常见的反驳意见。<ref name="aleph">[http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html 'Why we should fear the Paperclipper'], 2011-02-14 entry of Sandberg's blog 'Andart'</ref>人工智能研究者[[Hugo de Garis]]认为,人工智能可能会仅仅为了[[工具性融合|获取稀缺资源]]而消灭人类,<ref name="selfawaresystems.com" /><ref name="selfawaresystems10">[http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.]</ref>而人类将无力阻止它们。<ref name="forbes">de Garis, Hugo. [https://www.forbes.com/2009/06/18/cosmist-terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html "The Coming Artilect War"], Forbes.com, 22 June 2009.</ref>另一种可能是,在进化压力下为促进自身生存而发展起来的人工智能,可能会在竞争中胜过人类。<ref name="nickbostrom7" />
+
{{Harvtxt|Berglas|2008}}声称,没有直接的进化动机促使人工智能对人类友好。进化并不具有产生人类所重视结果的内在倾向,也没有理由期望一个任意的优化过程会促进人类所期望的结果,而不是无意中导致人工智能以并非其创造者本意的方式行事。<ref name="nickbostrom8">Nick Bostrom, [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"], in ''Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence'', Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17</ref><ref name="singinst">[[Eliezer Yudkowsky]]: [http://singinst.org/upload/artificial-intelligence-risk.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk] {{webarchive|url=https://web.archive.org/web/20120611190606/http://singinst.org/upload/artificial-intelligence-risk.pdf |date=2012-06-11 }}。2006年8月31日《全球灾难性风险》出版物草稿,2011年7月18日检索(PDF文件)</ref><ref name="singinst9">[http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/ The Stamp Collecting Device, Nick Hay]</ref>[[Anders Sandberg]]也详细阐述了这种情况,并讨论了各种常见的反驳意见。<ref name="aleph">[http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html 'Why we should fear the Paperclipper'], 2011-02-14 entry of Sandberg's blog 'Andart'</ref>人工智能研究者[[Hugo de Garis]]认为,人工智能可能会仅仅为了[[工具性融合|获取稀缺资源]]而消灭人类,<ref name="selfawaresystems.com" /><ref name="selfawaresystems10">[http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.]</ref>而人类将无力阻止它们。<ref name="forbes">de Garis, Hugo. [https://www.forbes.com/2009/06/18/cosmist-terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html "The Coming Artilect War"], Forbes.com, 22 June 2009.</ref>另一种可能是,在进化压力下为促进自身生存而发展起来的人工智能,可能会在竞争中胜过人类。<ref name="nickbostrom7" />
    
{{Reflist
 
{{Reflist
第551行: 第551行:  
{{quote|When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.}}
 
{{quote|When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.}}
   −
{{quote|当我们创建第一个超级智能实体时,我们可能会犯一个错误,给它设定导致它毁灭人类的目标,而它巨大的智力优势又使它有能力这样做。例如,我们可能会错误地将一个子目标提升为超级目标。我们让它去解决一个数学问题,它照做了,把太阳系中的所有物质变成一个巨大的计算装置,并在这个过程中杀死了提出这个问题的人。}}
+
{{quote|当我们创建第一个超级智能实体时,我们可能会犯一个错误,给它设定导致它毁灭人类的目标,而它巨大的智力优势又使它有能力这样做。例如,我们可能会错误地将一个子目标提升为超级目标。我们让它去解决一个数学问题,它照做了,把太阳系中的所有物质变成一个巨大的计算装置,并在这个过程中杀死了提出这个问题的人。}}
      第557行: 第557行:  
According to [[Eliezer Yudkowsky]], a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">[http://singinst.org/upload/CEV.html Coherent Extrapolated Volition, Eliezer S. Yudkowsky, May 2004 ] {{webarchive|url=https://web.archive.org/web/20100815055725/http://singinst.org/upload/CEV.html |date=2010-08-15 }}</ref> {{harvtxt|Bill Hibbard|2014}} proposes an AI design that avoids several dangers including self-delusion,<ref name="JAGI2012">{{Citation| journal=Journal of Artificial General Intelligence| year=2012| volume=3| issue=1| title=Model-Based Utility Functions| first=Bill| last=Hibbard| postscript=.| doi=10.2478/v10229-011-0013-5| page=1|arxiv = 1111.3934 |bibcode = 2012JAGI....3....1H | s2cid=8434596}}</ref> unintended instrumental actions,<ref name="selfawaresystems"/><ref name="AGI-12a">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_56.pdf  Avoiding Unintended AI Behaviors.] Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle. [http://intelligence.org/2012/12/19/december-2012-newsletter/ This paper won the Machine Intelligence Research Institute's 2012 Turing Prize for the Best AGI Safety Paper].</ref> and corruption of the reward generator.<ref name="AGI-12a"/> He also discusses social impacts of AI<ref name="JET2008">{{Citation| url=http://jetpress.org/v17/hibbard.htm| journal=Journal of Evolution and Technology| year=2008| volume=17| title=The Technology of Mind and a New Social Contract| first=Bill| last=Hibbard| postscript=.}}</ref> and testing AI.<ref name="AGI-12b">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_57.pdf  Decision Support for Safe AI Design|.] Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle.</ref> His 2001 book ''[[Super-Intelligent Machines]]'' advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator.
 
According to [[Eliezer Yudkowsky]], a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">[http://singinst.org/upload/CEV.html Coherent Extrapolated Volition, Eliezer S. Yudkowsky, May 2004 ] {{webarchive|url=https://web.archive.org/web/20100815055725/http://singinst.org/upload/CEV.html |date=2010-08-15 }}</ref> {{harvtxt|Bill Hibbard|2014}} proposes an AI design that avoids several dangers including self-delusion,<ref name="JAGI2012">{{Citation| journal=Journal of Artificial General Intelligence| year=2012| volume=3| issue=1| title=Model-Based Utility Functions| first=Bill| last=Hibbard| postscript=.| doi=10.2478/v10229-011-0013-5| page=1|arxiv = 1111.3934 |bibcode = 2012JAGI....3....1H | s2cid=8434596}}</ref> unintended instrumental actions,<ref name="selfawaresystems"/><ref name="AGI-12a">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_56.pdf  Avoiding Unintended AI Behaviors.] Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle. [http://intelligence.org/2012/12/19/december-2012-newsletter/ This paper won the Machine Intelligence Research Institute's 2012 Turing Prize for the Best AGI Safety Paper].</ref> and corruption of the reward generator.<ref name="AGI-12a"/> He also discusses social impacts of AI<ref name="JET2008">{{Citation| url=http://jetpress.org/v17/hibbard.htm| journal=Journal of Evolution and Technology| year=2008| volume=17| title=The Technology of Mind and a New Social Contract| first=Bill| last=Hibbard| postscript=.}}</ref> and testing AI.<ref name="AGI-12b">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_57.pdf  Decision Support for Safe AI Design|.] Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle.</ref> His 2001 book ''[[Super-Intelligent Machines]]'' advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator.
   −
根据[[Eliezer Yudkowsky]]的说法,人工智能安全的一个重要问题是,不友好的人工智能可能比友好的人工智能更容易创建。虽然两者都需要递归优化过程设计上的巨大进步,但友好的人工智能还需要能够使目标结构在自我改进中保持不变(否则人工智能可能把自己转变成不友好的东西),以及一个与人类价值观相一致且不会自动毁灭人类的目标结构。另一方面,不友好的人工智能可以针对任意的目标结构进行优化,而该目标结构不需要在自我修改下保持不变。<ref name="singinst12">[http://singinst.org/upload/CEV.html Coherent Extrapolated Volition, Eliezer S. Yudkowsky, May 2004 ] {{webarchive|url=https://web.archive.org/web/20100815055725/http://singinst.org/upload/CEV.html |date=2010-08-15 }}</ref>{{harvtxt|Bill Hibbard|2014}}提出了一种可以避免若干危险的人工智能设计,这些危险包括自我妄想、<ref name="JAGI2012">{{Citation| journal=Journal of Artificial General Intelligence| year=2012| volume=3| issue=1| title=Model-Based Utility Functions| first=Bill| last=Hibbard| postscript=.| doi=10.2478/v10229-011-0013-5| page=1|arxiv = 1111.3934 |bibcode = 2012JAGI....3....1H | s2cid=8434596}}</ref>意外的工具性行为<ref name="selfawaresystems"/><ref name="AGI-12a">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_56.pdf  Avoiding Unintended AI Behaviors.] Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle. [http://intelligence.org/2012/12/19/december-2012-newsletter/ This paper won the Machine Intelligence Research Institute's 2012 Turing Prize for the Best AGI Safety Paper].</ref>以及奖励生成器被破坏。<ref name="AGI-12a"/>他还讨论了人工智能的社会影响<ref name="JET2008">{{Citation| url=http://jetpress.org/v17/hibbard.htm| journal=Journal of Evolution and Technology| year=2008| volume=17| title=The Technology of Mind and a New Social Contract| first=Bill| last=Hibbard| postscript=.}}</ref>以及对人工智能的测试。<ref name="AGI-12b">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_57.pdf  Decision Support for Safe AI Design|.] Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle.</ref>他2001年出版的著作《[[超级智能机器]]》主张就人工智能开展公众教育,并对人工智能实行公共控制。书中还提出了一个简单的设计,而该设计容易受到奖励生成器被破坏的影响。
+
按照[[Eliezer Yudkowsky]]的观点,人工智能安全的一个重要问题是,不友好的人工智能可能比友好的人工智能更容易创建。虽然两者都需要递归优化过程设计上的巨大进步,但友好的人工智能还需要能够使目标结构在自我改进中保持不变(否则人工智能可能把自己转变成不友好的东西),以及一个与人类价值观相一致且不会自动毁灭人类的目标结构。另一方面,不友好的人工智能可以针对任意的目标结构进行优化,而该目标结构不需要在自我修改下保持不变。<ref name="singinst12">[http://singinst.org/upload/CEV.html Coherent Extrapolated Volition, Eliezer S. Yudkowsky, May 2004 ] {{webarchive|url=https://web.archive.org/web/20100815055725/http://singinst.org/upload/CEV.html |date=2010-08-15 }}</ref>{{harvtxt|Bill Hibbard|2014}}提出了一种可以避免若干危险的人工智能设计,这些危险包括自我妄想、<ref name="JAGI2012">{{Citation| journal=Journal of Artificial General Intelligence| year=2012| volume=3| issue=1| title=Model-Based Utility Functions| first=Bill| last=Hibbard| postscript=.| doi=10.2478/v10229-011-0013-5| page=1|arxiv = 1111.3934 |bibcode = 2012JAGI....3....1H | s2cid=8434596}}</ref>意外的工具性行为<ref name="selfawaresystems"/><ref name="AGI-12a">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_56.pdf  Avoiding Unintended AI Behaviors.] Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle. [http://intelligence.org/2012/12/19/december-2012-newsletter/ This paper won the Machine Intelligence Research Institute's 2012 Turing Prize for the Best AGI Safety Paper].</ref>以及奖励生成器被破坏。<ref name="AGI-12a"/>他还讨论了人工智能的社会影响<ref name="JET2008">{{Citation| url=http://jetpress.org/v17/hibbard.htm| journal=Journal of Evolution and Technology| year=2008| volume=17| title=The Technology of Mind and a New Social Contract| first=Bill| last=Hibbard| postscript=.}}</ref>以及对人工智能的测试。<ref name="AGI-12b">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_57.pdf  Decision Support for Safe AI Design|.] Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle.</ref>他2001年出版的著作《[[超级智能机器]]》主张就人工智能开展公众教育,并对人工智能实行公共控制。书中还提出了一个简单的设计,而该设计容易受到奖励生成器被破坏的影响。
    
===Next step of sociobiological evolution社会生物进化的下一步===
 
===Next step of sociobiological evolution社会生物进化的下一步===
第575行: 第575行:  
While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.{{citation needed|date=April 2018}}
 
While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.{{citation needed|date=April 2018}}
   −
虽然技术奇点通常被视为一个突发事件,但一些学者认为目前的变化速度已经符合这种描述
+
虽然技术奇点通常被视为一个突发事件,但一些学者认为目前的变化速度已经符合这种描述。{{citation needed|date=April 2018}}
    
In addition, some argue that we are already in the midst of a [[The Major Transitions in Evolution|major evolutionary transition]] that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence.
 
In addition, some argue that we are already in the midst of a [[The Major Transitions in Evolution|major evolutionary transition]] that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence.
   −
 
+
此外,有人认为,我们已经处在一场融合了技术、生物学和社会的[[进化中的主要转变|重大进化转变]]之中。数字技术已经渗透进人类社会的肌理,人们对它的依赖程度已无可争辩,而且往往攸关生存。
 
  −
 
  −
 
  −
 
  −
此外,有人认为,我们已经处在一场融合了技术、生物学和社会的[[进化中的主要转变|重大进化转变]]之中。数字技术已经渗透进人类社会的肌理,人们对它的依赖程度已无可争辩,而且往往攸关生存。
         
A 2016 article in ''[[Trends in Ecology & Evolution]]'' argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust [[artificial intelligence]] with our lives through [[Anti-lock braking system|antilock braking in cars]] and [[autopilot]]s in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction".
 
A 2016 article in ''[[Trends in Ecology & Evolution]]'' argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust [[artificial intelligence]] with our lives through [[Anti-lock braking system|antilock braking in cars]] and [[autopilot]]s in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction".
   −
 
+
2016年发表在《[[Trends in Ecology & Evolution]]》上的一篇文章认为:“人类已经在拥抱生物与技术的融合。我们醒着的大部分时间都在通过数字媒介渠道交流……我们通过汽车上的[[防抱死制动系统|防抱死制动]]和飞机上的[[自动驾驶仪]]把自己的生命托付给[[人工智能]]……在美国,三分之一的婚姻始于网络,数字算法也在人类的配对结合和繁衍中发挥着作用”。
 
  −
2016年发表在《[[Trends in Ecology & Evolution]]》上的一篇文章认为:“人类已经在拥抱生物与技术的融合。我们醒着的大部分时间都在通过数字媒介渠道交流……我们通过汽车上的[[防抱死制动系统|防抱死制动]]和飞机上的[[自动驾驶仪]]把自己的生命托付给[[人工智能]]……在美国,三分之一的婚姻始于网络,数字算法也在人类的配对结合和繁衍中发挥着作用”。
        第600行: 第593行:  
The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 [[zettabyte]]s in 2014 (5{{e|21}} bytes).{{Citation needed|date=April 2019}}
 
The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 [[zettabyte]]s in 2014 (5{{e|21}} bytes).{{Citation needed|date=April 2019}}
   −
 
+
人类创造的数字信息总量已经达到了与生物圈中生物信息相近的数量级。自20世纪80年代以来,存储的数字信息量大约每2.5年翻一番,2014年达到约5[[zettabyte]](5{{e|21}}字节)。{{Citation needed|date=April 2019}}
 
  −
 
  −
 
  −
人类创造的数字信息总量已经达到了与生物圈中生物信息相近的数量级。自20世纪80年代以来,存储的数字信息量大约每2.5年翻一番,2014年达到约5[[zettabyte]](5{{e|21}}字节)。
        第610行: 第599行:     
在生物学方面,地球上有72亿人,每个人的基因组约有62亿个核苷酸。由于一个字节可以编码四个核苷酸对,地球上所有人类个体的基因组总共可以用大约1{{e|19}}字节来编码。2014年,数字领域存储的信息量是这一数字的500倍(见图)。据估计,地球上所有细胞所含的DNA总量约为5.3{{e|37}}个碱基对,相当于1.325{{e|37}}字节的信息。
 
在生物学方面,地球上有72亿人,每个人的基因组约有62亿个核苷酸。由于一个字节可以编码四个核苷酸对,地球上所有人类个体的基因组总共可以用大约1{{e|19}}字节来编码。2014年,数字领域存储的信息量是这一数字的500倍(见图)。据估计,地球上所有细胞所含的DNA总量约为5.3{{e|37}}个碱基对,相当于1.325{{e|37}}字节的信息。
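The byte counts quoted above are simple unit conversions. A minimal sketch of the arithmetic, in Python and using only the figures given in this paragraph, is shown below for readers who want to verify them; it is illustrative only.

<syntaxhighlight lang="python">
# Back-of-the-envelope check of the figures quoted above.
# All inputs are the numbers given in the paragraph; the "4 per byte" convention
# follows the text's own statement that one byte encodes four nucleotide pairs.

people = 7.2e9                    # humans on Earth
nucleotides_per_genome = 6.2e9    # nucleotides per human genome
per_byte = 4                      # nucleotide (pairs) encoded by one byte

bytes_per_genome = nucleotides_per_genome / per_byte       # ~1.55e9 bytes
all_human_genomes = people * bytes_per_genome               # ~1.1e19 bytes

base_pairs_in_biosphere = 5.3e37  # estimated base pairs in all DNA in all cells
biosphere_bytes = base_pairs_in_biosphere / per_byte        # ~1.325e37 bytes

print(f"All individual human genomes: {all_human_genomes:.2e} bytes")  # ~1.1e+19
print(f"All DNA in the biosphere:     {biosphere_bytes:.3e} bytes")    # ~1.325e+37
</syntaxhighlight>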
  −
  −
      
If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,<ref name="HilbertLopez2011" /> it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years".<ref name="InfoBiosphere2016">{{Cite journal |url=http://escholarship.org/uc/item/38f4b791 |doi=10.1016/j.tree.2015.12.013|pmid=26777788|title=Information in the Biosphere: Biological and Digital Worlds|journal=Trends in Ecology & Evolution|volume=31|issue=3|pages=180–189|year=2016|last1=Kemp|first1=D. J.|last2=Hilbert|first2=M.|last3=Gillings|first3=M. R.}}</ref>
 
If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,<ref name="HilbertLopez2011" /> it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years".<ref name="InfoBiosphere2016">{{Cite journal |url=http://escholarship.org/uc/item/38f4b791 |doi=10.1016/j.tree.2015.12.013|pmid=26777788|title=Information in the Biosphere: Biological and Digital Worlds|journal=Trends in Ecology & Evolution|volume=31|issue=3|pages=180–189|year=2016|last1=Kemp|first1=D. J.|last2=Hilbert|first2=M.|last3=Gillings|first3=M. R.}}</ref>
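The quoted timescale can be sanity-checked by compounding the growth rates given above. The sketch below assumes only the figures cited in this section (roughly 5{{e|21}} bytes stored digitally in 2014 and about 1.325{{e|37}} bytes of DNA information in the biosphere); it is an illustrative calculation, not a reproduction of the cited paper's method.

<syntaxhighlight lang="python">
import math

# Figures quoted in this section (illustrative sketch only).
digital_2014 = 5e21        # ~5 zettabytes of stored digital information in 2014
biosphere_dna = 1.325e37   # byte equivalent of all DNA in all cells on Earth

# A doubling every 2.5 years corresponds to about 32% compound annual growth,
# consistent with the 30-38% range quoted above.
implied_growth = 2 ** (1 / 2.5) - 1
print(f"Implied annual growth from a 2.5-year doubling time: {implied_growth:.0%}")

for growth in (0.30, 0.38):
    years = math.log(biosphere_dna / digital_2014) / math.log(1 + growth)
    print(f"At {growth:.0%} annual growth, parity is reached in about {years:.0f} years")
# -> roughly 135 years at 30% and 110 years at 38% annual growth
</syntaxhighlight>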
第627行: 第613行:  
In February 2009, under the auspices of the [[Association for the Advancement of Artificial Intelligence]] (AAAI), [[Eric Horvitz]] chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire [[autonomy]], and to what degree they could use such abilities to pose threats or hazards.<ref name="nytimes july09" />
 
In February 2009, under the auspices of the [[Association for the Advancement of Artificial Intelligence]] (AAAI), [[Eric Horvitz]] chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire [[autonomy]], and to what degree they could use such abilities to pose threats or hazards.<ref name="nytimes july09" />
   −
2009年2月,在[[人工智能促进协会]](AAAI)的主持下,[[Eric Horvitz]]在加利福尼亚州太平洋格罗夫的Asilomar主持了一次由主要计算机科学家、人工智能研究人员和机器人学家组成的会议。其目的是讨论机器人能够自给自足并能够做出自己决定的假设可能性的潜在影响。他们讨论了计算机和机器人能够在多大程度上获得[[自主]],以及在多大程度上可以利用这些能力构成威胁或危险
+
2009年2月,在[[人工智能促进协会]](AAAI)的主持下,[[Eric Horvitz]]在加利福尼亚州太平洋格罗夫的Asilomar主持了一次由主要计算机科学家、人工智能研究人员和机器人学家组成的会议。其目的是讨论机器人能够自给自足并能够做出自己决定的假设可能性的潜在影响。他们讨论了计算机和机器人能够在多大程度上获得[[自主]],以及在多大程度上可以利用这些能力构成威胁或危险。<ref name="nytimes july09" />
      第634行: 第620行:       −
有些机器已被编程为具有各种形式的半自主能力,包括定位自身电源和选择武器攻击目标的能力。此外,一些[[计算机病毒]]能够躲避清除,按照与会科学家的说法,可以说已经达到了机器智能的“蟑螂”阶段。与会者指出,科幻小说中描述的那种自我意识可能性不大,但也存在其他潜在的危险和陷阱。<ref name="nytimes july09">[https://www.nytimes.com/2009/07/26/science/26robot.html?_r=1&ref=todayspaper Scientists Worry Machines May Outsmart Man] By JOHN MARKOFF, NY Times, July 26, 2009.</ref>
+
有些机器已被编程为具有各种形式的半自主能力,包括定位自身电源和选择武器攻击目标的能力。此外,一些[[计算机病毒]]能够躲避清除,按照与会科学家的说法,可以说已经达到了机器智能的“蟑螂”阶段。与会者指出,科幻小说中描述的那种自我意识可能性不大,但也存在其他潜在的危险和陷阱。<ref name="nytimes july09">[https://www.nytimes.com/2009/07/26/science/26robot.html?_r=1&ref=todayspaper Scientists Worry Machines May Outsmart Man] By JOHN MARKOFF, NY Times, July 26, 2009.</ref>
 
        第641行: 第626行:       −
弗兰克·S·罗宾逊预言,一旦人类造出具有人类水平智能的机器,科学和技术问题将由远远优于人类的智力来处理和解决。他指出,人工系统能够比人类更直接地共享数据,并预测这将形成一个使人类能力相形见绌的超级智能全球网络。<ref name=":0">{{cite magazine |last=Robinson |first=Frank S. |title=The Human Future: Upgrade or Replacement? |magazine=[[The Humanist]] |date=27 June 2013 |url=https://thehumanist.com/magazine/july-august-2013/features/the-human-future-upgrade-or-replacement}}</ref>罗宾逊还讨论了在这样一次智能爆炸之后,未来可能会变得多么不同。其中一个例子是太阳能:地球接收到的太阳能远远多于人类所捕获的,因此捕获更多的太阳能将为文明的发展带来巨大希望。
+
弗兰克·S·罗宾逊预言,一旦人类造出具有人类水平智能的机器,科学和技术问题将由远远优于人类的智力来处理和解决。他指出,人工系统能够比人类更直接地共享数据,并预测这将形成一个使人类能力相形见绌的超级智能全球网络。<ref name=":0">{{cite magazine |last=Robinson |first=Frank S. |title=The Human Future: Upgrade or Replacement? |magazine=[[The Humanist]] |date=27 June 2013 |url=https://thehumanist.com/magazine/july-august-2013/features/the-human-future-upgrade-or-replacement}}</ref>罗宾逊还讨论了在这样一次智能爆炸之后,未来可能会变得多么不同。其中一个例子是太阳能:地球接收到的太阳能远远多于人类所捕获的,因此捕获更多的太阳能将为文明的发展带来巨大希望。
    
==Hard vs. soft takeoff硬起飞与软起飞==
 
==Hard vs. soft takeoff硬起飞与软起飞==
第647行: 第632行:  
[[File:Recursive self-improvement.svg|thumb|upright=1.6|In this sample recursive self-improvement scenario, humans modifying an AI's architecture would be able to double its performance every three years through, for example, 30 generations before exhausting all feasible improvements (left). If instead the AI is smart enough to modify its own architecture as well as human researchers can, its time required to complete a redesign halves with each generation, and it progresses all 30 feasible generations in six years (right).<ref name="yudkowsky-global-risk">[[Eliezer Yudkowsky]]. "Artificial intelligence as a positive and negative factor in global risk." Global catastrophic risks (2008).</ref>]]
 
[[File:Recursive self-improvement.svg|thumb|upright=1.6|In this sample recursive self-improvement scenario, humans modifying an AI's architecture would be able to double its performance every three years through, for example, 30 generations before exhausting all feasible improvements (left). If instead the AI is smart enough to modify its own architecture as well as human researchers can, its time required to complete a redesign halves with each generation, and it progresses all 30 feasible generations in six years (right).<ref name="yudkowsky-global-risk">[[Eliezer Yudkowsky]]. "Artificial intelligence as a positive and negative factor in global risk." Global catastrophic risks (2008).</ref>]]
   −
[[文件:Recursive self-improvement.svg|thumb|upright=1.6|在这个示例性的递归自我改进情景中,由人类来修改人工智能的体系结构,大约每三年才能将其性能提高一倍,例如要经过30代之后才会用尽所有可行的改进(左图)。相反,如果人工智能足够聪明,能够像人类研究人员一样修改自己的架构,那么它完成一次重新设计所需的时间会随每一代减半,只用6年就走完了全部30代可行的改进(右图)。<ref name="yudkowsky-global-risk">[[Eliezer Yudkowsky]]. "Artificial intelligence as a positive and negative factor in global risk." Global catastrophic risks (2008).</ref>]]
+
[[文件:Recursive self-improvement.svg|thumb|upright=1.6|在这个示例性的递归自我改进情景中,由人类来修改人工智能的体系结构,大约每三年才能将其性能提高一倍,例如要经过30代之后才会用尽所有可行的改进(左图)。相反,如果人工智能足够聪明,能够像人类研究人员一样修改自己的架构,那么它完成一次重新设计所需的时间会随每一代减半,只用6年就走完了全部30代可行的改进(右图)。<ref name="yudkowsky-global-risk">[[Eliezer Yudkowsky]]. "Artificial intelligence as a positive and negative factor in global risk." Global catastrophic risks (2008).</ref>]]
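The two timescales in this caption follow from elementary arithmetic: thirty doublings at three years each take 90 years, whereas a redesign time that halves with every generation forms a geometric series summing to roughly six years. A minimal sketch using only the caption's own numbers:

<syntaxhighlight lang="python">
# Arithmetic behind the two scenarios in the figure caption (illustrative only).

generations = 30

# Humans redesign the AI: a fixed 3 years per performance doubling.
human_led_years = generations * 3.0                      # 90 years

# The AI redesigns itself: each redesign halves the time needed for the next,
# starting from 3 years, giving the geometric series 3 + 1.5 + 0.75 + ...
self_improving_years = sum(3.0 / 2 ** k for k in range(generations))

print(human_led_years)                  # 90.0
print(round(self_improving_years, 6))   # ~6.0 -> about six years in total
</syntaxhighlight>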
 
      
In a hard takeoff scenario, an AGI rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals. In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.<ref>Bugaj, Stephan Vladimir, and Ben Goertzel. "Five ethical imperatives and their implications for human-AGI interaction." Dynamical Psychology (2007).</ref><ref>Sotala, Kaj, and Roman V. Yampolskiy. "Responses to catastrophic AGI risk: a survey." Physica Scripta 90.1 (2014): 018001.</ref>
 
In a hard takeoff scenario, an AGI rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals. In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.<ref>Bugaj, Stephan Vladimir, and Ben Goertzel. "Five ethical imperatives and their implications for human-AGI interaction." Dynamical Psychology (2007).</ref><ref>Sotala, Kaj, and Roman V. Yampolskiy. "Responses to catastrophic AGI risk: a survey." Physica Scripta 90.1 (2014): 018001.</ref>
   −
 
+
在硬起飞的情景中,AGI迅速自我完善并“掌控”世界(也许只需几个小时),速度快到来不及进行重大的人为纠错,也来不及对AGI的目标进行逐步调整。在软起飞的情景中,AGI最终仍然远比人类强大,但以一种与人类相仿的节奏(也许是几十年的量级)进行,在这样的时间尺度上,持续的人类互动和纠正可以有效地引导AGI的发展。<ref>Bugaj, Stephan Vladimir, and Ben Goertzel. "Five ethical imperatives and their implications for human-AGI interaction." Dynamical Psychology (2007).</ref><ref>Sotala, Kaj, and Roman V. Yampolskiy. "Responses to catastrophic AGI risk: a survey." Physica Scripta 90.1 (2014): 018001.</ref>
 
  −
在硬起飞的情景中,AGI迅速自我完善并“掌控”世界(也许只需几个小时),速度快到来不及进行重大的人为纠错,也来不及对AGI的目标进行逐步调整。在软起飞的情景中,AGI最终仍然远比人类强大,但以一种与人类相仿的节奏(也许是几十年的量级)进行,在这样的时间尺度上,持续的人类互动和纠正可以有效地引导AGI的发展。<ref>Bugaj, Stephan Vladimir, and Ben Goertzel. "Five ethical imperatives and their implications for human-AGI interaction." Dynamical Psychology (2007).</ref><ref>Sotala, Kaj, and Roman V. Yampolskiy. "Responses to catastrophic AGI risk: a survey." Physica Scripta 90.1 (2014): 018001.</ref>
         
[[Ramez Naam]] argues against a hard takeoff. He has pointed that we already see recursive self-improvement by superintelligences, such as corporations. [[Intel]], for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of [[Moore's law]].<ref name=Naam2014Further>{{cite web|last=Naam|first=Ramez|title=The Singularity Is Further Than It Appears|url=http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html|accessdate=16 May 2014|year=2014}}</ref> Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably ''more'' than twice as hard as creating a mind of intelligence 1."<ref name=Naam2014Ascend>{{cite web|last=Naam|first=Ramez|title=Why AIs Won't Ascend in the Blink of an Eye - Some Math|url=http://www.antipope.org/charlie/blog-static/2014/02/why-ais-wont-ascend-in-blink-of-an-eye.html|accessdate=16 May 2014|year=2014}}</ref>
 
[[Ramez Naam]] argues against a hard takeoff. He has pointed that we already see recursive self-improvement by superintelligences, such as corporations. [[Intel]], for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of [[Moore's law]].<ref name=Naam2014Further>{{cite web|last=Naam|first=Ramez|title=The Singularity Is Further Than It Appears|url=http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html|accessdate=16 May 2014|year=2014}}</ref> Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably ''more'' than twice as hard as creating a mind of intelligence 1."<ref name=Naam2014Ascend>{{cite web|last=Naam|first=Ramez|title=Why AIs Won't Ascend in the Blink of an Eye - Some Math|url=http://www.antipope.org/charlie/blog-static/2014/02/why-ais-wont-ascend-in-blink-of-an-eye.html|accessdate=16 May 2014|year=2014}}</ref>
   −
[[Ramez Naam]]反对硬起飞的说法。他指出,我们已经看到企业等超级智能体在进行递归自我改进。例如,[[Intel]]拥有“数万人的集体脑力,可能还有数百万个CPU核心,用来……设计更好的CPU!”然而,这并没有带来硬起飞;相反,它以[[摩尔定律]]的形式带来了软起飞。<ref name=Naam2014Further>{{cite web|last=Naam|first=Ramez|title=The Singularity Is Further Than It Appears|url=http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html|accessdate=16 May 2014|year=2014}}</ref> Naam进一步指出,更高智能的计算复杂度可能远超线性增长,因此“创造一个智能为2的头脑,难度很可能比创造一个智能为1的头脑高出一倍以上。”<ref name=Naam2014Ascend>{{cite web|last=Naam|first=Ramez|title=Why AIs Won't Ascend in the Blink of an Eye - Some Math|url=http://www.antipope.org/charlie/blog-static/2014/02/why-ais-wont-ascend-in-blink-of-an-eye.html|accessdate=16 May 2014|year=2014}}</ref>
+
[[Ramez Naam]]反对硬起飞的说法。他指出,我们已经看到企业等超级智能体在进行递归自我改进。例如,[[Intel]]拥有“数万人的集体脑力,可能还有数百万个CPU核心,用来……设计更好的CPU!”然而,这并没有带来硬起飞;相反,它以[[摩尔定律]]的形式带来了软起飞。<ref name=Naam2014Further>{{cite web|last=Naam|first=Ramez|title=The Singularity Is Further Than It Appears|url=http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html|accessdate=16 May 2014|year=2014}}</ref> Naam进一步指出,更高智能的计算复杂度可能远超线性增长,因此“创造一个智能为2的头脑,难度很可能比创造一个智能为1的头脑高出一倍以上。”<ref name=Naam2014Ascend>{{cite web|last=Naam|first=Ramez|title=Why AIs Won't Ascend in the Blink of an Eye - Some Math|url=http://www.antipope.org/charlie/blog-static/2014/02/why-ais-wont-ascend-in-blink-of-an-eye.html|accessdate=16 May 2014|year=2014}}</ref>
    
[[J. Storrs Hall]] believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the ''starting point'' of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.<ref name=Hall2008>{{cite journal|last=Hall|first=J. Storrs|title=Engineering Utopia|journal=Artificial General Intelligence, 2008: Proceedings of the First AGI Conference|date=2008|pages=460–467|url=http://www.agiri.org/takeoff_hall.pdf|accessdate=16 May 2014}}</ref>
 
[[J. Storrs Hall]] believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the ''starting point'' of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.<ref name=Hall2008>{{cite journal|last=Hall|first=J. Storrs|title=Engineering Utopia|journal=Artificial General Intelligence, 2008: Proceedings of the First AGI Conference|date=2008|pages=460–467|url=http://www.agiri.org/takeoff_hall.pdf|accessdate=16 May 2014}}</ref>
   −
[[J. Storrs Hall]]认为,“许多比较常见的一夜之间硬起飞的情景都是循环论证的,它们似乎在自我改进过程的‘起点’就假定了超人的能力”,这样人工智能才能做出起飞所需的那种巨大的、跨领域的全面改进。霍尔认为,一个初出茅庐的人工智能与其完全靠自己递归地改进自己的硬件、软件和基础设施,不如专注于它最擅长的一个领域,然后在市场上购买其余的组件,因为市场上产品的质量在不断提高,而这个人工智能将很难跟上世界其他地方所使用的尖端技术。<ref name=Hall2008>{{cite journal|last=Hall|first=J. Storrs|title=Engineering Utopia|journal=Artificial General Intelligence, 2008: Proceedings of the First AGI Conference|date=2008|pages=460–467|url=http://www.agiri.org/takeoff_hall.pdf|accessdate=16 May 2014}}</ref>
+
[[J. Storrs Hall]]认为,“许多比较常见的一夜之间硬起飞的情景都是循环论证的,它们似乎在自我改进过程的‘起点’就假定了超人的能力”,这样人工智能才能做出起飞所需的那种巨大的、跨领域的全面改进。霍尔认为,一个初出茅庐的人工智能与其完全靠自己递归地改进自己的硬件、软件和基础设施,不如专注于它最擅长的一个领域,然后在市场上购买其余的组件,因为市场上产品的质量在不断提高,而这个人工智能将很难跟上世界其他地方所使用的尖端技术。<ref name=Hall2008>{{cite journal|last=Hall|first=J. Storrs|title=Engineering Utopia|journal=Artificial General Intelligence, 2008: Proceedings of the First AGI Conference|date=2008|pages=460–467|url=http://www.agiri.org/takeoff_hall.pdf|accessdate=16 May 2014}}</ref>
       
[[Ben Goertzel]] agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a hard five minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. Goerzel refers to this scenario as a "semihard takeoff".<ref name="Goertzel2014">{{cite news|last1=Goertzel|first1=Ben|title=Superintelligence — Semi-hard Takeoff Scenarios|url=http://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/|accessdate=25 October 2014|agency=h+ Magazine|date=26 Sep 2014}}</ref>
 
[[Ben Goertzel]] agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a hard five minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. Goerzel refers to this scenario as a "semihard takeoff".<ref name="Goertzel2014">{{cite news|last1=Goertzel|first1=Ben|title=Superintelligence — Semi-hard Takeoff Scenarios|url=http://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/|accessdate=25 October 2014|agency=h+ Magazine|date=26 Sep 2014}}</ref>
   −
 
+
[[Ben Goertzel]]同意霍尔的看法,即一个新的人类水平的人工智能不妨利用其智能来积累财富。人工智能的才能可能会促使公司和政府将其软件推广到整个社会。戈尔策尔对五分钟内完成的硬起飞持怀疑态度,但他推测,在五年左右的时间里从人类水平起飞到超人水平是合理的。戈尔策尔把这种情形称为“半硬起飞”。<ref name="Goertzel2014">{{cite news|last1=Goertzel|first1=Ben|title=Superintelligence — Semi-hard Takeoff Scenarios|url=http://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/|accessdate=25 October 2014|agency=h+ Magazine|date=26 Sep 2014}}</ref>
 
  −
[[Ben Goertzel]]同意霍尔的看法,即一个新的人类水平的人工智能不妨利用其智能来积累财富。人工智能的才能可能会促使公司和政府将其软件推广到整个社会。戈尔策尔对五分钟内完成的硬起飞持怀疑态度,但他推测,在五年左右的时间里从人类水平起飞到超人水平是合理的。戈尔策尔把这种情形称为“半硬起飞”。<ref name="Goertzel2014">{{cite news|last1=Goertzel|first1=Ben|title=Superintelligence — Semi-hard Takeoff Scenarios|url=http://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/|accessdate=25 October 2014|agency=h+ Magazine|date=26 Sep 2014}}</ref>
         
[[Max More]] disagrees, arguing that if there were only a few superfast human-level AIs, that they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."<ref name=More>{{cite web|last1=More|first1=Max|title=Singularity Meets Economy|url=http://hanson.gmu.edu/vc.html#more|accessdate=10 November 2014}}</ref>
 
[[Max More]] disagrees, arguing that if there were only a few superfast human-level AIs, that they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."<ref name=More>{{cite web|last1=More|first1=Max|title=Singularity Meets Economy|url=http://hanson.gmu.edu/vc.html#more|accessdate=10 November 2014}}</ref>
   −
 
+
[[Max More]]不同意这一观点。他认为,如果只有少数超高速的人类水平人工智能,它们并不会从根本上改变世界,因为它们仍然要依赖其他人来完成事情,也仍然受到人类认知方面的限制。即使所有超高速人工智能都致力于智能增强,也不清楚为什么它们会以一种不连续的方式,在创造超人智能方面比现有的人类认知科学家做得更好,尽管进展速度会加快。莫尔进一步指出,超级智能不会在一夜之间改变世界:超级智能需要与现有的、缓慢的人类系统打交道,才能对世界产生实际影响。“对协作、对组织、对把想法落实为实际变化的需求,将确保所有旧规则不会在一夜之间、甚至在几年之内被抛弃。”<ref name=More>{{cite web|last1=More|first1=Max|title=Singularity Meets Economy|url=http://hanson.gmu.edu/vc.html#more|accessdate=10 November 2014}}</ref>
 
  −
[[Max More]]不同意这一观点,他认为,如果只有少数超快人类水平的人工智能,它们不会从根本上改变世界,因为它们仍将依赖于其他人来完成任务,并且仍然会受到人类认知的限制。即使所有的超高速人工智能都致力于智能增强,但目前还不清楚为什么它们在产生超人类智能方面比现有的人类认知科学家做得更好,尽管进展速度会加快。更进一步指出,超级智能不会在一夜之间改变世界:超级智能需要与现有的、缓慢的人类系统进行接触,以完成对世界的物理影响。”合作、组织和将想法付诸实际变革的需要将确保所有旧规则不会在一夜之间甚至几年内被废除。”
      
== Immortality 永生==
 
== Immortality 永生==
第683行: 第661行:  
In his 2005 book, ''[[The Singularity is Near]]'', [[Ray Kurzweil|Kurzweil]] suggests that medical advances would allow people to protect their bodies from the effects of aging, making the [[Life extension|life expectancy limitless]]. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.<ref>''The Singularity Is Near'', p.&nbsp;215.</ref> Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests [[somatic gene therapy]]; after synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.<ref>''The Singularity is Near'', p.&nbsp;216.</ref>
 
In his 2005 book, ''[[The Singularity is Near]]'', [[Ray Kurzweil|Kurzweil]] suggests that medical advances would allow people to protect their bodies from the effects of aging, making the [[Life extension|life expectancy limitless]]. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.<ref>''The Singularity Is Near'', p.&nbsp;215.</ref> Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests [[somatic gene therapy]]; after synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.<ref>''The Singularity is Near'', p.&nbsp;216.</ref>
   −
在他2005年出版的《奇点就在眼前》一书中,库兹韦尔提出,医学的进步将使人们能够保护自己的身体免受衰老的影响,使预期寿命不再受限。库兹韦尔认为,医学上的技术进步将使我们能够不断修复和更换身体中有缺陷的部件,把寿命延长到一个不确定的年龄。<ref>《奇点就在附近》,第215页。</ref>库兹韦尔还通过讨论当前生物工程的进展进一步支持他的论点。库兹韦尔提出了[[体细胞基因疗法]];在合成出携带特定遗传信息的病毒之后,下一步将是把这项技术应用于基因治疗,用合成的基因取代人类的DNA。<ref>《奇点就在附近》,第216页。</ref>
+
在他2005年出版的《奇点就在眼前》一书中,库兹韦尔提出,医学的进步将使人们能够保护自己的身体免受衰老的影响,使预期寿命不再受限。库兹韦尔认为,医学上的技术进步将使我们能够不断修复和更换身体中有缺陷的部件,把寿命延长到一个不确定的年龄。<ref>''The Singularity Is Near'', p.&nbsp;215.</ref>库兹韦尔还通过讨论当前生物工程的进展进一步支持他的论点。库兹韦尔提出了[[体细胞基因疗法]];在合成出携带特定遗传信息的病毒之后,下一步将是把这项技术应用于基因治疗,用合成的基因取代人类的DNA。<ref>''The Singularity is Near'', p.&nbsp;216.</ref>
       
[[K. Eric Drexler]], one of the founders of [[nanotechnology]], postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical [[biological machine]]s, in his 1986 book ''[[Engines of Creation]]''.
 
[[K. Eric Drexler]], one of the founders of [[nanotechnology]], postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical [[biological machine]]s, in his 1986 book ''[[Engines of Creation]]''.
  −
      
[[K.Eric Drexler]],[[纳米技术]]的创始人之一,在他1986年的著作“[[创造的引擎]]”中,假设了细胞修复设备,包括在细胞内运行并利用目前假设的[[生物机器]]的设备。
 
[[K.Eric Drexler]],[[纳米技术]]的创始人之一,在他1986年的著作“[[创造的引擎]]”中,假设了细胞修复设备,包括在细胞内运行并利用目前假设的[[生物机器]]的设备。
第697行: 第673行:       −
据[[Richard Feynman]]所说,是他以前的研究生和合作者[[Albert Hibbs]]最早(大约在1959年)向他提出了将费曼设想的微型机械用于“医疗”的想法。Hibbs认为,有朝一日,某些修理机器的尺寸可能缩小到理论上可以(如费曼所说)“[[分子机器|吞下医生]]”的程度。这个想法后来被写进了费曼1959年的文章《[[在底部有很多空间]]》。<ref>{{cite web|url = http://www.its.caltech.edu/~feynman/plenty.html|title = There's Plenty of Room at the Bottom|first = Richard P.|last = Feynman |author-link = Richard Feynman|date = December 1959|url-status = dead|archive-url = https://web.archive.org/web/20100211190050/http://www.its.caltech.edu/~feynman/plenty.html|archive-date = 2010-02-11}}</ref>
+
据[[Richard Feynman]]所说,是他以前的研究生和合作者[[Albert Hibbs]]最早(大约在1959年)向他提出了将费曼设想的微型机械用于“医疗”的想法。Hibbs认为,有朝一日,某些修理机器的尺寸可能缩小到理论上可以(如费曼所说)“[[分子机器|吞下医生]]”的程度。这个想法后来被写进了费曼1959年的文章《[[在底部有很多空间]]》。<ref>{{cite web|url = http://www.its.caltech.edu/~feynman/plenty.html|title = There's Plenty of Room at the Bottom|first = Richard P.|last = Feynman |author-link = Richard Feynman|date = December 1959|url-status = dead|archive-url = https://web.archive.org/web/20100211190050/http://www.its.caltech.edu/~feynman/plenty.html|archive-date = 2010-02-11}}</ref>
       
Beyond merely extending the operational life of the physical body, [[Jaron Lanier]] argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious".<ref>{{cite book |title = You Are Not a Gadget: A Manifesto |last = Lanier |first = Jaron |author-link = Jaron Lanier |publisher = [[Alfred A. Knopf]] |year = 2010 |isbn = 978-0307269645 |location = New York, NY |page = [https://archive.org/details/isbn_9780307269645/page/26 26] |url-access = registration |url = https://archive.org/details/isbn_9780307269645 }}</ref>
 
Beyond merely extending the operational life of the physical body, [[Jaron Lanier]] argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious".<ref>{{cite book |title = You Are Not a Gadget: A Manifesto |last = Lanier |first = Jaron |author-link = Jaron Lanier |publisher = [[Alfred A. Knopf]] |year = 2010 |isbn = 978-0307269645 |location = New York, NY |page = [https://archive.org/details/isbn_9780307269645/page/26 26] |url-access = registration |url = https://archive.org/details/isbn_9780307269645 }}</ref>
   −
 
+
除了仅仅延长肉体的运作寿命之外,[[Jaron Lanier]]还主张一种被称为“数字提升”(Digital Ascension)的永生形式,即“人在肉体上死去,被上传到计算机中并保持有意识”。<ref>{{cite book |title = You Are Not a Gadget: A Manifesto |last = Lanier |first = Jaron |author-link = Jaron Lanier |publisher = [[Alfred A. Knopf]] |year = 2010 |isbn = 978-0307269645 |location = New York, NY |page = [https://archive.org/details/isbn_9780307269645/page/26 26] |url-access = registration |url = https://archive.org/details/isbn_9780307269645 }}</ref>
 
  −
除了仅仅延长物质身体的运行寿命之外,[[Jaron Lanier]]还主张一种称为“数字提升”的不朽形式,即“人死在肉体上,被上传到电脑里,保持清醒”。
      
==History of the concept概念史==
 
==History of the concept概念史==
第710行: 第684行:  
A paper by Mahendra Prasad, published in ''[[AI Magazine]]'', asserts that the 18th-century mathematician [[Marquis de Condorcet]] was the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity.<ref>{{Cite journal|last=Prasad|first=Mahendra|year=2019|title=Nicolas de Condorcet and the First Intelligence Explosion Hypothesis|journal=AI Magazine|volume=40|issue=1|pages=29–33|doi=10.1609/aimag.v40i1.2855}}</ref>
 
A paper by Mahendra Prasad, published in ''[[AI Magazine]]'', asserts that the 18th-century mathematician [[Marquis de Condorcet]] was the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity.<ref>{{Cite journal|last=Prasad|first=Mahendra|year=2019|title=Nicolas de Condorcet and the First Intelligence Explosion Hypothesis|journal=AI Magazine|volume=40|issue=1|pages=29–33|doi=10.1609/aimag.v40i1.2855}}</ref>
   −
 
+
Mahendra Prasad在“[[人工智能杂志]]”上发表的一篇论文断言,18世纪的数学家[[Marquis de Condorcet]]是第一个对智能爆炸及其对人类影响进行假设和数学建模的人。<ref>{{Cite journal|last=Prasad|first=Mahendra|year=2019|title=Nicolas de Condorcet and the First Intelligence Explosion Hypothesis|journal=AI Magazine|volume=40|issue=1|pages=29–33|doi=10.1609/aimag.v40i1.2855}}</ref>
    
An early description of the idea was made in [[John Wood Campbell Jr.]]'s 1932 short story "The last evolution".
 
An early description of the idea was made in [[John Wood Campbell Jr.]]'s 1932 short story "The last evolution".
第718行: 第692行:  
In his 1958 obituary for [[John von Neumann]], Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."<ref name=mathematical/>
 
In his 1958 obituary for [[John von Neumann]], Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."<ref name=mathematical/>
   −
乌兰姆在1958年为[[约翰·冯·诺依曼]]写的讣告中,回忆了与冯·诺依曼的一次对话:“技术的不断进步和人类生活方式的变化,这使我们似乎接近了种族历史上某些基本的奇点,超出了这些奇点,人类的事务就不能继续下去了。”<ref name=mathematical/>
+
乌兰姆在1958年为[[约翰·冯·诺依曼]]写的讣告中,回忆了与冯·诺依曼的一次对话:“技术的不断进步和人类生活方式的变化,这使我们似乎接近了种族历史上某些基本的奇点,超出了这些奇点,人类的事务就不能继续下去了。”<ref name=mathematical/>
    
In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence.
 
In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence.
   −
1965年,古德写了一篇文章,假设机器智能的递归自我改进是“智能爆炸”。
+
1965年,古德撰文提出了机器智能通过递归式自我改进而产生“智能爆炸”的设想。
    
In 1981, [[Stanisław Lem]] published his [[science fiction]] novel ''[[Golem XIV]]''. It describes a military AI computer (Golem XIV) who obtains consciousness and starts to increase his own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirement because it finds them lacking internal logical consistency.
 
In 1981, [[Stanisław Lem]] published his [[science fiction]] novel ''[[Golem XIV]]''. It describes a military AI computer (Golem XIV) who obtains consciousness and starts to increase his own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirement because it finds them lacking internal logical consistency.
第729行: 第703行:     
In 1983, [[Vernor Vinge]] greatly popularized Good's intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of ''[[Omni (magazine)|Omni]]'' magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" in a way that was specifically tied to the creation of intelligent machines:<ref name="google4"/><ref name="technological"/>
 
In 1983, [[Vernor Vinge]] greatly popularized Good's intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of ''[[Omni (magazine)|Omni]]'' magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" in a way that was specifically tied to the creation of intelligent machines:<ref name="google4"/><ref name="technological"/>
  −
   
1983年,[[Vernor Vinge]]在多篇文章中极大地普及了古德的智能爆炸概念,并于1983年1月出版的《[[Omni (magazine)|Omni]]》杂志上首次以印刷形式论及这一主题。在这篇评论文章中,文奇似乎是第一个以与智能机器的创造明确挂钩的方式使用“奇点”一词的人:<ref name="google4"/><ref name="technological"/>
 
1983年,[[Vernor Vinge]]在多篇文章中极大地普及了古德的智能爆炸概念,并于1983年1月出版的《[[Omni (magazine)|Omni]]》杂志上首次以印刷形式论及这一主题。在这篇评论文章中,文奇似乎是第一个以与智能机器的创造明确挂钩的方式使用“奇点”一词的人:<ref name="google4"/><ref name="technological"/>
    
{{quote|We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.}}
 
{{quote|We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.}}
   −
{{quote|我们很快就会创造出比我们自己更强大的智能。当这种情况发生时,人类历史将到达某种奇点,那是一种如同黑洞中心打结的时空一样难以看透的智识转变,世界将远远超出我们的理解。我相信,这种奇点已经萦绕在许多科幻作家的心头。它使得对星际未来进行现实的推断成为不可能。要写一个设定在一个多世纪之后的故事,中间就需要一场核战争……这样世界才仍然是可以理解的。}}
+
{{quote|我们很快就会创造出比我们自己更强大的智能。当这种情况发生时,人类历史将到达某种奇点,那是一种如同黑洞中心打结的时空一样难以看透的智识转变,世界将远远超出我们的理解。我相信,这种奇点已经萦绕在许多科幻作家的心头。它使得对星际未来进行现实的推断成为不可能。要写一个设定在一个多世纪之后的故事,中间就需要一场核战争……这样世界才仍然是可以理解的。}}
 
  −
  −
 
  −
      
In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher [[Ray Solomonoff]] articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs take four years to double their own speed, then two years, then one year and so on, their capabilities increase infinitely in finite time.<ref name=chalmers /><ref name="std"/>
 
In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher [[Ray Solomonoff]] articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs take four years to double their own speed, then two years, then one year and so on, their capabilities increase infinitely in finite time.<ref name=chalmers /><ref name="std"/>
第755行: 第723行:  
In 2000, [[Bill Joy]], a prominent technologist and a co-founder of [[Sun Microsystems]], voiced concern over the potential dangers of the singularity.<ref name="JoyFuture"/>
 
In 2000, [[Bill Joy]], a prominent technologist and a co-founder of [[Sun Microsystems]], voiced concern over the potential dangers of the singularity.<ref name="JoyFuture"/>
   −
 
+
在2000年,[[Bill Joy]],一位著名的技术专家和[[Sun Microsystems]]的联合创始人,表达了对奇点的潜在危险的担忧。<ref name="JoyFuture"/>
 
  −
在2000年,[[Bill Joy]],一位著名的技术专家和[[Sun Microsystems]]的联合创始人,表达了对奇点的潜在危险的担忧
         
In 2005, Kurzweil published ''[[The Singularity is Near]]''. Kurzweil's publicity campaign included an appearance on ''[[The Daily Show with Jon Stewart]]''.<ref name="episode"/>
 
In 2005, Kurzweil published ''[[The Singularity is Near]]''. Kurzweil's publicity campaign included an appearance on ''[[The Daily Show with Jon Stewart]]''.<ref name="episode"/>
   −
2005年,库兹韦尔发表了“[[奇点就在附近]]”。库兹韦尔的宣传活动包括在“[[The Daily Show with Jon Stewart]]”上露面<ref name="episode"/>
+
2005年,库兹韦尔出版了《[[奇点临近]]》。库兹韦尔的宣传活动包括在《[[乔恩·斯图尔特的每日秀]]》上露面。<ref name="episode"/>
    
| first = I. J.
 
| first = I. J.
第770行: 第736行:  
In 2007, [[Eliezer Yudkowsky]] suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting.<ref name="yudkowsky.net"/><ref>Sandberg, Anders. "An overview of models of technological singularity." Roadmaps to AGI and the Future of AGI Workshop, Lugano, Switzerland, March. Vol. 8. 2010.</ref> For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.<ref name="yudkowsky.net"/>
 
In 2007, [[Eliezer Yudkowsky]] suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting.<ref name="yudkowsky.net"/><ref>Sandberg, Anders. "An overview of models of technological singularity." Roadmaps to AGI and the Future of AGI Workshop, Lugano, Switzerland, March. Vol. 8. 2010.</ref> For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.<ref name="yudkowsky.net"/>
   −
2007年,[[Eliezer Yudkowsky]]指出,人们赋予“奇点”的许多不同定义彼此并不相容,而非相互支持。<ref name="yudkowsky.net"/><ref>Sandberg, Anders. "An overview of models of technological singularity." Roadmaps to AGI and the Future of AGI Workshop, Lugano, Switzerland, March. Vol. 8. 2010.</ref>例如,库兹韦尔把当前的技术轨迹外推到自我改进的人工智能或超人智能出现之后;尤德科夫斯基认为,这与I. J. 古德提出的智能不连续跃升以及文奇关于不可预测性的论点都存在矛盾。<ref name="yudkowsky.net"/>
+
2007年,[[Eliezer Yudkowsky]]指出,人们赋予“奇点”的许多不同定义彼此并不相容,而非相互支持。<ref name="yudkowsky.net"/><ref>Sandberg, Anders. "An overview of models of technological singularity." Roadmaps to AGI and the Future of AGI Workshop, Lugano, Switzerland, March. Vol. 8. 2010.</ref>例如,库兹韦尔把当前的技术轨迹外推到自我改进的人工智能或超人智能出现之后;尤德科夫斯基认为,这与I. J. 古德提出的智能不连续跃升以及文奇关于不可预测性的论点都存在矛盾。<ref name="yudkowsky.net"/>
    
| last = Good
 
| last = Good
第784行: 第750行:  
In 2009, Kurzweil and [[X-Prize]] founder [[Peter Diamandis]] announced the establishment of [[Singularity University]], a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."<ref name="singularityu"/> Funded by [[Google]], [[Autodesk]], [[ePlanet Ventures]], and a group of [[High tech|technology industry]] leaders, Singularity University is based at [[NASA]]'s [[Ames Research Center]] in [[Mountain View, California|Mountain View]], [[California]]. The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.
 
In 2009, Kurzweil and [[X-Prize]] founder [[Peter Diamandis]] announced the establishment of [[Singularity University]], a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."<ref name="singularityu"/> Funded by [[Google]], [[Autodesk]], [[ePlanet Ventures]], and a group of [[High tech|technology industry]] leaders, Singularity University is based at [[NASA]]'s [[Ames Research Center]] in [[Mountain View, California|Mountain View]], [[California]]. The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.
   −
2009年,库兹韦尔与[[X-Prize]]创始人[[Peter Diamandis]]宣布成立[[奇点大学]],这是一所未经认证的私立机构,其宣称的使命是“教育、激励并赋能领导者,运用指数型技术来应对人类的重大挑战”。<ref name="singularityu"/>奇点大学由[[Google]]、[[Autodesk]]、[[ePlanet Ventures]]以及一批[[高科技|技术产业]]领袖资助,总部设在[[NASA]]位于加利福尼亚州山景城([[Mountain View, California]])的[[Ames研究中心]]。这家非营利组织每年夏季举办为期十周的研究生课程,涵盖十个不同的技术及相关方向,并全年举办一系列高管课程。
+
2009年,库兹韦尔与[[X-Prize]]创始人[[Peter Diamandis]]宣布成立[[奇点大学]],这是一所未经认证的私立机构,其宣称的使命是“教育、激励并赋能领导者,运用指数型技术来应对人类的重大挑战”。<ref name="singularityu"/>奇点大学由[[Google]]、[[Autodesk]]、[[ePlanet Ventures]]以及一批[[高科技|技术产业]]领袖资助,总部设在[[NASA]]位于加利福尼亚州山景城([[Mountain View, California]])的[[Ames研究中心]]。这家非营利组织每年夏季举办为期十周的研究生课程,涵盖十个不同的技术及相关方向,并全年举办一系列高管课程。
    
| chapter = Speculations Concerning the First Ultraintelligent Machine
 
| chapter = Speculations Concerning the First Ultraintelligent Machine