[[Paul Allen]] argued the opposite of accelerating returns, the complexity brake;<ref name="Allen"/> the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress.  A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by [[Joseph Tainter]] in his ''The Collapse of Complex Societies'',<ref name="university"/> a law of [[diminishing returns]]. The number of patents per thousand peaked in the period from 1850 to 1900, and has been declining since.<ref name="technological14"/><!--[Previous comment: is this from 'Collapse of Complex Societies' or some other source? Perhaps this refers to Jonathan Huebner's patent analysis mentioned in the earlier paragraph? If so, would be better to integrate this part with that paragraph, since the earlier paragraph mentions that Huebner's analysis has been criticized whereas this paragraph just seems to present it as fact --> The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse".
保罗·艾伦 Paul Allen 提出了与加速回报相反的观点,即复杂性制动:科学在理解智能方面取得的进展越多,取得进一步进展就越困难。<ref name="Allen"/>一项对专利数量的研究表明,人类的创造力并没有表现出加速回报;事实上,正如 Joseph Tainter 在其《复杂社会的崩溃 The Collapse of Complex Societies》<ref name="university"/>中所指出的那样,它遵循收益递减定律 a law of diminishing returns。每千人专利数量在1850年至1900年期间达到顶峰,此后一直在下降。<ref name="technological14"/>复杂性的增长最终会自我限制,并导致广泛的“一般系统崩溃 general systems collapse”。
 
      
[[Jaron Lanier]] refutes the idea that the Singularity is inevitable. He states: "I do not think the technology is creating itself. It's not an autonomous process."<ref name="lanier">{{cite web |author=Jaron Lanier |title=Who Owns the Future? |work=New York: Simon & Schuster |date=2013 |url=http://www.epubbud.com/read.php?g=JCB8D9LA&tocp=59}}</ref> He goes on to assert: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on ''not'' emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics."<ref name="lanier" />
 
Jaron Lanier 驳斥了奇点不可避免的观点。他说:“我不认为这项技术在自我创造。这不是一个自主的过程。”<ref name="lanier">{{cite web |author=Jaron Lanier |title=Who Owns the Future? |work=New York: Simon & Schuster |date=2013 |url=http://www.epubbud.com/read.php?g=JCB8D9LA&tocp=59}}</ref>他接着断言:“相信人的能动性而非技术决定论的理由在于,这样你才能拥有一个人们自食其力、创造自己生活的经济体。如果你把一个社会建立在不强调个人能动性的基础上,在实际运作上就等同于否认人们的影响力、尊严和自决权……接受[奇点这一想法]将是对糟糕数据和糟糕政治的颂扬。”<ref name="lanier" />
      
[[Economics|Economist]] [[Robert J. Gordon]], in ''The Rise and Fall of American Growth:  The U.S. Standard of Living Since the Civil War'' (2016), points out that measured economic growth has slowed around 1970 and slowed even further since the [[financial crisis of 2007–2008]], and argues that the economic data show no trace of a coming Singularity as imagined by mathematician [[I.J. Good]].<ref>[[William D. Nordhaus]], "Why Growth Will Fall" (a review of [[Robert J. Gordon]], ''The Rise and Fall of American Growth:  The U.S. Standard of Living Since the Civil War'', Princeton University Press, 2016, {{ISBN|978-0691147727}}, 762 pp., $39.95), ''[[The New York Review of Books]]'', vol. LXIII, no. 13 (August 18, 2016), p. 68.</ref>
 
经济学家 Robert J. Gordon 在《美国经济增长的兴衰:内战以来的美国生活水平 The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War》(2016)中指出,按统计口径衡量的经济增长在1970年左右放缓,自2007-2008年金融危机以来进一步放缓;他认为,经济数据中没有任何迹象表明数学家 I. J. Good 所设想的奇点即将到来。<ref>[[William D. Nordhaus]], "Why Growth Will Fall" (a review of [[Robert J. Gordon]], ''The Rise and Fall of American Growth:  The U.S. Standard of Living Since the Civil War'', Princeton University Press, 2016, {{ISBN|978-0691147727}}, 762 pp., $39.95), ''[[The New York Review of Books]]'', vol. LXIII, no. 13 (August 18, 2016), p. 68.</ref>
    
In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a [[Log-log plot|log-log]] chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist [[PZ Myers]] points out that many of the early evolutionary "events" were picked arbitrarily.<ref name="PZMyers"/> Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on [[:File:ParadigmShiftsFrr15Events.svg|a log-log chart]]. ''[[The Economist]]'' mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.<ref name="moreblades"/>
 
除了对奇点概念的一般性批评外,一些批评者还对库兹韦尔的标志性图表提出了质疑。一种批评是,这种对数-对数图表本质上就偏向于得出直线的结果。其他人则指出库兹韦尔在数据点的选取上存在选择偏差。例如,生物学家 P. Z. Myers 指出,许多早期的进化“事件”都是被随意挑选的。<ref name="PZMyers"/>库兹韦尔对此进行了反驳:他绘制了来自15个中立来源的进化事件,并表明它们在对数-对数图上符合一条直线。《经济学人》用一张图表嘲讽了这个概念:剃须刀上的刀片数量多年来已从一片增加到多达五片,按此外推,它将越来越快地增长到无穷大。<ref name="moreblades"/>
    
==Potential impacts潜在影响==
 
The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.<ref name="positive-and-negative">{{Citation|last=Yudkowsky |first=Eliezer |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks |editor-last=Bostrom |editor-first=Nick |editor2-last=Cirkovic |editor2-first=Milan |publisher=Oxford University Press |year=2008 |url=http://singinst.org/AIRisk.pdf |bibcode=2008gcr..book..303Y |isbn=978-0-19-857050-9 |page=303 |url-status=dead |archiveurl=https://web.archive.org/web/20080807132337/http://www.singinst.org/AIRisk.pdf |archivedate=2008-08-07 }}</ref><ref name="theuncertainfuture"/> It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an [[Existential risk|existential threat]].<ref name="catastrophic"/><ref name="nickbostrom"/> Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values, including the [[Future of Humanity Institute]], the [[Machine Intelligence Research Institute]],<ref name="positive-and-negative"/> the [[Center for Human-Compatible Artificial Intelligence]], and the [[Future of Life Institute]].
 
“技术奇点”一词反映了这样一种想法:这种变化可能突然发生,而且很难预测由此产生的新世界将如何运作。<ref name="positive-and-negative">{{Citation|last=Yudkowsky |first=Eliezer |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks |editor-last=Bostrom |editor-first=Nick |editor2-last=Cirkovic |editor2-first=Milan |publisher=Oxford University Press |year=2008 |url=http://singinst.org/AIRisk.pdf |bibcode=2008gcr..book..303Y |isbn=978-0-19-857050-9 |page=303 |url-status=dead |archiveurl=https://web.archive.org/web/20080807132337/http://www.singinst.org/AIRisk.pdf |archivedate=2008-08-07 }}</ref><ref name="theuncertainfuture"/> 目前尚不清楚导致奇点的智能爆炸是有益还是有害,甚至是否构成一种存在威胁。<ref name="catastrophic"/><ref name="nickbostrom"/> 由于人工智能是奇点风险的一个主要因素,许多组织致力于研究一种使人工智能的目标系统与人类价值观保持一致的技术理论,这些组织包括人类未来研究所 Future of Humanity Institute、机器智能研究所 The Machine Intelligence Research Institute<ref name="positive-and-negative"/>、人类兼容人工智能中心 The Center for Human-Compatible Artificial Intelligence 和未来生命研究所 The Future of Life Institute。
    
Physicist [[Stephen Hawking]] said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."<ref name=hawking_2014/> Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."<ref name=hawking_2014/> Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity:<ref name=hawking_2014>{{cite web |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'  |work=[[The Independent]] |author=Stephen Hawking |date=1 May 2014 |accessdate=May 5, 2014|author-link=Stephen Hawking }}</ref>
 
物理学家史蒂芬·霍金在2014年表示,“成功创造人工智能将是人类历史上最重大的事件。不幸的是,它也可能是最后一件,除非我们学会如何规避风险。”<ref name=hawking_2014/> 霍金认为,在未来几十年里,人工智能可能带来“无法估量的利益和风险”,例如“技术在智取金融市场、在发明上超过人类研究人员、在操纵上胜过人类领袖,以及开发出我们甚至无法理解的武器”。<ref name=hawking_2014/> 霍金建议,人们应该更认真地对待人工智能,并做更多的工作来为奇点做准备:<ref name=hawking_2014>{{cite web |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'  |work=[[The Independent]] |author=Stephen Hawking |date=1 May 2014 |accessdate=May 5, 2014|author-link=Stephen Hawking }}</ref>
      
{{quote|So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.}}
 
所以,面对可能的收益和风险难以估量的未来,专家们肯定会尽一切可能确保最好的结果,对吗?错了。如果一个高级的外星文明给我们发了一条信息说,“我们几十年后就会到达”,我们会不会只回答,“好吧,你到了这里就打电话给我们——我们会开着灯的”?可能不会——但这或多或少就是人工智能正在发生的事情。
{{Harvtxt|Berglas|2008}} claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators.<ref name="nickbostrom8">Nick Bostrom, [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"], in ''Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence'', Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17</ref><ref name="singinst">[[Eliezer Yudkowsky]]: [http://singinst.org/upload/artificial-intelligence-risk.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk] {{webarchive|url=https://web.archive.org/web/20120611190606/http://singinst.org/upload/artificial-intelligence-risk.pdf |date=2012-06-11 }}. Draft for a publication in ''Global Catastrophic Risk'' from August 31, 2006, retrieved July 18, 2011 (PDF file)</ref><ref name="singinst9">[http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/ The Stamp Collecting Device, Nick Hay]</ref> [[Anders Sandberg]] has also elaborated on this scenario, addressing various common counter-arguments.<ref name="aleph">[http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html 'Why we should fear the Paperclipper'], 2011-02-14 entry of Sandberg's blog 'Andart'</ref> AI researcher [[Hugo de Garis]] suggests that artificial intelligences may simply eliminate the human race [[instrumental convergence|for access to scarce resources]],<ref name="selfawaresystems.com" /><ref name="selfawaresystems10">[http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.]</ref> and humans would be powerless to stop them.<ref name="forbes">de Garis, Hugo. [https://www.forbes.com/2009/06/18/cosmist-terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html "The Coming Artilect War"], Forbes.com, 22 June 2009.</ref> Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.<ref name="nickbostrom7" />
Berglas(2008)声称,没有直接的进化动机促使人工智能对人类友好。进化并不具有产生人类所重视的结果的内在倾向,也没有理由期望一个任意的优化过程会促成人类期望的结果,而不是在无意中导致人工智能以违背其创造者初衷的方式行事。<ref name="nickbostrom8">Nick Bostrom, [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"], in ''Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence'', Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17</ref><ref name="singinst">[[Eliezer Yudkowsky]]: [http://singinst.org/upload/artificial-intelligence-risk.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk] {{webarchive|url=https://web.archive.org/web/20120611190606/http://singinst.org/upload/artificial-intelligence-risk.pdf |date=2012-06-11 }}. Draft for a publication in ''Global Catastrophic Risk'' from August 31, 2006, retrieved July 18, 2011 (PDF file)</ref><ref name="singinst9">[http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/ The Stamp Collecting Device, Nick Hay]</ref> 安德斯·桑德伯格 Anders Sandberg 也对这一情景进行了详细阐述,讨论了各种常见的反驳意见。<ref name="aleph">[http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html 'Why we should fear the Paperclipper'], 2011-02-14 entry of Sandberg's blog 'Andart'</ref>人工智能研究员 Hugo de Garis 认为,人工智能可能会为了获取稀缺资源而直接消灭人类,<ref name="selfawaresystems.com" /><ref name="selfawaresystems10">[http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.]</ref>而人类将无力阻止它们。<ref name="forbes">de Garis, Hugo. [https://www.forbes.com/2009/06/18/cosmist-terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html "The Coming Artilect War"], Forbes.com, 22 June 2009.</ref>或者,在进化压力下为促进自身生存而发展起来的人工智能可能会在竞争中胜过人类。<ref name="nickbostrom7" />
      
Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:
 
A 2016 article in ''[[Trends in Ecology & Evolution]]'' argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust [[artificial intelligence]] with our lives through [[Anti-lock braking system|antilock braking in cars]] and [[autopilot]]s in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction".
 
2016年发表在《生态学与进化趋势 Trends in Ecology & Evolution》上的一篇文章认为,“人类已经接受了生物与技术的融合。我们醒着的大部分时间都在通过数字媒介渠道进行交流……我们把自己的生命托付给汽车上的防抱死制动系统 antilock braking 和飞机上的自动驾驶仪 autopilot……在美国,三分之一的婚姻始于网络,数字算法也在人类的配对和繁衍中发挥着作用”。
       
The article further argues that from the perspective of the [[evolution]], several previous [[The Major Transitions in Evolution|Major Transitions in Evolution]] have transformed life through innovations in information storage and replication ([[RNA]], [[DNA]], [[multicellularity]], and [[culture]] and [[language]]). In the current stage of life's evolution, the carbon-based biosphere has generated a [[cognitive system]] (humans) capable of creating technology that will result in a comparable [[The Major Transitions in Evolution|evolutionary transition]].
 
文章进一步指出,从[[进化]]的角度来看,以往的几次重大进化转变都是通过信息存储与复制方式(RNA、DNA、多细胞性,以及文化和语言)的创新来改变生命的。在生命进化的当前阶段,以碳为基础的生物圈已经产生了一个能够创造技术的认知系统(人类),而这种技术将带来一次与之相当的进化转变。
    
The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 [[zettabyte]]s in 2014 (5{{e|21}} bytes).{{Citation needed|date=April 2019}}
 
人类创造的数字信息已经达到了与生物圈中生物信息相似的规模。自20世纪80年代以来,存储的数字信息量大约每2.5年翻一番,2014年达到约5泽字节(5e21字节)。{{Citation needed|date=April 2019}}
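Taken purely as arithmetic (an illustration of the stated rate, not a figure from the cited sources), doubling every 2.5 years corresponds to a growth factor of

:<math>2^{10/2.5} = 2^{4} = 16</math>

per decade, or roughly a <math>2^{12} \approx 4000</math>-fold increase over three decades of sustained doubling.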
In February 2009, under the auspices of the [[Association for the Advancement of Artificial Intelligence]] (AAAI), [[Eric Horvitz]] chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire [[autonomy]], and to what degree they could use such abilities to pose threats or hazards.<ref name="nytimes july09" />
 
2009年2月,在人工智能促进协会 Association for the Advancement of Artificial Intelligence(AAAI)的主持下,Eric Horvitz 在加利福尼亚州 Pacific Grove 的 Asilomar 主持了一次由顶尖计算机科学家、人工智能研究人员和机器人学家参加的会议。会议的目的是讨论这样一种假设可能性的潜在影响:机器人能够自给自足并自行做出决定。他们讨论了计算机和机器人可能在多大程度上获得自主性 autonomy,以及在多大程度上可能利用这些能力对人类构成威胁或危险。<ref name="nytimes july09" />

有些机器已被编程为具备各种形式的半自主能力 semi-autonomy,包括自行寻找电源和选择武器攻击目标等。此外,有些计算机病毒能够躲避清除,按照与会科学家的说法,可以说已经达到了机器智能的“蟑螂”阶段。与会者指出,科幻小说中描述的那种自我意识大概不太可能出现,但还存在其他潜在的危险和陷阱。<ref name="nytimes july09">[https://www.nytimes.com/2009/07/26/science/26robot.html?_r=1&ref=todayspaper Scientists Worry Machines May Outsmart Man] By JOHN MARKOFF, NY Times, July 26, 2009.</ref>
    
Frank S. Robinson predicts that once humans achieve a machine with the intelligence of a human, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability.<ref name=":0">{{cite magazine |last=Robinson |first=Frank S. |title=The Human Future: Upgrade or Replacement? |magazine=[[The Humanist]] |date=27 June 2013 |url=https://thehumanist.com/magazine/july-august-2013/features/the-human-future-upgrade-or-replacement}}</ref> Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion. One example of this is solar energy, where the Earth receives vastly more solar energy than humanity captures, so capturing more of that solar energy would hold vast promise for civilizational growth.
 
In a hard takeoff scenario, an AGI rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals. In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.<ref>Bugaj, Stephan Vladimir, and Ben Goertzel. "Five ethical imperatives and their implications for human-AGI interaction." Dynamical Psychology (2007).</ref><ref>Sotala, Kaj, and Roman V. Yampolskiy. "Responses to catastrophic AGI risk: a survey." Physica Scripta 90.1 (2014): 018001.</ref>
 
在硬起飞的情况下,通用人工智能 AGI 迅速自我完善并“掌控”世界(也许在几个小时内)。速度太快,以致于无法进行重大的人为纠错或对 AGI 的目标进行逐步调整。在软起飞的情况下,虽然 AGI 仍然比人类强大得多,但却以一种类似人类的速度进步(也许是几十年的数量级),持续的人类互动和修正可以有效地引导AGI 的发展。<ref>Bugaj, Stephan Vladimir, and Ben Goertzel. "Five ethical imperatives and their implications for human-AGI interaction." Dynamical Psychology (2007).</ref><ref>Sotala, Kaj, and Roman V. Yampolskiy. "Responses to catastrophic AGI risk: a survey." Physica Scripta 90.1 (2014): 018001.</ref>
 
        第449行: 第434行:       −
[[Ramez Naam]] argues against a hard takeoff. He has pointed out that we already see recursive self-improvement by superintelligences, such as corporations. [[Intel]], for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of [[Moore's law]].<ref name=Naam2014Further>{{cite web|last=Naam|first=Ramez|title=The Singularity Is Further Than It Appears|url=http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html|accessdate=16 May 2014|year=2014}}</ref> Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1."<ref name=Naam2014Ascend>{{cite web|last=Naam|first=Ramez|title=Why AIs Won't Ascend in the Blink of an Eye - Some Math|url=http://www.antipope.org/charlie/blog-static/2014/02/why-ais-wont-ascend-in-blink-of-an-eye.html|accessdate=16 May 2014|year=2014}}</ref>
Ramez Naam 反对硬起飞。他指出,我们已经看到企业等超级智能体在进行递归式的自我完善。例如,Intel 拥有“数万人的集体脑力,可能还有数百万个CPU核心……来设计更好的CPU!”然而,这并没有导致硬起飞;相反,它以摩尔定律的形式带来了软起飞。<ref name=Naam2014Further>{{cite web|last=Naam|first=Ramez|title=The Singularity Is Further Than It Appears|url=http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html|accessdate=16 May 2014|year=2014}}</ref> Naam 进一步指出,更高智能的计算复杂度可能远超线性,因此“创造一个智能水平为2的头脑,其难度很可能不止是创造智能水平为1的头脑的两倍”。<ref name=Naam2014Ascend>{{cite web|last=Naam|first=Ramez|title=Why AIs Won't Ascend in the Blink of an Eye - Some Math|url=http://www.antipope.org/charlie/blog-static/2014/02/why-ais-wont-ascend-in-blink-of-an-eye.html|accessdate=16 May 2014|year=2014}}</ref>
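A hypothetical illustration (not Naam's own formalism) of why superlinear complexity matters: suppose the design effort needed for a mind of intelligence level <math>n</math> scales as

:<math>\text{effort}(n) \propto n^{\alpha}, \qquad \alpha > 1 .</math>

Then <math>\text{effort}(2)/\text{effort}(1) = 2^{\alpha} > 2</math>, matching the quoted remark, and an AI that merely doubles its own speed still faces a more-than-doubled workload for each comparable step up, so successive improvements need not arrive faster, which is consistent with a soft rather than a hard takeoff.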
 
[[J. Storrs Hall]] believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the ''starting point'' of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.<ref name=Hall2008>{{cite journal|last=Hall|first=J. Storrs|title=Engineering Utopia|journal=Artificial General Intelligence, 2008: Proceedings of the First AGI Conference|date=2008|pages=460–467|url=http://www.agiri.org/takeoff_hall.pdf|accessdate=16 May 2014}}</ref>
 
J.Storrs Hall认为,“许多常见的一夜之间出现的硬起飞场景都是循环论证——它们似乎在自我提升过程的起点上假设了超人类的能力”,以便人工智能能够实现起飞所需的戏剧性的、领域通用的改进。Hall认为,一个初出茅庐的人工智能与其靠自己不断地自我改进硬件、软件和基础设施,不如专注于一个它最有效的领域,然后在市场上购买剩余的组件,因为市场上产品的质量不断提高,人工智能将很难跟上世界其他地方使用的尖端技术。<ref name=Hall2008>{{cite journal|last=Hall|first=J. Storrs|title=Engineering Utopia|journal=Artificial General Intelligence, 2008: Proceedings of the First AGI Conference|date=2008|pages=460–467|url=http://www.agiri.org/takeoff_hall.pdf|accessdate=16 May 2014}}</ref>
 
         
[[Ben Goertzel]] agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a hard five minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. Goertzel refers to this scenario as a "semihard takeoff".<ref name="Goertzel2014">{{cite news|last1=Goertzel|first1=Ben|title=Superintelligence — Semi-hard Takeoff Scenarios|url=http://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/|accessdate=25 October 2014|agency=h+ Magazine|date=26 Sep 2014}}</ref>
 
Ben Goertzel 同意 Hall 的看法,即一个新的人类水平的人工智能最好利用其智能来积累财富。人工智能的才能可能会促使公司和政府将其软件推广到整个社会。Goertzel 对五分钟内完成的硬起飞持怀疑态度,但他推测,在大约五年的时间里从人类水平起飞到超人水平是合理的。Goertzel 将这种情况称为“半硬起飞 semihard takeoff”。<ref name="Goertzel2014">{{cite news|last1=Goertzel|first1=Ben|title=Superintelligence — Semi-hard Takeoff Scenarios|url=http://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/|accessdate=25 October 2014|agency=h+ Magazine|date=26 Sep 2014}}</ref>
 
      
[[Max More]] disagrees, arguing that if there were only a few superfast human-level AIs, that they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."<ref name=More>{{cite web|last1=More|first1=Max|title=Singularity Meets Economy|url=http://hanson.gmu.edu/vc.html#more|accessdate=10 November 2014}}</ref>
 
Max More 不同意这一观点。他认为,如果只有少数超高速的人类水平人工智能,它们不会从根本上改变世界,因为它们仍将依赖他人来完成任务,并且仍然会受到人类认知方面的限制。即使所有的超高速人工智能都致力于智能增强,也不清楚为什么它们能以不连续的方式,比现有的人类认知科学家更好地产生超人类智能,尽管进展速度会加快。More 进一步指出,超级智能不会在一夜之间改变世界:超级智能需要与现有的、缓慢的人类系统打交道,才能对世界产生实际的物理影响。“对合作、对组织,以及对把想法转化为实际变革的需要,将确保所有旧规则不会在一夜之间甚至几年之内被抛弃。”<ref name=More>{{cite web|last1=More|first1=Max|title=Singularity Meets Economy|url=http://hanson.gmu.edu/vc.html#more|accessdate=10 November 2014}}</ref>
 
      
== Immortality 永生==
 
In his 2005 book, ''[[The Singularity is Near]]'', [[Ray Kurzweil|Kurzweil]] suggests that medical advances would allow people to protect their bodies from the effects of aging, making the [[Life extension|life expectancy limitless]]. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.<ref>''The Singularity Is Near'', p.&nbsp;215.</ref> Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests [[somatic gene therapy]]; after synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.<ref>''The Singularity is Near'', p.&nbsp;216.</ref>
 
在2005年出版的《奇点临近 The Singularity is Near》一书中,库兹韦尔指出,医学的进步将使人们能够保护自己的身体免受衰老的影响,从而使预期寿命不再有上限。库兹韦尔认为,医学上的技术进步将使我们能够不断修复和更换身体中有缺陷的部件,把寿命延长到一个无法确定的年龄。库兹韦尔还通过讨论当前生物工程的进展进一步支持了他的论点。他提出了体细胞基因疗法 somatic gene therapy:在合成出携带特定遗传信息的病毒之后,下一步是把这项技术应用于基因治疗,用合成的基因取代人类的DNA。
       
[[K. Eric Drexler]], one of the founders of [[nanotechnology]], postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical [[biological machine]]s, in his 1986 book ''[[Engines of Creation]]''.
 
纳米技术 nanotechnology 的创始人之一 [[K. Eric Drexler]] 在他1986年的著作《创造的引擎 Engines of Creation》中设想了细胞修复装置,其中包括在细胞内运作、并利用尚属假想的[[生物机器]]的装置。
      第489行: 第466行:       −
据 [[Richard Feynman]] 说,是他以前的研究生和合作者 [[Albert Hibbs]] 最初(大约在1959年)向他提出了将费曼的理论微型机械 Feynman's theoretical micromachines 用于“医学”的想法。Hibbs 认为,某些修理机器的尺寸有朝一日可能缩小到在理论上可以(如费曼所说)“[[Molecular machine#Biological|吞下医生 swallow the doctor]]”的程度。这一想法后来被写进了费曼1959年的文章《底部还有大量空间 There's Plenty of Room at the Bottom》。
In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher [[Ray Solomonoff]] articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs take four years to double their own speed, then two years, then one year and so on, their capabilities increase infinitely in finite time.<ref name=chalmers /><ref name="std"/>
 
1985年,人工智能研究者 [[Ray Solomonoff]] 在《人工智能的时间尺度 The Time Scale of Artificial Intelligence》一文中以数学方式阐述了他所说的“无限点”这一相关概念:如果一个由人类水平、能够自我改进的人工智能组成的研究群体需要四年时间将自身速度提高一倍,接着是两年,然后是一年,依此类推,那么它们的能力将在有限的时间内无限增长。<ref name=chalmers /><ref name="std"/>
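The "finite time" claim follows from a short geometric-series sum (shown here only to make Solomonoff's stated example explicit, using the four-year initial doubling time he assumes): the successive doubling intervals add up to

:<math>4 + 2 + 1 + \tfrac{1}{2} + \cdots = \sum_{k=0}^{\infty} \frac{4}{2^{k}} = 8 ,</math>

so all of the infinitely many speed doublings would be completed within eight years of the starting point.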
    
Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era",<ref name="vinge1993" /> spread widely on the internet and helped to popularize the idea.<ref name="google5"/> This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.<ref name="vinge1993" />
 
Vinge 1993年的文章《即将到来的技术奇点:如何在后人类时代生存 The Coming Technological Singularity: How to Survive in the Post-Human Era》<ref name="vinge1993" />在互联网上广为传播,推广了这一理念。<ref name="google5"/>文中写道:“在三十年内,我们将拥有创造超人智能的技术手段。此后不久,人类的时代将终结。”Vinge 认为,科幻作家无法写出真实可信的、智力超越人类的后奇点角色,因为这种智力的思想将超出人类所能表达的范围。<ref name="vinge1993" />
      第543行: 第520行:  
In 2000, [[Bill Joy]], a prominent technologist and a co-founder of [[Sun Microsystems]], voiced concern over the potential dangers of the singularity.<ref name="JoyFuture"/>
 
2000年,著名的技术专家和 Sun Microsystems 的联合创始人 Bill Joy,表达了对奇点潜在危险的担忧。<ref name="JoyFuture"/>
 
         
In 2005, Kurzweil published ''[[The Singularity is Near]]''. Kurzweil's publicity campaign included an appearance on ''[[The Daily Show with Jon Stewart]]''.<ref name="episode"/>
 
2005年,库兹韦尔出版了《奇点临近 The Singularity is Near》。库兹韦尔的宣传活动包括参加《乔恩·斯图尔特每日秀 The Daily Show with Jon Stewart》节目。<ref name="episode"/>
 
      
In 2007, [[Eliezer Yudkowsky]] suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting.<ref name="yudkowsky.net"/><ref>Sandberg, Anders. "An overview of models of technological singularity." Roadmaps to AGI and the Future of AGI Workshop, Lugano, Switzerland, March. Vol. 8. 2010.</ref> For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.<ref name="yudkowsky.net"/>
 
In 2009, Kurzweil and [[X-Prize]] founder [[Peter Diamandis]] announced the establishment of [[Singularity University]], a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."<ref name="singularityu"/> Funded by [[Google]], [[Autodesk]], [[ePlanet Ventures]], and a group of [[High tech|technology industry]] leaders, Singularity University is based at [[NASA]]'s [[Ames Research Center]] in [[Mountain View, California|Mountain View]], [[California]]. The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.
 
In 2007, the Joint Economic Committee of the [[United States Congress]] released a report about the future of nanotechnology, predicting significant technological and political changes in the mid-term future, including a possible technological singularity.<ref>{{cite book|url=https://books.google.com/books?id=vyp1AwAAQBAJ&pg=PA375|title=Encyclopedia of Nanoscience and Society|first=David H.|last=Guston|date=14 July 2010|publisher=SAGE Publications|isbn=978-1-4522-6617-6}}</ref><ref>{{cite web | url=http://www.thenewatlantis.com/docLib/20120213_TheFutureisComingSoonerThanYouThink.pdf | title=Nanotechnology: The Future is Coming Sooner Than You Think | publisher=Joint Economic Committee | date=March 2007}}</ref><ref>{{cite web|url=http://crnano.typepad.com/crnblog/2007/03/congress_and_th.html|title=Congress and the Singularity}}</ref>
 
Former President of the United States [[Barack Obama]] spoke about the singularity in his 2016 interview with ''[[Wired (magazine)|Wired]]'':<ref>{{cite journal|url=https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/|title=Barack Obama Talks AI, Robo Cars, and the Future of the World|first=Scott|last=Dadich|journal=Wired|date=12 October 2016}}</ref>
 