The term "technological singularity" reflects the idea that such a change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.<ref name="positive-and-negative">{{Citation|last=Yudkowsky |first=Eliezer |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks |editor-last=Bostrom |editor-first=Nick |editor2-last=Cirkovic |editor2-first=Milan |publisher=Oxford University Press |year=2008 |url=http://singinst.org/AIRisk.pdf |bibcode=2008gcr..book..303Y |isbn=978-0-19-857050-9 |page=303 |url-status=dead |archiveurl=https://web.archive.org/web/20080807132337/http://www.singinst.org/AIRisk.pdf |archivedate=2008-08-07 }}</ref><ref name="theuncertainfuture"/> It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an [[existential risk|existential threat]].<ref name="catastrophic"/><ref name="nickbostrom"/> Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values, including the [[Future of Humanity Institute]], the [[Machine Intelligence Research Institute]],<ref name="positive-and-negative"/> the [[Center for Human-Compatible Artificial Intelligence]], and the [[Future of Life Institute]].
Physicist [[Stephen Hawking]] said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."<ref name=hawking_2014/> Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."<ref name=hawking_2014/> Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity:<ref name=hawking_2014>{{cite web |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'  |work=[[The Independent]] |author=Stephen Hawking |date=1 May 2014 |accessdate=May 5, 2014|author-link=Stephen Hawking }}</ref>
{{quote|So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.}}
    
{{Harvtxt|Berglas|2008}} claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is no reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators.<ref name="nickbostrom8">Nick Bostrom, [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"], in ''Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence'', Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17</ref><ref name="singinst">[[Eliezer Yudkowsky]]: [http://singinst.org/upload/artificial-intelligence-risk.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk] {{webarchive|url=https://web.archive.org/web/20120611190606/http://singinst.org/upload/artificial-intelligence-risk.pdf |date=2012-06-11 }} Draft of August 31, 2006 for a publication in ''Global Catastrophic Risks'', retrieved July 18, 2011 (PDF file)</ref><ref name="singinst9">[http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/ The Stamp Collecting Device, Nick Hay]</ref> [[Anders Sandberg]] has also elaborated on this scenario, addressing various common counter-arguments.<ref name="aleph">[http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html 'Why we should fear the Paperclipper'], 2011-02-14 entry of Sandberg's blog 'Andart'</ref> AI researcher [[Hugo de Garis]] suggests that artificial intelligences may simply eliminate the human race [[instrumental convergence|for access to scarce resources]],<ref name="selfawaresystems.com" /><ref name="selfawaresystems10">[http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.]</ref> and humans would be powerless to stop them.<ref name="forbes">de Garis, Hugo. [https://www.forbes.com/2009/06/18/cosmist-terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html "The Coming Artilect War"], Forbes.com, 22 June 2009.</ref> Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.<ref name="nickbostrom7" />
{{blockquote|[Computers] have, literally ..., no [[intelligence]], no [[motivation]], no [[autonomy]], and no agency.  We design them to behave as if they had certain sorts of [[psychology]], but there is no psychological reality to the corresponding processes or behavior. ...  [T]he machinery has no beliefs, desires, [or] motivations.<ref>[[John R. Searle]], “What Your Computer Can’t Know”, ''[[The New York Review of Books]]'', 9 October 2014, p. 54.</ref>}}
 
{{Harvtxt|Bostrom|2002}} discusses human extinction scenarios, and lists superintelligence as a possible cause:
      