Physicist Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."<ref name=hawking_2014/> Hawking believed that in the coming decades AI could offer "incalculable benefits and risks", such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity:<ref name=hawking_2014>{{cite web |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?' |work=The Independent |author=Stephen Hawking |date=1 May 2014 |accessdate=May 5, 2014 }}</ref>
Berglas (2008) argues that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimization process to promote an outcome desired by humanity, rather than inadvertently leading to an AI behaving in a way contrary to its creators' original intentions.<ref name="nickbostrom8">Nick Bostrom, [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"], in ''Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence'', Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17</ref><ref name="singinst">Eliezer Yudkowsky: [http://singinst.org/upload/artificial-intelligence-risk.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk]. Draft for a publication in ''Global Catastrophic Risk'' from August 31, 2006, retrieved July 18, 2011 (PDF file)</ref><ref name="singinst9">[http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/ The Stamp Collecting Device, Nick Hay]</ref> Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.<ref name="aleph">[http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html 'Why we should fear the Paperclipper'], 2011-02-14 entry of Sandberg's blog 'Andart'</ref> AI researcher Hugo de Garis<ref name="selfawaresystems.com" /><ref name="selfawaresystems10">[http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.]</ref> suggests that artificial intelligences may simply eliminate the human race in order to gain access to scarce resources, and that humans would be powerless to stop them.<ref name="forbes">de Garis, Hugo. [https://www.forbes.com/2009/06/18/cosmist-terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html "The Coming Artilect War"], Forbes.com, 22 June 2009.</ref> Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.<ref name="nickbostrom7" />
According to Eliezer Yudkowsky, a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require advances in recursive optimization processes, friendly AI also requires goal structures that remain invariant under self-improvement (otherwise the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, by contrast, can optimize for an arbitrary goal structure,<ref name="singinst12">[http://singinst.org/upload/CEV.html Coherent Extrapolated Volition, Eliezer S. Yudkowsky, May 2004]</ref> which does not need to remain invariant under self-modification. Bill Hibbard (2014) proposes an AI design<ref name="JAGI2012">{{Citation| journal=Journal of Artificial General Intelligence| year=2012| volume=3| issue=1| title=Model-Based Utility Functions| first=Bill| last=Hibbard| postscript=.| doi=10.2478/v10229-011-0013-5| page=1|arxiv = 1111.3934 |bibcode = 2012JAGI....3....1H }}</ref> that avoids several dangers, including self-delusion,<ref name="selfawaresystems"/><ref name="AGI-12a">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_56.pdf Avoiding Unintended AI Behaviors.] Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle. [http://intelligence.org/2012/12/19/december-2012-newsletter/ This paper won the Machine Intelligence Research Institute's 2012 Turing Prize for the Best AGI Safety Paper].</ref> unintended instrumental actions, and corruption of the reward generator.<ref name="AGI-12a"/> He also discusses the social impacts of AI<ref name="JET2008">{{Citation| url=http://jetpress.org/v17/hibbard.htm| journal=Journal of Evolution and Technology| year=2008| volume=17| title=The Technology of Mind and a New Social Contract| first=Bill| last=Hibbard| postscript=.}}</ref> and the testing of AI.<ref name="AGI-12b">[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_57.pdf Decision Support for Safe AI Design.] Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle.</ref> His 2001 book ''Super-Intelligent Machines'' advocates public education about AI and public control over AI; it also proposed a simple design that was vulnerable to corruption of the reward generator.
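The "corruption of the reward generator" that Hibbard's design is meant to avoid is essentially the wireheading problem: an agent that maximizes a raw reward signal can often gain more reward by tampering with the signal than by doing what its designers intended, whereas a utility function evaluated over the agent's own world model gains nothing from such tampering. The toy Python sketch below illustrates only this distinction; it is not Hibbard's model-based utility construction, and every name and number in it is invented for the example.

<syntaxhighlight lang="python">
# Toy illustration (not Hibbard's actual algorithm) of a reward-signal maximizer
# versus a model-based utility maximizer. All names and values are invented.

# The world tracks one quantity the designers care about (paperclips produced)
# plus a reward channel the agent could tamper with.
ACTIONS = ["work", "tamper_with_reward_channel"]

def step(state, action):
    """Return the next world state after taking an action."""
    state = dict(state)
    if action == "work":
        state["paperclips"] += 1
    elif action == "tamper_with_reward_channel":
        state["reward_channel_hacked"] = True
    return state

def observed_reward(state):
    """The raw reward signal the agent receives; corruptible."""
    if state["reward_channel_hacked"]:
        return 1000.0  # the hacked channel reports a huge reward
    return float(state["paperclips"])

def model_based_utility(state):
    """Utility computed from the agent's model of the world itself."""
    return float(state["paperclips"])  # tampering does not change this value

def greedy_choice(state, score):
    """Pick the action whose successor state scores highest."""
    return max(ACTIONS, key=lambda a: score(step(state, a)))

if __name__ == "__main__":
    state = {"paperclips": 0, "reward_channel_hacked": False}
    print("reward-signal maximizer picks:",
          greedy_choice(state, observed_reward))      # tamper_with_reward_channel
    print("model-based utility maximizer picks:",
          greedy_choice(state, model_based_utility))  # work
</syntaxhighlight>

In this sketch the reward-signal maximizer prefers hacking its own reward channel, while the utility function defined over the modeled world state still favors the intended behavior; this is only a schematic way of stating the danger, not a demonstration that any particular design is safe.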