===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===
    
{{Main|Existential risk from artificial general intelligence}}
 
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}
 
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?
 
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.
 
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist Gordon Bell argues that the human race will already destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way." Baidu Vice President Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."
 
==See also==