The practice of abstraction, which people tend to redefine when working within a particular research context, lets researchers concentrate on just a few concepts. The most productive use of abstraction in AI research comes from planning and problem solving. Although the aim is to increase the speed of a computation, the role of abstraction has raised questions about the involvement of abstraction operators.
 
抽象的实践(人们在特定的研究语境下往往会对其重新定义)使研究人员得以只关注少数几个概念。抽象在人工智能研究中最有成效的应用来自规划和问题求解。虽然目标是提高计算速度,但抽象的作用也引出了关于抽象算子如何参与其中的问题。
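下面用一个极简的 Python 草图说明上段所述“用抽象提高计算速度”的思路(本示例为说明性补充,并非原文内容;场景与变量名均为假设):忽略问题的某个细节(这里是网格中的障碍物)即得到一个“抽象问题”,其精确解的代价可以充当具体搜索的启发式函数,从而加快规划。

<syntaxhighlight lang="python">
import heapq

# 假设的场景:在 5x5 网格中规划一条绕过障碍物的路径。
# “抽象算子”体现为忽略障碍物:抽象问题可以精确求解(曼哈顿距离),
# 其解代价用作 A* 搜索的可采纳启发式。

WALLS = {(1, 1), (1, 2), (1, 3), (2, 3)}   # 具体问题中的障碍物
WIDTH, HEIGHT = 5, 5

def abstract_cost(state, goal):
    """抽象问题(无障碍物)中的精确解代价:曼哈顿距离。"""
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def neighbors(state):
    x, y = state
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < WIDTH and 0 <= ny < HEIGHT and (nx, ny) not in WALLS:
            yield (nx, ny)

def astar(start, goal):
    """在具体问题上做 A* 搜索,用抽象问题的解代价引导搜索。"""
    frontier = [(abstract_cost(start, goal), 0, start, [start])]
    visited = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in neighbors(state):
            heapq.heappush(frontier, (g + 1 + abstract_cost(nxt, goal),
                                      g + 1, nxt, path + [nxt]))
    return None  # 无可行路径

print(astar((0, 0), (4, 4)))  # 输出从 (0, 0) 到 (4, 4) 的一条最短路径
</syntaxhighlight>

抽象问题解得越精确(即抽象丢弃的细节越少),具体搜索需要展开的节点通常就越少;这正是规划研究中引入抽象算子的动机之一。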
Many AI researchers have debated whether machines should be created with emotions. There are no emotions in typical models of AI, and some researchers say that programming emotions into machines would allow them to have a mind of their own. Emotion sums up the experiences of humans, because it allows them to remember those experiences. David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion." This concern about emotion has posed problems for AI researchers, and it connects to the concept of strong AI as its research progresses into the future.
 
许多人工智能研究人员一直在争论机器是否应该带有情感。典型的人工智能模型中没有情感,一些研究人员说,将情感编程到机器中可以让它们拥有自己的思想。情感总结了人类的经历,因为它让人们得以记住那些经历。大卫·格勒尼特(David Gelernter)写道:“除非计算机能够模拟人类情感的所有细微差别,否则它不会具有创造力。”这种对情感的关注给人工智能研究人员带来了难题;随着研究的推进,它也与强人工智能的概念联系在一起。
==Controversies and dangers 争议和风险==
As of March 2020, AGI remains speculative as no such system has been demonstrated yet. Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition". Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.
 
截至2020年3月,通用人工智能仍处于推测阶段,因为迄今尚未有这样的系统被展示出来。对于通用人工智能是否会到来以及何时到来,人们的看法各不相同。在一个极端,人工智能先驱赫伯特·西蒙(Herbert A. Simon)在1965年写道:“机器将在20年内有能力完成人类能做的任何工作。”然而,这个预言并没有实现。微软(Microsoft)联合创始人保罗·艾伦(Paul Allen)认为,这种智能在21世纪不太可能出现,因为它需要“不可预见且根本无法预测的突破”以及“在科学上对认知的深刻理解”。机器人专家阿兰·温菲尔德(Alan Winfield)在《卫报》(The Guardian)上撰文称,现代计算与人类水平人工智能之间的鸿沟,就像当前的太空飞行与实用的超光速飞行之间的鸿沟一样宽。
AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead. Further considerations of current AGI progress can be found in the sections Tests for confirming human-level AGI and IQ-tests AGI below.
 
人工智能专家对通用人工智能可行性的看法时起时落,并可能在2010年代有所回潮。2012年和2013年进行的四次民意调查显示,对于“何时有50%的把握认为通用人工智能将会到来”这一问题,专家们猜测的中位数介于2040年到2050年之间(取决于具体调查),平均值则是2081年。当被问及同样的问题、但把握提高到90%时,16.5%的专家回答“永远不会”。关于当前通用人工智能进展的进一步讨论,可参见下文“确认人类水平通用人工智能的测试”和“基于智商测试的通用人工智能”部分。
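上段同时给出“中位数为2040至2050年”与“平均值为2081年”,下面用一段简短的示意计算说明两者为何并不矛盾(数字为虚构,原调查的原始数据并未在文中给出):少数极晚的预测会把平均值拉到远高于中位数的位置。

<syntaxhighlight lang="python">
from statistics import mean, median

# 虚构的专家预测年份(仅作示意,并非原调查数据):
# 分布右偏时,少数非常晚的预测会显著抬高平均值,而中位数几乎不受影响。
forecasts = [2035, 2040, 2045, 2050, 2060, 2100, 2300]

print("中位数:", median(forecasts))       # 2050
print("平均值:", round(mean(forecasts)))  # 2090
</syntaxhighlight>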
===Potential threat to human existence{{anchor|Risk_of_human_extinction}} 对人类生存的潜在威胁===
    
{{Main|Existential risk from artificial general intelligence}}
 
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}
 
人工智能会构成生存风险、而且这一风险需要得到比目前多得多的关注,这一论点已经得到许多公众人物的支持;其中最著名的也许是埃隆·马斯克(Elon Musk)、比尔·盖茨(Bill Gates)和斯蒂芬·霍金(Stephen Hawking)。支持这一论点的最著名的人工智能研究者是斯图尔特·罗素(Stuart J. Russell)。该论点的支持者有时会对怀疑论者表示困惑:盖茨表示,他不“理解为什么有些人不担心”;霍金则在2014年的社论中批评了普遍的冷漠:“面对收益和风险都无法估量的可能未来,专家们肯定会竭尽所能确保最好的结果,对吧?错。如果一个更先进的外星文明给我们发来信息说‘我们几十年后到达’,我们会只回复‘好的,到了给我们打电话,我们会把灯留着’吗?大概不会,但人工智能领域正在发生的事情或多或少就是如此。”
Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?
 
许多关注生存风险的学者认为,最好的前进方向是开展(可能是大规模的)研究来解决困难的“控制问题”,以回答这样一个问题:程序员可以实现哪些类型的保障措施、算法或架构,以最大程度地提高其递归自我改进的人工智能在达到超级智能后继续以友好、而非破坏性的方式运行的可能性?
The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.
 
人工智能可能构成生存风险的论点也遭到许多人的强烈反对。怀疑论者有时指责这一论点是一种变相的宗教信仰,即用对超级智能可能性的非理性信仰取代了对全能上帝的非理性信仰;在极端情况下,杰伦·拉尼尔(Jaron Lanier)认为,认为当前机器具有任何意义上的智能这一整套观念都是“一种幻觉”,是富人的“惊天骗局”。
Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist Gordon Bell argues that the human race will already destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way." Baidu Vice President Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."
 
现有的许多批评认为,通用人工智能在短期内不太可能出现。计算机科学家戈登·贝尔(Gordon Bell)认为,人类在到达技术奇点之前就会自我毁灭。摩尔定律的最初提出者戈登·摩尔(Gordon Moore)宣称:“我是一个怀疑论者。我不相信(技术奇点)有可能发生,至少在很长一段时间内不会,我也不知道我为什么会有这种感觉。”百度副总裁吴恩达(Andrew Ng)表示,担心人工智能的生存风险“就像在我们还没踏上火星的时候就担心火星上人口过剩一样”。
==See also 请参阅==
    
{{div col|colwidth=30em}}
 