If a superhuman intelligence were to be invented, whether through the amplification of human intelligence or through artificial intelligence, it would bring to bear greater problem-solving and inventive skills than current humans are capable of. Such an AI is referred to as Seed AI because, if an AI's engineering capabilities matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware, or design an even more capable machine. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limit imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations such an AI would far surpass human cognitive abilities.

==Intelligence explosion==
    
Intelligence explosion is a possible outcome of humanity building [[artificial general intelligence]] (AGI). AGI would be capable of recursive self-improvement, leading to the rapid emergence of [[Superintelligence|artificial superintelligence]] (ASI), the limits of which are unknown, shortly after technological singularity is achieved.
Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or rewrites its own software to become even more intelligent; this (even more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.<ref name="stat"/>

==Other manifestations==

===Emergence of superintelligence===
    
{{Further|Superintelligence}}
 
Technology forecasters and researchers disagree about whether or when human intelligence is likely to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

===Non-AI singularity===
 
Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as [[molecular nanotechnology]],<ref name="hplusmagazine"/><ref name="yudkowsky.net"/><ref name="agi-conf"/> although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.<ref name="vinge1993" />
    

===Speed superintelligence===
 
A speed superintelligence describes an AI that can do everything that a human can do, where the only difference is that the machine runs faster.<ref>{{cite book |doi=10.1007/978-3-662-54033-6_2 |year=2017 |publisher=Springer Berlin Heidelberg |pages=11–23 |author=Kaj Sotala and Roman Yampolskiy |title=The Technological Singularity |chapter=Risks of the Journey to the Singularity |series=The Frontiers Collection |isbn=978-3-662-54031-2 }}</ref> For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds.<ref name="singinst.org"/> Such a difference in information processing speed could drive the singularity.<ref>{{cite book |doi=10.1002/9781118922590.ch16 |year=2016 |publisher=John Wiley & Sons, Inc |pages=171–224 |author=David J. Chalmers |title=Science Fiction and Philosophy |chapter=The Singularity |isbn=9781118922590 }}</ref>
    
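The "30 physical seconds" figure follows directly from dividing the length of a year by the speedup factor. A minimal sketch of the arithmetic (the function name is ours, for illustration only):

```python
# One subjective year for a mind running a million times faster than a
# human: divide the seconds in a year by the speedup factor.

def physical_seconds_per_subjective_year(speedup: float) -> float:
    seconds_per_year = 365.25 * 24 * 3600  # about 3.16e7 seconds
    return seconds_per_year / speedup

print(physical_seconds_per_subjective_year(1_000_000))  # 31.5576, i.e. about 30 seconds
```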

==Plausibility==
    
Many prominent technologists and academics dispute the plausibility of a technological singularity, including [[Paul Allen]], [[Jeff Hawkins]], [[John Henry Holland|John Holland]], [[Jaron Lanier]], and [[Gordon Moore]], whose [[Moore's law|law]] is often cited in support of the concept.<ref name="spectrum.ieee.org"/><ref name="ieee"/><ref name="Allen"/>
In 2017, an email survey of authors with publications at the 2015 NeurIPS and ICML machine learning conferences asked about the chance of an intelligence explosion. Of the respondents, 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely" and 26% said it was "quite unlikely".<ref>{{cite arxiv|last1=Grace|first1=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|title=When Will AI Exceed Human Performance? Evidence from AI Experts|eprint=1705.08807|date=24 May 2017|class=cs.AI}}</ref>

===Speed improvements===
    
Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. Simply put,<ref name="arstechnica">{{cite web|last=Siracusa |first=John |url=https://arstechnica.com/apple/reviews/2009/08/mac-os-x-10-6.ars/8 |title=Mac OS X 10.6 Snow Leopard: the Ars Technica review |publisher=Arstechnica.com |date=2009-08-31 |accessdate=2011-09-09}}</ref> [[Moore's Law]] suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months; or 9 external months, whereafter, four months, two months, and so on towards a speed singularity.<ref name="singularity6">Eliezer Yudkowsky, 1996 [http://www.yudkowsky.net/obsolete/singularity.html "Staring into the Singularity"]</ref> An upper limit on speed may eventually be reached, although it is unclear how high this would be.  Jeff Hawkins has stated that a self-improving computer system would inevitably run into upper limits on computing power: "in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity."<ref name="Hawkins">{{cite magazine |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |title=Tech Luminaries Address Singularity |date=1 June 2008 |magazine=[[IEEE Spectrum]]}}</ref>
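The arithmetic behind this "speed singularity" is a geometric series: each doubling takes a fixed 18 subjective months, but runs on hardware twice as fast as the one before, so the external time converges to a finite horizon. A sketch under those assumptions (function name ours):

```python
# External (wall-clock) months needed for n successive speed doublings,
# when the first takes 18 months and each later one runs twice as fast:
# 18 + 9 + 4.5 + ..., which can never exceed 36 months.

def external_months(n_doublings: int, first: float = 18.0) -> float:
    return sum(first / 2**k for k in range(n_doublings))

print(external_months(2))    # 27.0
print(external_months(10))   # 35.96...
print(external_months(100))  # effectively 36.0, the finite "wall" of a speed singularity
```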
第135行: 第135行:  
It is difficult to directly compare silicon-based hardware with neurons. But {{Harvtxt|Berglas|2008}} notes that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain.
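Berglas's "few orders of magnitude" can be made explicit: if current hardware matches a task occupying about 0.01% of the brain, the whole brain sits roughly 10,000-fold (four orders of magnitude) beyond it. A trivial check (variable names ours):

```python
import math

# If today's hardware handles a task using ~0.01% of the brain's volume,
# the full brain is 1 / 0.0001 = 10,000x beyond current hardware.
brain_fraction_for_speech = 0.0001
gap = 1 / brain_fraction_for_speech

print(gap, math.log10(gap))  # 10000x, i.e. 4 orders of magnitude
```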

====Exponential growth====
    
Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future
Kurzweil reserves the term "singularity" for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine". Kurzweil believes that the singularity will occur by approximately 2045, by which point computer-based intelligence will significantly exceed the sum total of human brainpower; he writes that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence."

====Accelerating change====
Frequently cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's ''Wired'' magazine article "Why the future doesn't need us".

===Algorithm improvements===
    
Some intelligence technologies, like "seed AI",<ref name="Yampolskiy, Roman V 2015"/><ref name="ReferenceA"/> may also have the potential to not just make themselves faster, but also more efficient, by modifying their [[source code]]. These improvements would make further improvements possible, which would make further improvements possible, and so on.
Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that an intelligence explosion is actually more likely in a software-limited singularity scenario than in a hardware-limited one, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained. An abundance of hardware that can be unleashed once the software figures out how to use it has been called "computing overhang".

===Criticisms===
    
Some critics, like philosopher [[Hubert Dreyfus]], assert that computers or machines cannot achieve [[human intelligence]], while others, like physicist [[Stephen Hawking]], hold that the definition of intelligence is irrelevant if the net result is the same.<ref name="dreyfus"/>
In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the data points Kurzweil chooses to use.<ref name="PZMyers"/> For example, biologist P. Z. Myers points out that many of the early evolutionary "events" were picked arbitrarily. Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources and showing that they fit a straight line. ''The Economist'' mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever faster to infinity.<ref name="moreblades"/>

==Potential impacts==
    
Dramatic changes in the rate of economic growth have occurred in the past because of some technological advancement. Based on population growth, the economy doubled every 250,000 years from the [[Paleolithic]] era until the [[Neolithic Revolution]]. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.<ref name="Hanson">{{Citation |url=http://www.spectrum.ieee.org/robotics/robotics-software/economics-of-the-singularity |title=Economics Of The Singularity |author=Robin Hanson |work=IEEE Spectrum Special Report: The Singularity }} & [http://hanson.gmu.edu/longgrow.pdf Long-Term Growth As A Sequence of Exponential Modes]</ref>
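The growth modes above can be lined up as doubling times, which also reproduces Hanson's quarterly figure (a sketch; treating "a similar revolution" as another roughly 60-fold jump is our paraphrase of his argument):

```python
# Doubling times of past growth modes, as quoted in the text.
doubling_time_years = {
    "forager era": 250_000,
    "farming era": 900,
    "industrial era": 15,
}

print(doubling_time_years["forager era"] / doubling_time_years["farming era"])    # ~278x speedup
print(doubling_time_years["farming era"] / doubling_time_years["industrial era"]) # 60.0x speedup

# Another ~60x jump would cut the doubling time from 15 years to:
print(15 / 60 * 365.25)  # 91.3125 days, i.e. roughly quarterly doubling
```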

===Uncertainty and risk===
    
{{Further|Existential risk from artificial general intelligence}}
 
According to [[Eliezer Yudkowsky]], a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimization process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly), as well as a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification. Bill Hibbard (2014) proposes an AI design that avoids several dangers, including self-delusion, unintended instrumental actions, and corruption of the reward generator. He also discusses the social impacts of AI and the testing of AI. His 2001 book Super-Intelligent Machines advocates public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator.

===Next step of sociobiological evolution===
    
{{Further|Sociocultural evolution}}
 
If digital storage continues to grow at its current compound annual growth rate of 30–38% per year, it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years.
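The 110-year claim can be turned around to show the growth factor it implies; since the paragraph gives rates and a horizon but no absolute DNA-information figure, only the implied ratio is computed here (function name ours):

```python
import math

# Growth factor implied by a constant CAGR sustained over a horizon in years.
def growth_factor(cagr: float, years: float) -> float:
    return (1 + cagr) ** years

for cagr in (0.30, 0.38):
    f = growth_factor(cagr, 110)
    print(f"{cagr:.0%}: x{f:.2e} ({math.log10(f):.1f} orders of magnitude)")
# 30% and 38% CAGRs over 110 years imply growth by roughly 12.5 and
# 15.4 orders of magnitude respectively.
```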

===Implications for human society===
    
{{further|Artificial intelligence in fiction}}
 
Frank S. Robinson predicts that once humans achieve a machine with the intelligence of a human, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability. Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion. One example of this is solar energy, where the Earth receives vastly more solar energy than humanity captures, so capturing more of that solar energy would hold vast promise for civilizational growth.

==Hard vs. soft takeoff==
    
[[File:Recursive self-improvement.svg|thumb|upright=1.6|In this sample recursive self-improvement scenario, humans modifying an AI's architecture would be able to double its performance every three years through, for example, 30 generations before exhausting all feasible improvements (left). If instead the AI is smart enough to modify its own architecture as well as human researchers can, its time required to complete a redesign halves with each generation, and it progresses all 30 feasible generations in six years (right).<ref name="yudkowsky-global-risk">[[Eliezer Yudkowsky]]. "Artificial intelligence as a positive and negative factor in global risk." Global catastrophic risks (2008).</ref>|link=Special:FilePath/Recursive_self-improvement.svg]]
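The two timelines in the caption are easy to reproduce: a fixed three-year redesign cycle totals 90 years over 30 generations, while a cycle that halves each generation is bounded by the sum of a geometric series (function names ours):

```python
# Left panel: humans redesign the AI on a fixed 3-year cycle.
def fixed_cycle_years(generations: int, per_generation: float = 3.0) -> float:
    return generations * per_generation

# Right panel: the AI redesigns itself, halving the cycle each generation,
# so total time is 3 + 1.5 + 0.75 + ..., which never exceeds 6 years.
def halving_cycle_years(generations: int, first: float = 3.0) -> float:
    return sum(first / 2**k for k in range(generations))

print(fixed_cycle_years(30))    # 90.0 years
print(halving_cycle_years(30))  # just under 6.0 years
```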
Max More disagrees, arguing that if there were only a few superfast human-level AIs, they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."<ref name=More>{{cite web|last1=More|first1=Max|title=Singularity Meets Economy|url=http://hanson.gmu.edu/vc.html#more|accessdate=10 November 2014}}</ref>

==Immortality==
    
In his 2005 book, ''[[The Singularity is Near]]'', [[Ray Kurzweil|Kurzweil]] suggests that medical advances would allow people to protect their bodies from the effects of aging, making the [[Life extension|life expectancy limitless]]. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.<ref>''The Singularity Is Near'', p.&nbsp;215.</ref> Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests [[somatic gene therapy]]; after synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.<ref>''The Singularity is Near'', p.&nbsp;216.</ref>

==History of the concept==
    
A paper by Mahendra Prasad, published in ''[[AI Magazine]]'', asserts that the 18th-century mathematician [[Marquis de Condorcet]] was the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity.<ref>{{Cite journal|last=Prasad|first=Mahendra|year=2019|title=Nicolas de Condorcet and the First Intelligence Explosion Hypothesis|journal=AI Magazine|volume=40|issue=1|pages=29–33|doi=10.1609/aimag.v40i1.2855}}</ref>
 

==In politics==