The '''technological singularity'''—also, simply, '''the singularity'''<ref>Cadwalladr, Carole (2014). "[https://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence Are the robots about to rise? Google's new director of engineering thinks so…]" ''The Guardian''. Guardian News and Media Limited.</ref>—is a [[hypothetical]] point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.<ref>{{cite web |title=Collection of sources defining "singularity" |url=http://www.singularitysymposium.com/definition-of-singularity.html |website=singularitysymposium.com |accessdate=17 April 2019}}</ref><ref name="Singularity hypotheses">{{cite book |author1=Eden, Amnon H. |author2=Moor, James H. |title=Singularity hypotheses: A Scientific and Philosophical Assessment |date=2012 |publisher=Springer |location=Dordrecht |isbn=9783642325601 |pages=1–2}}</ref> According to the most popular version of the singularity hypothesis, called [[Technological singularity#Intelligence explosion|intelligence explosion]], an upgradable [[intelligent agent]] will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful [[superintelligence]] that qualitatively far surpasses all [[human intelligence]].
 
Martin Ford's ''The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future''
 
[[Image:PPTMooresLawai.jpg|thumb|Ray Kurzweil writes that, due to [[paradigm shift]]s, a trend of exponential growth extends [[Moore's law]] from [[integrated circuits]] to earlier [[transistor]]s, [[vacuum tube]]s, [[relay]]s, and [[electromechanics|electromechanical]] computers. He predicts that the exponential growth will continue, and that in a few decades the computing power of all computers will exceed that of ("unenhanced") human brains, with superhuman [[artificial intelligence]] appearing around the same time.]]
[[File:Moore's Law over 120 Years.png|thumb|left|An updated version of Moore's law over 120 Years (based on [[Ray Kurzweil|Kurzweil's]] [[c:File:PPTMooresLawai.jpg|graph]]). The 7 most recent data points are all [[Nvidia GPUs|NVIDIA GPUs]].]]
{{Main|Accelerating change}}
 
[[Image:ParadigmShiftsFrr15Events.svg|thumb|According to Kurzweil, his [[logarithmic scale|logarithmic graph]] of 15 lists of [[paradigm shift]]s for key [[human history|historic]] events shows an [[exponential growth|exponential]] trend]]
Even if not malicious, there is no reason to think that AIs would actively promote human goals unless those goals could be programmed into them; if not, they might use the resources currently employed to support humankind to promote their own goals instead, causing human extinction.
 
[[Carl Shulman]] and [[Anders Sandberg]] suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.<ref name=ShulmanSandberg2010>{{cite journal|last=Shulman|first=Carl|author2=Anders Sandberg |title=Implications of a Software-Limited Singularity|journal=ECAP10: VIII European Conference on Computing and Philosophy|year=2010|url=http://intelligence.org/files/SoftwareLimited.pdf|accessdate=17 May 2014|editor1-first=Klaus|editor1-last=Mainzer}}</ref> An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang."<ref name=MuehlhauserSalamon2012>{{cite book|last=Muehlhauser|first=Luke|title=Singularity Hypotheses: A Scientific and Philosophical Assessment|year=2012|publisher=Springer|chapter-url=http://intelligence.org/files/IE-EI.pdf|author2=Anna Salamon |editor=Amnon Eden |editor2=Johnny Søraker |editor3=James H. Moor |editor4=Eric Steinhart|chapter=Intelligence Explosion: Evidence and Import}}</ref>
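The "computing overhang" idea lends itself to a back-of-the-envelope illustration. The toy sketch below is illustrative only: the growth rates and the capability threshold are assumptions chosen for exposition, not figures from Shulman and Sandberg. Hardware price-performance compounds steadily while software progress lags, so by the time human-level software exists, far more compute is available than a single human-equivalent mind requires.

<syntaxhighlight lang="python">
# Toy model of a software-limited singularity and the resulting
# "computing overhang". Illustrative only: the growth rates and the
# capability threshold below are assumptions, not figures from
# Shulman and Sandberg's paper.

HW_GROWTH = 1.6   # assumed ~60%/year gain in hardware price-performance
SW_GROWTH = 1.1   # assumed ~10%/year algorithmic gain (the bottleneck)
TARGET = 1000.0   # software quality needed for human-level AI (arbitrary units)

hardware = 1.0    # compute per dollar, normalized to year 0
software = 1.0    # software quality, normalized to year 0
years = 0
while software < TARGET:
    hardware *= HW_GROWTH
    software *= SW_GROWTH
    years += 1

print(f"human-level software reached after ~{years} years")
print(f"hardware price-performance improved {hardware:.2e}x over the same span")
# The gap between the two numbers is the overhang: the newly finished
# software can immediately run many fast copies on the accumulated hardware.
</syntaxhighlight>

Under these assumed rates, software crosses the threshold only after decades of hardware compounding, so the first human-level AI would arrive into a world of abundant, cheap compute.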
{{Further|Sociocultural evolution}}
 
[[File:Major Evolutionary Transitions digital.jpg|thumb|upright=1.6|Schematic Timeline of Information and Replicators in the Biosphere: Gillings et al.'s "[[The Major Transitions in Evolution|major evolutionary transitions]]" in information processing.<ref name="InfoBiosphere2016" />]]
[[File:Biological vs. digital information.jpg|thumb|Amount of digital information worldwide (5{{e|21}} bytes) versus human genome information worldwide (10<sup>19</sup> bytes) in 2014.<ref name="InfoBiosphere2016" />]]
While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.{{citation needed|date=April 2018}}
 
==Hard vs. soft takeoff==
 
[[File:Recursive self-improvement.svg|thumb|upright=1.6|In this sample recursive self-improvement scenario, humans modifying an AI's architecture would be able to double its performance every three years through, for example, 30 generations before exhausting all feasible improvements (left). If instead the AI is smart enough to modify its own architecture as well as human researchers can, its time required to complete a redesign halves with each generation, and it progresses all 30 feasible generations in six years (right).<ref name="yudkowsky-global-risk">[[Eliezer Yudkowsky]]. "Artificial intelligence as a positive and negative factor in global risk." Global catastrophic risks (2008).</ref>]]
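The figure's two timelines reduce to simple arithmetic: a constant three-year redesign cycle over 30 generations takes 90 years, while a cycle that halves each generation is a geometric series summing to just under six years. A minimal sketch, using only the figure's own assumptions (a three-year first step and 30 feasible generations):

<syntaxhighlight lang="python">
# Arithmetic behind the figure's two timelines. Assumptions taken from
# the figure itself: 30 feasible design generations, and a first
# redesign that takes 3 years in both scenarios.

GENERATIONS = 30
FIRST_STEP_YEARS = 3.0

# Left panel: human researchers need a constant 3 years per redesign.
human_driven = GENERATIONS * FIRST_STEP_YEARS

# Right panel: the AI redesigns itself, halving the redesign time each
# generation -- the geometric series 3 + 1.5 + 0.75 + ... (30 terms).
ai_driven = sum(FIRST_STEP_YEARS * 0.5 ** g for g in range(GENERATIONS))

print(f"human-driven redesign: {human_driven:.0f} years")  # 90 years
print(f"self-improving AI:     {ai_driven:.2f} years")     # ~6 years
</syntaxhighlight>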