An intelligence explosion is a possible outcome of building <font color = "#ff8000">artificial general intelligence (AGI)</font>. Shortly after the technological singularity is reached, an AGI would be capable of recursive self-improvement, leading to the rapid emergence of <font color = "#ff8000">artificial superintelligence (ASI)</font>, whose limits are unknown.
I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. He speculated on the effects of superhuman machines, should they ever be invented: for example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in about 30 physical seconds.
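Good's figure can be checked with a line of arithmetic: a subjective year compressed by a factor of one million lasts roughly 30 wall-clock seconds. A minimal sketch of that calculation:

```python
# Subjective time under a hypothetical million-fold processing speedup.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds
SPEEDUP = 1_000_000                    # Good's assumed speed factor

physical_seconds = SECONDS_PER_YEAR / SPEEDUP
print(f"One subjective year passes in ~{physical_seconds:.1f} physical seconds")
# ~31.6 seconds, consistent with Good's rough "30 seconds"
```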
{{Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.}}
Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.
Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (even more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.<ref name="stat"/>
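The recursive self-improvement loop Good describes can be caricatured as a toy simulation. Everything here is an illustrative assumption: the starting capability, the fixed improvement factor per design generation, and a hard ceiling standing in for the limits imposed by physics or theoretical computation.

```python
# Toy model of recursive self-improvement: each generation of machine
# designs a successor a fixed factor more capable, until a hard ceiling
# (standing in for physical/computational limits) stops the loop.
# All numbers are arbitrary and purely illustrative.

def intelligence_explosion(start=1.0, factor=2.0, ceiling=1e6):
    capability = start
    generations = 0
    while capability * factor <= ceiling:
        capability *= factor   # the machine designs a better successor
        generations += 1
    return generations, capability

gens, final = intelligence_explosion()
print(f"{gens} design generations before the ceiling, final capability {final:.0f}x")
```

The point of the sketch is only the shape of the process: growth is geometric until an external limit binds, which is why the qualitative change happens quickly relative to the total time elapsed.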
    
Robin Hanson expressed skepticism of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence have been exhausted, further improvements will become increasingly difficult to find. Despite all of the speculated ways for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among the hypotheses that would advance the singularity.
==Other manifestations==