If a superhuman intelligence were to be invented—either through the [[Intelligence amplification|amplification of human intelligence]] or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of. Such an AI is referred to as '''Seed AI'''<ref name="Yampolskiy, Roman V 2015">Yampolskiy, Roman V. "Analysis of types of self-improving software." Artificial General Intelligence. Springer International Publishing, 2015. 384-393.</ref><ref name="ReferenceA">[[Eliezer Yudkowsky]]. General Intelligence and Seed AI-Creating Complete Minds Capable of Open-Ended Self-Improvement, 2001</ref> because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware or design an even more capable machine. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI [[Superintelligence|would far surpass human cognitive abilities]].
 
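A minimal illustrative sketch of how such iterations could accelerate (a toy model assumed here for exposition, not taken from the cited sources): let <math>I(t)</math> denote the machine's capability and suppose each improvement cycle raises <math>I</math> at a rate that itself grows with <math>I</math>, for example <math>dI/dt = c\,I^{\alpha}</math> with constants <math>c > 0</math> and <math>\alpha</math>. When <math>\alpha = 1</math> this gives only ordinary exponential growth, but when <math>\alpha > 1</math> the solution

:<math>I(t) = \left[ I_0^{\,1-\alpha} - c(\alpha - 1)\,t \right]^{\frac{1}{1-\alpha}}</math>

diverges at the finite time <math>t^{*} = I_0^{\,1-\alpha} / \bigl( c(\alpha - 1) \bigr)</math>; in practice, the physical and computational limits mentioned above would cap <math>I</math> well before any such divergence.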
    
An intelligence explosion is a possible outcome of humanity building artificial general intelligence (AGI). AGI would be capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence (ASI), the limits of which are unknown, shortly after the technological singularity is achieved.
 
==Intelligence explosion==
 