Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a five-minute hard takeoff, but speculates that a takeoff from human to superhuman level on the order of five years is plausible; he calls this scenario a "semihard takeoff".

=== Algorithm improvements ===
Some intelligence technologies, like "seed AI",<ref name="Yampolskiy, Roman V 2015"/><ref name="ReferenceA"/> may also have the potential to not just make themselves faster, but also more efficient, by modifying their [[source code]]. These improvements would make further improvements possible, which would make further improvements possible, and so on.
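To make the feedback loop concrete, here is a minimal toy model (an illustration constructed for this discussion, not taken from the cited sources): each round's capability gain is assumed proportional to current capability, so every improvement enlarges the next one, and growth compounds instead of staying linear.

<syntaxhighlight lang="python">
# Toy model of recursive self-improvement (illustrative assumptions only).
# "efficiency" is a hypothetical parameter: the fraction of current
# capability that each round of self-modification converts into new gains.

def recursive_self_improvement(capability=1.0, rounds=10, efficiency=0.1):
    """Each round's gain is proportional to current capability,
    so improvements compound: improvements enable further improvements."""
    for _ in range(rounds):
        capability += efficiency * capability  # smarter system, bigger next step
    return capability

# External, non-recursive improvement adds a fixed increment each round.
linear = 1.0 + 0.1 * 10
print(f"linear improvement after 10 rounds:    {linear:.2f}")  # 2.00
print(f"recursive improvement after 10 rounds: "
      f"{recursive_self_improvement():.2f}")                   # 1.1**10 ≈ 2.59
</syntaxhighlight>

Over ten rounds the difference is modest, but one curve is exponential and the other linear, so the gap between them grows without bound; this is the sense in which further improvements "make further improvements possible".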
    
Max More disagrees, arguing that if there were only a few superfast human-level AIs, they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."

The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately.{{citation needed|date=July 2017}} An AI rewriting its own source code could do so while contained in an [[AI box]].
    
In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age. Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after synthesizing viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.

Second, as with [[Vernor Vinge]]’s conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. [[Eliezer Yudkowsky]] compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again.<ref name="yudkowsky">{{cite web|author=Eliezer S. Yudkowsky |url=http://yudkowsky.net/singularity/power |title=Power of Intelligence |publisher=Yudkowsky |accessdate=2011-09-09}}</ref>
    
K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation.
    
There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might not be invariant under self-improvement, potentially causing the AI to optimise for something other than what was originally intended.<ref name="selfawaresystems">[http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008 ]</ref><ref name="kurzweilai">{{cite web|url=http://www.kurzweilai.net/artificial-general-intelligence-now-is-the-time |title=Artificial General Intelligence: Now Is the Time |publisher=KurzweilAI |accessdate=2011-09-09}}</ref> Secondly, AIs could compete for the same scarce resources mankind uses to survive.<ref name="selfawaresystems.com">[http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/ Omohundro, Stephen M., "The Nature of Self-Improving Artificial Intelligence." Self-Aware Systems. 21 Jan. 2008. Web. 07 Jan. 2010.]</ref><ref>{{cite book|last1=Barrat|first1=James|title=Our Final Invention|year=2013|publisher=St. Martin's Press|location=New York|isbn=978-0312622374|pages=78–98|edition=First|chapter=6, "Four Basic Drives"|title-link=Our Final Invention}}</ref>
    
According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.

While not actively malicious, there is no reason to think that AIs would actively promote human goals unless they could be programmed as such; if not, they might use the resources currently used to support mankind to promote their own goals, causing human extinction.<ref name="kurzweilai.net">{{cite web|url=http://www.kurzweilai.net/max-more-and-ray-kurzweil-on-the-singularity-2 |title=Max More and Ray Kurzweil on the Singularity |publisher=KurzweilAI |accessdate=2011-09-09}}</ref><ref name="ReferenceB">{{cite web|url=http://singinst.org/riskintro/index.html |title=Concise Summary &#124; Singularity Institute for Artificial Intelligence |publisher=Singinst.org |accessdate=2011-09-09}}</ref><ref name="nickbostrom7">[http://www.nickbostrom.com/fut/evolution.html Bostrom, Nick, The Future of Human Evolution, Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy, pp. 339–371, 2004, Ria University Press.]</ref>
    
Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious".

[[Carl Shulman]] and [[Anders Sandberg]] suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.<ref name=ShulmanSandberg2010>{{cite journal|last=Shulman|first=Carl|author2=Anders Sandberg |title=Implications of a Software-Limited Singularity|journal=ECAP10: VIII European Conference on Computing and Philosophy|year=2010|url=http://intelligence.org/files/SoftwareLimited.pdf|accessdate=17 May 2014|editor1-first=Klaus|editor1-last=Mainzer}}</ref> An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang."<ref name=MuehlhauserSalamon2012>{{cite book|last=Muehlhauser|first=Luke|title=Singularity Hypotheses: A Scientific and Philosophical Assessment|year=2012|publisher=Springer|chapter-url=http://intelligence.org/files/IE-EI.pdf|author2=Anna Salamon |editor=Amnon Eden |editor2=Johnny Søraker |editor3=James H. Moor |editor4=Eric Steinhart|chapter=Intelligence Explosion: Evidence and Import}}</ref>
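A rough numerical sketch of the "computing overhang" idea follows (every figure is invented for illustration and is not drawn from Shulman and Sandberg's paper): if available compute compounds yearly while the software milestone arrives late, the hardware surplus accumulated in the meantime determines how many human-level AI instances could be launched the moment the software exists.

<syntaxhighlight lang="python">
# Illustrative sketch of "computing overhang"; every number here is an
# assumption made up for the example, not an empirical estimate.

HARDWARE_GROWTH = 1.4    # hypothetical yearly multiplier in available compute
AI_COMPUTE_COST = 100.0  # hypothetical compute units to run one human-level AI

def instances_at_takeoff(years_until_software, initial_compute=100.0):
    """Number of AI instances the accumulated hardware could host on the
    day human-level software is finally developed."""
    available = initial_compute * HARDWARE_GROWTH ** years_until_software
    return available / AI_COMPUTE_COST

for lag in (0, 5, 10, 20):
    print(f"software ready after {lag:2d} years -> "
          f"{instances_at_takeoff(lag):7.1f} instances immediately")
</syntaxhighlight>

The longer software lags behind hardware, the larger the overhang, which is why a software-limited singularity could be more abrupt than a hardware-limited one: instead of a single human-level AI appearing on barely adequate hardware, many copies could run at once, each on hardware far faster than required.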
    
===Criticisms===