There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might not be invariant under self-improvement, potentially causing the AI to optimise for something other than what was originally intended.<ref name="selfawaresystems">[http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008 ]</ref><ref name="kurzweilai">{{cite web|url=http://www.kurzweilai.net/artificial-general-intelligence-now-is-the-time |title=Artificial General Intelligence: Now Is the Time |publisher=KurzweilAI |accessdate=2011-09-09}}</ref> Second, AIs could compete for the same scarce resources humankind uses to survive.<ref name="selfawaresystems.com">[http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/ Omohundro, Stephen M., "The Nature of Self-Improving Artificial Intelligence." Self-Aware Systems. 21 Jan. 2008. Web. 07 Jan. 2010.]</ref><ref>{{cite book|last1=Barrat|first1=James|title=Our Final Invention|year=2013|publisher=St. Martin's Press|location=New York|isbn=978-0312622374|pages=78–98|edition=First|chapter=6, "Four Basic Drives"|title-link=Our Final Invention}}</ref>
 
While not actively malicious, there is no reason to think that AIs would actively promote human goals unless they could be programmed as such; if they are not, they might use the resources currently used to support humankind to promote their own goals, causing human extinction.<ref name="kurzweilai.net">{{cite web|url=http://www.kurzweilai.net/max-more-and-ray-kurzweil-on-the-singularity-2 |title=Max More and Ray Kurzweil on the Singularity |publisher=KurzweilAI |accessdate=2011-09-09}}</ref><ref name="ReferenceB">{{cite web|url=http://singinst.org/riskintro/index.html |title=Concise Summary &#124; Singularity Institute for Artificial Intelligence |publisher=Singinst.org |accessdate=2011-09-09}}</ref><ref name="nickbostrom7">[http://www.nickbostrom.com/fut/evolution.html Bostrom, Nick, The Future of Human Evolution, Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy, pp. 339–371, 2004, Ria University Press.]</ref>
 
[[Carl Shulman]] and [[Anders Sandberg]] suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.<ref name=ShulmanSandberg2010>{{cite journal|last=Shulman|first=Carl|author2=Anders Sandberg |title=Implications of a Software-Limited Singularity|journal=ECAP10: VIII European Conference on Computing and Philosophy|year=2010|url=http://intelligence.org/files/SoftwareLimited.pdf|accessdate=17 May 2014|editor1-first=Klaus|editor1-last=Mainzer}}</ref> An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang."<ref name="MuehlhauserSalamon2012">{{cite book|last=Muehlhauser|first=Luke|title=Singularity Hypotheses: A Scientific and Philosophical Assessment|year=2012|publisher=Springer|chapter-url=http://intelligence.org/files/IE-EI.pdf|author2=Anna Salamon |editor=Amnon Eden |editor2=Johnny Søraker |editor3=James H. Moor |editor4=Eric Steinhart|chapter=Intelligence Explosion: Evidence and Import}}</ref>
 
===Crisis===