The first to use the concept of a "singularity" in the technological context was John von Neumann<ref>''The Technological Singularity'' by Murray Shanahan, (MIT Press, 2015), page 233</ref>. Stanislaw Ulam reports a discussion with von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race<ref name="mathematical" /> beyond which human affairs, as we know them, could not continue". Subsequent authors have echoed this view.<ref name="Singularity hypotheses" /><ref name="chalmers">{{Cite journal|last=Chalmers|first=David|date=2010|title=The singularity: a philosophical analysis|url=|journal=Journal of Consciousness Studies|volume=17|issue=9–10|pages=7–65|via=}}</ref>
[[I. J. Good]]'s "intelligence explosion" model predicts that a future superintelligence will trigger a singularity.<ref name="vinge1993">Vinge, Vernor. [http://mindstalk.net/vinge/vinge-sing.html "The Coming Technological Singularity: How to Survive in the Post-Human Era"], in ''Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace'', G. A. Landis, ed., NASA Publication CP-10129, pp. 11–22, 1993.</ref>
The concept and the term "singularity" were popularized by Vernor Vinge in his 1993 essay ''The Coming Technological Singularity'', in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if the singularity occurred before 2005 or after 2030.<ref name="vinge1993" />
Public figures such as [[Stephen Hawking]] and [[Elon Musk]] have expressed concern that full [[artificial intelligence]] (AI) could result in human extinction.<ref>{{cite news|last1=Sparkes|first1=Matthew|title=Top scientists call for caution over artificial intelligence|url=https://www.telegraph.co.uk/technology/news/11342200/Top-scientists-call-for-caution-over-artificial-intelligence.html|accessdate=24 April 2015|work=[[The Daily Telegraph|The Telegraph (UK)]]|date=13 January 2015}}</ref><ref>{{cite web|url=https://www.bbc.com/news/technology-30290540|title=Hawking: AI could end human race|date=2 December 2014|publisher=BBC|accessdate=11 November 2017}}</ref> The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.
Four polls of AI researchers, conducted in 2012 and 2013 by [[Nick Bostrom]] and [[Vincent C. Müller]], suggested a median probability estimate of 50% that [[artificial general intelligence]] (AGI) would be developed by 2040–2050.<ref name="newyorker">{{cite news|last1=Khatchadourian|first1=Raffi|title=The Doomsday Invention|url=https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=31 January 2018|work=The New Yorker|date=16 November 2015}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). "Future progress in artificial intelligence: A survey of expert opinion". In V. C. Müller (ed): ''Fundamental issues of artificial intelligence'' (pp. 555–572). Springer, Berlin. http://philpapers.org/rec/MLLFPI</ref>
==Background==