
{{Short description|Hypothetical human-level or stronger AI}}

{{Use British English|date = March 2019}}

{{Use dmy dates|date=December 2019}}

{{Artificial intelligence}}

'''Artificial general intelligence''' ('''AGI''') is the hypothetical<ref>{{cite news |title=DeepMind and Google: the battle to control artificial intelligence |url=https://www.economist.com/news/2019/04/05/deepmind-and-google-the-battle-to-control-artificial-intelligence |accessdate=15 March 2020 |work=[[The Economist]] ([[1843 (magazine)]]) |date=2019 |quote=AGI stands for Artificial General Intelligence, a hypothetical computer program...}}</ref> intelligence of a machine that has the capacity to understand or learn any intellectual task that a [[human being]] can. It is a primary goal of some [[artificial intelligence]] research and a common topic in [[science fiction]] and [[futures studies]]. AGI can also be referred to as '''strong AI''',<ref>Kurzweil, ''Singularity'' (2005) p. 260</ref><ref name="Kurzweil 2005-08-05">{{Citation|url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |first=Ray |last=Kurzweil |date=5 August 2005 |magazine=[[Forbes]] |title=Long Live AI}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref><ref>{{Citation|url=https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |title=Advanced Human Intelligence |first=Mike |last=Treder |work=Responsible Nanotechnology |date=10 August 2005 |archive-url=https://web.archive.org/web/20191016214415/https://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |archive-date=16 October 2019 |url-status=live}}</ref> '''full AI''',<ref>{{Cite web |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |access-date=22 February 2014 |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |url-status=live }}</ref> or '''general intelligent action'''.{{sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the [[physical symbol system]] hypothesis.}}

Some academic sources reserve the term "strong AI" for machines that can experience [[Chinese room#Strong AI|consciousness]].{{sfn|Searle|1980|ps=, See below for the origin of the term "strong AI", and see the academic definition of "[[Chinese room#Strong AI|strong AI]]" in the article [[Chinese room]].}} Today's AI is speculated to be many years, if not decades, away from AGI.<ref>[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref>{{Cite journal|last=Grace|first=Katja|last2=Salvatier|first2=John|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain|date=2018-07-31|title=Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts|journal=Journal of Artificial Intelligence Research|volume=62|pages=729–754|doi=10.1613/jair.1.11222|issn=1076-9757}}</ref>




Some authorities emphasize a distinction between ''strong AI'' and ''applied AI'',<ref>Encyclopædia Britannica [http://www.britannica.com/eb/article-219086/artificial-intelligence Strong AI, applied AI, and cognitive simulation] {{Webarchive|url=https://web.archive.org/web/20071015054758/http://www.britannica.com/eb/article-219086/artificial-intelligence |date=15 October 2007 }} or Jack Copeland [http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html What is artificial intelligence?] {{Webarchive|url=https://web.archive.org/web/20070818125256/http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html |date=18 August 2007 }} on AlanTuring.net</ref> also called ''narrow AI''<ref name = "Kurzweil 2005-08-05"/> or ''[[weak AI]]''.<ref>{{Cite web |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |title=The Open University on Strong and Weak AI |access-date=8 October 2007 |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |url-status=live }} {{Dead link |date=October 2019}}</ref> In contrast to strong AI, weak AI is not intended to perform human [[cognitive]] abilities. Rather, weak AI is limited to the use of software to study or accomplish specific [[problem solving]] or [[reason]]ing tasks.




As of 2017, over forty organizations are researching AGI.<ref name=baum/>




==Requirements==

{{main|Cognitive science}}



Various criteria for [[intelligence]] have been proposed (most famously the [[Turing test]]) but to date, there is no definition that satisfies everyone.<ref>AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." {{cite web| url=http://www-formal.stanford.edu/jmc/whatisai/node1.html| title=Basic Questions| last=McCarthy| first=John| authorlink=John McCarthy (computer scientist)| publisher=[[Stanford University]]| year=2007| access-date=6 December 2007| archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html| archive-date=26 October 2007| url-status=live}} (For a discussion of some definitions of intelligence used by [[artificial intelligence]] researchers, see [[philosophy of artificial intelligence]].)</ref> However, there ''is'' wide agreement among artificial intelligence researchers that intelligence is required to do the following:<ref>This list of intelligent traits is based on the topics covered by major AI textbooks, including {{Harvnb|Russell|Norvig|2003}}, {{Harvnb|Luger|Stubblefield|2004}}, {{Harvnb|Poole|Mackworth|Goebel|1998}} and {{Harvnb|Nilsson|1998}}.</ref>

* [[automated reasoning|reason]], use strategy, solve puzzles, and make judgments under [[uncertainty]];

* [[knowledge representation|represent knowledge]], including [[Commonsense knowledge base|commonsense knowledge]];

* [[automated planning and scheduling|plan]];

* [[machine learning|learn]];

* communicate in [[natural language processing|natural language]];

* and [[Artificial intelligence systems integration|integrate all these skills]] towards common goals.



Other important capabilities include the ability to [[machine perception|sense]] (e.g. [[computer vision|see]]) and the ability to act (e.g. [[robotics|move and manipulate objects]]) in the world where intelligent behaviour is to be observed.<ref>Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-262-16239-3}}</ref> This would include an ability to detect and respond to [[hazard]].<ref>{{cite journal | last1 = White | first1 = R. W. | year = 1959 | title = Motivation reconsidered: The concept of competence | journal = Psychological Review | volume = 66 | issue = 5| pages = 297–333 | doi=10.1037/h0040934| pmid = 13844397 }}</ref> Many interdisciplinary approaches to intelligence (e.g. [[cognitive science]], [[computational intelligence]] and [[decision making]]) tend to emphasise the need to consider additional traits such as [[imagination]] (taken as the ability to form mental images and concepts that were not programmed in)<ref>{{Harvnb|Johnson|1987}}</ref> and [[Self-determination theory|autonomy]].<ref>deCharms, R. (1968). Personal causation. New York: Academic Press.</ref>


Computer based systems that exhibit many of these capabilities do exist (e.g. see [[computational creativity]], [[automated reasoning]], [[decision support system]], [[robot]], [[evolutionary computation]], [[intelligent agent]]), but not yet at human levels.




===Tests for confirming human-level AGI{{anchor|Tests_for_confirming_human-level_AGI}}===

The following tests to confirm human-level AGI have been considered:<ref>{{cite web|last=Muehlhauser|first=Luke|title=What is AGI?|url=http://intelligence.org/2013/08/11/what-is-agi/|publisher=Machine Intelligence Research Institute|accessdate=1 May 2014|date=11 August 2013|archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/|archive-date=25 April 2014|url-status=live}}</ref><ref>{{Cite web|url=https://www.talkyblog.com/artificial_general_intelligence_agi/|title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence|date=13 July 2019|website=Talky Blog|language=en-US|access-date=17 July 2019|archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/|archive-date=17 July 2019|url-status=live}}</ref>


;[[Turing test|The Turing Test]] ([[Alan Turing|''Turing'']])


: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.

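The imitation-game protocol can be sketched as a toy simulation. All class and function names below are hypothetical, chosen only to illustrate the structure of the test: hidden labels, blind questioning, and a judge who must name the machine.

```python
import random

class Scripted:
    """A conversational participant that always gives the same reply."""
    def __init__(self, reply):
        self.reply = reply
    def answer(self, question):
        return self.reply

class NaiveJudge:
    """A judge with a crude heuristic: the terser answerer is the machine."""
    def ask(self):
        return "What did you have for breakfast?"
    def identify_machine(self, transcript):
        _, answers = transcript[-1]
        return min(answers, key=lambda label: len(answers[label]))

def turing_test(machine, human, judge, rounds=3):
    """Run the imitation game: the judge converses sight unseen with both
    participants and must say which label hides the machine.
    Returns True if the machine is caught (i.e. fails the test)."""
    players = {"A": machine, "B": human}
    if random.random() < 0.5:  # hide identities behind random labels
        players = {"A": human, "B": machine}
    transcript = []
    for _ in range(rounds):
        question = judge.ask()
        transcript.append((question, {label: p.answer(question)
                                      for label, p in players.items()}))
    return players[judge.identify_machine(transcript)] is machine

caught = turing_test(Scripted("42."),
                     Scripted("Toast and far too much coffee."),
                     NaiveJudge())
```

Note that Turing's criterion is purely behavioural: the machine passes when the judge can do no better than chance, and the naive length heuristic above shows why a superficial judge is easy to fool.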

;The Coffee Test ([[Steve Wozniak|''Wozniak'']])


: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.


;The Robot College Student Test ([[Ben Goertzel|''Goertzel'']])


: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.


;The Employment Test ([[Nils John Nilsson|''Nilsson'']])


: A machine works an economically important job, performing at least as well as humans in the same job.




=== Problems requiring AGI to solve ===

{{Main|AI-complete}}



The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">Shapiro, Stuart C. (1992). [http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] {{Webarchive|url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |date=1 February 2016 }} In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'' (Second Edition, pp.&nbsp;54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref>




AI-complete problems are hypothesised to include general [[computer vision]], [[natural language understanding]], and dealing with unexpected circumstances while solving any real world problem.<ref>Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf {{Webarchive|url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |date=22 May 2013 }}</ref>




AI-complete problems cannot be solved with current computer technology alone, and also require [[human computation]]. This property could be useful, for example, to test for the presence of humans, as [[CAPTCHA]]s aim to do; and for [[computer security]] to repel [[brute-force attack]]s.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. [http://www.captcha.net/captcha_crypt.pdf CAPTCHA: Using Hard AI Problems for Security] {{Webarchive|url=https://web.archive.org/web/20160304001102/http://www.captcha.net/captcha_crypt.pdf |date=4 March 2016 }}. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>{{cite journal | first = Richard | last = Bergmair | title = Natural Language Steganography and an "AI-complete" Security Primitive | citeseerx = 10.1.1.105.129 | date = 7 January 2006 }} (unpublished?)</ref>

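The security use of an AI-complete problem can be sketched as a gating protocol. In this illustrative sketch (all names hypothetical), the challenge is a plain token standing in for what a real CAPTCHA would render as an AI-hard perception task, such as distorted text; it is that rendering, not the protocol, that makes each automated guess expensive.

```python
import random
import string

def make_challenge():
    """Create a fresh challenge token. A real CAPTCHA would present this
    token as distorted text or imagery that current machine perception
    handles poorly but humans read easily."""
    return "".join(random.choices(string.ascii_lowercase, k=6))

def guarded_login(password_guess, challenge, response, real_password):
    """Refuse to even check the password unless the challenge is solved,
    so a brute-force attacker must solve one (putatively AI-hard)
    problem per guess."""
    if response != challenge:
        return "challenge failed"
    return "ok" if password_guess == real_password else "wrong password"
```

For example, `guarded_login("hunter2", c, c, "hunter2")` succeeds only because the challenge response matches; an automated attacker that cannot read the rendered challenge never reaches the password check at all.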



== History ==

=== Classical AI ===

{{Main|History of artificial intelligence}}

Modern AI research began in the mid 1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |title=Scientist on the Set: An Interview with Marvin Minsky |access-date=5 April 2008 |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |url-status=live }}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved,"<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> although Minsky states that he was misquoted.{{Citation needed|date=June 2011}}




However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".<ref>The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England. ({{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}) In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research". See {{Harv|NRC|1999}} under "Shift to Applied Research Increases Investment". See also {{Harv|Crevier|1993|pp=115–117}} and {{Harv|Russell|Norvig|2003|pp=21–22}}</ref> As the 1980s began, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{harvnb|Crevier|1993|p=211}}, {{harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money back into the field.<ref>{{Harvnb|Crevier| 1993|pp=161–162,197–203,240}}; {{harvnb|Russell|Norvig|2003|p=25}}; {{harvnb|NRC|1999|loc=under "Shift to Applied Research Increases Investment"}}</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. 
They became reluctant to make predictions at all<ref>As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." {{cite web | url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | title=Reply to Lighthill | last=McCarthy | first=John | authorlink=John McCarthy (computer scientist) | publisher=Stanford University | year=2000 | access-date=29 September 2007 | archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html | archive-date=30 September 2008 | url-status=live }}</ref> and to avoid any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."<ref>"At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."{{cite news |first=John |last=Markoff |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5/NUs1cQCQ |work=The New York Times|date=14 October 2005}}</ref>




=== Narrow AI research ===

{{Main|Artificial intelligence}}



In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where they can produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development on this field is considered an emerging trend, and a mature stage is expected to happen in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref>




Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the [[Commonsense knowledge base|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts."<ref>{{Harvnb|Moravec|1988|p=20}}</ref></blockquote>




However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the [[Symbol grounding problem|Symbol Grounding Hypothesis]] by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."<ref>{{cite journal | last1 = Harnad | first1 = S | year = 1990 | title = The Symbol Grounding Problem | journal = Physica D | volume = 42 | issue = 1–3| pages = 335–346 | doi=10.1016/0167-2789(90)90087-6| bibcode = 1990PhyD...42..335H | arxiv = cs/9906002}}</ref></blockquote>




===Modern artificial general intelligence research===

The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}




However, most mainstream AI researchers doubt that progress will be this rapid.{{citation_needed|date=January 2017}} Organizations explicitly pursuing AGI include the Swiss AI lab [[IDSIA]],{{citation needed|date=December 2017}} Nnaisense,<ref>{{cite news|last1=Markoff|first1=John|title=When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'|url=https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|accessdate=26 December 2017|work=The New York Times|date=27 November 2016|archive-url=https://web.archive.org/web/20171226234555/https://www.nytimes.com/2016/11/27/technology/artificial-intelligence-pioneer-jurgen-schmidhuber-overlooked.html|archive-date=26 December 2017|url-status=live}}</ref> [[Vicarious (company)|Vicarious]],<!--<ref name=baum/>--> [[Maluuba]],<ref name=baum/> the [[OpenCog|OpenCog Foundation]], Adaptive AI, [[LIDA (cognitive architecture)|LIDA]], and [[Numenta]] and the associated [[Redwood Neuroscience Institute]].<ref>{{cite book|author1=James Barrat|title=Our Final Invention: Artificial Intelligence and the End of the Human Era|date=2013|publisher=St. 
Martin's Press|location=New York|isbn=9780312622374|edition=First|chapter=Chapter 11: A Hard Takeoff|title-link=Our Final Invention|author1-link=James Barrat}}</ref> In addition, organizations such as the [[Machine Intelligence Research Institute]]<ref>{{cite web|title=About the Machine Intelligence Research Institute|url=https://intelligence.org/about/|website=Machine Intelligence Research Institute|accessdate=26 December 2017|archive-url=https://web.archive.org/web/20180121025925/https://intelligence.org/about/|archive-date=21 January 2018|url-status=live}}</ref> and [[OpenAI]]<ref>{{cite news|title=About OpenAI|url=https://openai.com/about/|accessdate=26 December 2017|work=[[OpenAI]]|language=en-us|archive-url=https://web.archive.org/web/20171222181056/https://openai.com/about/|archive-date=22 December 2017|url-status=live}}</ref> have been founded to influence the development path of AGI. Finally, projects such as the [[Human Brain Project]]<ref>{{cite news|last1=Theil|first1=Stefan|title=Trouble in Mind|url=https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|accessdate=26 December 2017|work=Scientific American|pages=36–42|language=en|doi=10.1038/scientificamerican1015-36|bibcode=2015SciAm.313d..36T|archive-url=https://web.archive.org/web/20171109234151/https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/|archive-date=9 November 2017|url-status=live}}</ref> have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being [[DeepMind]], the Human Brain Project, and [[OpenAI]].<ref name=baum>{{cite journal|title=Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). 
Global Catastrophic Risk Institute Working Paper 17-1|url=https://ssrn.com/abstract=3070741|date=12 November 2017|last1=Baum|first1=Seth}}</ref>




In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI, Apple's Siri and others. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult scores about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{cite journal|title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence|journal=Annals of Data Science|volume=4|issue=2|pages=179–191|arxiv=1709.10242|doi=10.1007/s40745-017-0109-0|year=2017|last1=Liu|first1=Feng|last2=Shi|first2=Yong|last3=Liu|first3=Ying|bibcode=2017arXiv170910242L}}</ref><ref>{{cite web|title=Google-KI doppelt so schlau wie Siri|url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003|accessdate=2 January 2019|archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/|archive-date=3 January 2019|url-status=live}}</ref>




In 2019, video game programmer and aerospace engineer [[John Carmack]] announced plans to research AGI.<ref name="lawler">{{cite web |url=https://www.engadget.com/2019-11-13-john-carmack-agi.html |title=John Carmack takes a step back at Oculus to work on human-like AI |date=November 13, 2019 |first=Richard |last=Lawler |accessdate=April 4, 2020 |publisher=[[Engadget]]}}</ref>




==Processing power needed to simulate a brain==



===Whole brain emulation===

{{main|Mind uploading}}

A popularly discussed approach to achieving general intelligent action is [[whole brain emulation]]. A low-level brain model is built by [[brain scanning|scanning]] and [[Brain mapping|mapping]] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a [[computer simulation|simulation]] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap>


{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in [[computational neuroscience]] and [[neuroinformatics]], in the context of [[brain simulation]] for medical research purposes. It is discussed in [[artificial intelligence]] research{{sfn|Goertzel|2007}} as an approach to strong AI. [[Neuroimaging]] technologies that could deliver the necessary detailed understanding are improving rapidly, and [[futurist]] Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.




===Early estimates===

[[File:Estimations of Human Brain Emulation Required Performance.svg|thumb|right|400px|Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and [[Anders Sandberg]] and [[Nick Bostrom]]), along with the fastest supercomputer from [[TOP500]] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where [[consciousness]] arises.{{sfn|Sandberg|Boström|2008}}]] For low-level brain simulation, an extremely powerful computer would be required. The [[human brain]] has a huge number of [[synapses]]. Each of the 10<sup>11</sup> (one hundred billion) [[neurons]] has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second ([[SUPS]]).{{sfn|Russell|Norvig|2003}} In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. 
More recently, in 1997, {{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }} Moravec argued for 10<sup>8</sup> MIPS, which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> (For comparison, if a "computation" were equivalent to one "[[FLOPS|floating point operation]]" – a measure used to rate current [[supercomputer]]s – then 10<sup>16</sup> "computations" would be equivalent to 10 [[Peta-|petaFLOPS]], [[FLOPS#Performance records|achieved in 2011]].) He used this figure to predict that the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
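As a rough cross-check, the estimates above can be reproduced with a few lines of arithmetic. This is an illustrative Python sketch using only the figures quoted in this section; the 1.1-year doubling time is the trendline assumption from the figure caption, not a measured constant:

```python
import math

# Figures quoted above (Kurzweil, Moravec, and the synapse estimates);
# this is illustrative arithmetic, not a new measurement.
neurons = 1e11                  # ~10^11 neurons in the human brain
synapses_per_neuron = 7_000     # average synaptic connections per neuron
print(f"synapses ~ {neurons * synapses_per_neuron:.1e}")   # ~7.0e+14

kurzweil_cps = 1e16             # Kurzweil's adopted figure (computations/s)
print(f"~ {kurzweil_cps / 1e15:.0f} petaFLOPS")            # 10 petaFLOPS

# Trendline assumption from the figure caption: capacity doubles every 1.1 years.
def years_to_reach(base_cps, target_cps, doubling_time=1.1):
    """Years for hardware at base_cps to reach target_cps under that assumption."""
    return doubling_time * math.log2(target_cps / base_cps)

# e.g. from Moravec's 10^14 cps estimate up to Kurzweil's 10^16 cps:
print(f"{years_to_reach(1e14, 1e16):.1f} years")           # ~7.3 years
```

Note that 10<sup>11</sup> neurons × 7,000 synapses each gives about 7×10<sup>14</sup> synapses, at the upper end of the 10<sup>14</sup> to 5×10<sup>14</sup> adult range quoted above.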




===Modelling the neurons in more detail===

The [[artificial neuron]] model assumed by Kurzweil and used in many current [[artificial neural network]] implementations is simple compared with [[biological neuron model|biological neurons]]. A brain simulation would likely have to capture the detailed cellular behaviour of biological [[neurons]], presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition, the estimates do not account for [[glial cells]], which are at least as numerous as neurons, may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62|access-date=24 January 2014|archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62|archive-date=8 February 2014|url-status=live}}</ref>




===Current research===

There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The [[Artificial Intelligence System]] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref>{{cite journal |last=Izhikevich |first=Eugene M. |last2=Edelman |first2=Gerald M. |date=4 March 2008 |title=Large-scale model of mammalian thalamocortical systems |url=http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |journal=PNAS |volume=105 |issue=9 |pages=3593–3598 |doi= 10.1073/pnas.0712231105|access-date=23 June 2015 |archive-url=https://web.archive.org/web/20090612095651/http://vesicle.nsi.edu/users/izhikevich/publications/large-scale_model_of_human_brain.pdf |archive-date=12 June 2009 |pmid=18292226 |pmc=2265160|bibcode=2008PNAS..105.3593I }}</ref> The [[Blue Brain]] project used one of the fastest supercomputer architectures in the world, [[IBM]]'s [[Blue Gene]] platform, to create a real time simulation of a single rat [[Neocortex|neocortical column]] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=11 August 2008}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," [[Henry Markram]], director of the Blue Brain Project said in 2009 at the [[TED (conference)|TED conference]] in Oxford.<ref>{{Cite news |url=http://news.bbc.co.uk/1/hi/technology/8164060.stm |title=Artificial brain '10 years away' 2009 BBC news |date=22 July 2009 |access-date=25 July 2009 
|archive-url=https://web.archive.org/web/20170726040959/http://news.bbc.co.uk/1/hi/technology/8164060.stm |archive-date=26 July 2017 |url-status=live }}</ref> There have also been controversial claims to have simulated a [[cat intelligence#Computer simulation of the cat brain|cat brain]]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>[http://gauntlet.ucalgary.ca/story/10343 University of Calgary news] {{Webarchive|url=https://web.archive.org/web/20090818081044/http://gauntlet.ucalgary.ca/story/10343 |date=18 August 2009 }}, [http://www.nbcnews.com/id/12037941 NBC News news] {{Webarchive|url=https://web.archive.org/web/20170704063922/http://www.nbcnews.com/id/12037941/ |date=4 July 2017 }}</ref>
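The 2005 result quoted above implies a very large real-time slowdown; the factor is easy to work out (an illustrative Python sketch, using only the numbers in the text):

```python
# Artificial Intelligence System project, 2005: 50 days of wall-clock time
# on a 27-processor cluster per 1 second of simulated "brain" activity.
seconds_simulated = 1
wall_clock_seconds = 50 * 24 * 3600             # 50 days in seconds
slowdown = wall_clock_seconds / seconds_simulated
print(f"slowdown: {slowdown:,.0f}x real time")  # 4,320,000x real time
```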




[[Hans Moravec]] addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?".<ref>{{cite web|url=http://www.transhumanist.com/volume1/moravec.htm |title=Archived copy |accessdate=23 June 2006 |url-status=dead |archiveurl=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archivedate=15 June 2006 }}</ref> He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.




The actual complexity of modeling biological neurons has been explored in the [[OpenWorm|OpenWorm project]], which aims at a complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work. Currently, efforts are focused on precise emulation of biological neurons (partly on the molecular level), but the result cannot yet be called a total success. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.




===Criticisms of simulation-based approaches===

A fundamental criticism of the simulated brain approach derives from [[embodied cognition]] where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like in ''[[Second Life]]''), but it is not yet known whether this would be sufficient.




Desktop computers using microprocessors capable of more than 10<sup>9</sup> cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref>{{Cite web |url=http://www.setiai.com/archives/cat_honey_bee_brain.html |title=some links to bee brain studies |access-date=30 March 2010 |archive-url=https://web.archive.org/web/20111005162232/http://www.setiai.com/archives/cat_honey_bee_brain.html |archive-date=5 October 2011 |url-status=dead }}</ref> no such simulation exists {{Citation needed|date=April 2011}}. There are at least three reasons for this:


#The neuron model seems to be oversimplified (see next section).


#There is insufficient understanding of higher cognitive processes{{refn|In Goertzels' AGI book ([[#CITEREFYudkowsky2006|Yudkowsky 2006]]), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware}} to establish accurately what the brain's neural activity, observed using techniques such as [[Neuroimaging#Functional magnetic resonance imaging|functional magnetic resonance imaging]], correlates with.


#Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.


#The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model. To simulate a bee brain, it may be necessary to simulate the body, and the environment. [[The Extended Mind]] thesis formalizes the philosophical concept, and research into [[Cephalopoda|cephalopods]] has demonstrated clear examples of a decentralized system.<ref>{{cite journal | pmid = 15829594 | doi=10.1152/jn.00684.2004 | volume=94 | issue=2 | title=Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement |date=August 2005 | journal=J. Neurophysiol. | pages=1443–58 | last1 = Yekutieli | first1 = Y | last2 = Sagiv-Zohar | first2 = R | last3 = Aharonov | first3 = R | last4 = Engel | first4 = Y | last5 = Hochner | first5 = B | last6 = Flash | first6 = T}}</ref>




In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}</ref><ref>[http://search.eb.com/eb/article-75525 "nervous system, human."] ''[[Encyclopædia Britannica]]''. 9 January 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the [[cerebral cortex]] and 69 billion in the [[cerebellum]].{{sfn|Azevedo et al.|2009}} [[Glial cell]] synapses are currently unquantified but are known to be extremely numerous.

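The two competing estimates above can be compared with simple arithmetic; a minimal illustrative sketch (the neuron counts are the figures quoted above, and the implied synapses-per-neuron ratio is a back-of-the-envelope derivation, not a figure from the cited sources):

```python
# Estimate A: ~100 billion neurons, ~100 trillion synapses.
estimate_a_neurons = 100e9
estimate_a_synapses = 100e12

# Estimate B (Azevedo et al. 2009): 86 billion neurons in total,
# 16.3 billion in the cerebral cortex and 69 billion in the cerebellum.
estimate_b_neurons = 86e9
cortex_neurons = 16.3e9
cerebellum_neurons = 69e9

# Neurons outside cortex and cerebellum under estimate B:
rest_neurons = estimate_b_neurons - cortex_neurons - cerebellum_neurons

# Average synaptic fan-out implied by estimate A:
synapses_per_neuron = estimate_a_synapses / estimate_a_neurons

print(f"Remaining neurons (estimate B): {rest_neurons:.1e}")       # ~7e8
print(f"Synapses per neuron (estimate A): {synapses_per_neuron:.0f}")  # ~1000
```

Note that glial-cell synapses are excluded from both totals, so any such tally understates the scale a whole-brain simulation would have to capture.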



==Strong AI and consciousness==

{{See also|Philosophy of artificial intelligence|Turing test}}



In 1980, philosopher [[John Searle]] coined the term "strong AI" as part of his [[Chinese room]] argument.<ref>{{Harvnb|Searle|1980}}</ref> He wanted to distinguish between two different hypotheses about artificial intelligence:<ref>As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." {{Harv|Russell|Norvig|2003}}</ref>


* An artificial intelligence system can ''think'' and have a ''mind''. (The word "mind" has a specific meaning for philosophers, as used in "the [[mind body problem]]" or "the [[philosophy of mind]]".)

* An artificial intelligence system can (only) ''act like'' it thinks and has a mind.

The first one is called "the ''strong'' AI hypothesis" and the second is "the ''weak'' AI hypothesis" because the first one makes the ''stronger'' statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks.<ref>For example:


* {{Harvnb|Russell|Norvig|2003}},

* [http://www.encyclopedia.com/doc/1O87-strongAI.html Oxford University Press Dictionary of Psychology] {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html |date=3 December 2007 }} (quoted in "High Beam Encyclopedia"),

* [http://www.aaai.org/AITopics/html/phil.html MIT Encyclopedia of Cognitive Science] {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html |date=19 July 2008 }} (quoted in "AITopics")

* [http://planetmath.org/encyclopedia/StrongAIThesis.html Planet Math] {{Webarchive|url=https://web.archive.org/web/20070919012830/http://planetmath.org/encyclopedia/StrongAIThesis.html |date=19 September 2007 }}

* [http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?] {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm |date=13 May 2008 }} Anthony Tongen</ref>



The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]], "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{sfn|Russell|Norvig|2003|p=947}}




In contrast to Searle, [[Ray Kurzweil]] uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind,<ref name=K/> regardless of whether a philosopher would be able to determine if it ''actually'' has a mind or not.


In science fiction, AGI is associated with traits such as [[consciousness]], [[sentience]], [[sapience]], and [[self-awareness]] observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "[[strong AI hypothesis]]." The strong AI hypothesis is the claim that a computer which behaves as intelligently as a person must also necessarily have a [[mind]] and [[consciousness]]. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.




===Consciousness===

There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in [[science fiction]] and the [[ethics of artificial intelligence]]:


* [[consciousness]]: To have [[qualia|subjective experience]] and [[thought]].<ref>Note that [[consciousness]] is difficult to define. A popular definition, due to [[Thomas Nagel]], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref>

* [[self-awareness]]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.

* [[sentience]]: The ability to "feel" perceptions or emotions subjectively.

* [[sapience]]: The capacity for wisdom.



These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the [[animal rights|rights of non-human animals]]. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI.<ref>{{Cite journal|last=Sotala|first=Kaj|last2=Yampolskiy|first2=Roman V|date=2014-12-19|title=Responses to catastrophic AGI risk: a survey|journal=Physica Scripta|volume=90|issue=1|pages=8|doi=10.1088/0031-8949/90/1/018001|issn=0031-8949|doi-access=free}}</ref>




However, [[Bill Joy]], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | journal=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are [[Necessary and sufficient condition|necessary]] for strong AI. The role of [[consciousness]] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the [[neural correlates of consciousness]], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, [[Emergence|naturally emerge]] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.




===Artificial consciousness research===

{{Main|Artificial consciousness}}



Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers{{sfn|Yudkowsky|2006}} regard research that investigates possibilities for implementing consciousness as vital. In an early effort [[Igor Aleksander]]{{sfn|Aleksander|1996}} argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand [[language]].




==Possible explanations for the slow progress of AI research==

{{See also|History of artificial intelligence#The problems}}



Since the launch of AI research in 1956, the growth of this field has slowed down over time and has stalled the aims of creating machines skilled with intelligent action at the human level.{{sfn|Clocksin|2003}} A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power.{{sfn|Clocksin|2003}} In addition, the level of complexity that connects to the process of AI research may also limit the progress of AI research.{{sfn|Clocksin|2003}}




While most AI researchers believe strong AI can be achieved in the future, there are some individuals like [[Hubert Dreyfus]] and [[Roger Penrose]] who deny the possibility of achieving strong AI.{{sfn|Clocksin|2003}} [[John McCarthy (computer scientist)|John McCarthy]] was one of various computer scientists who believe human-level AI will be accomplished, but a date cannot accurately be predicted.{{sfn|McCarthy|2003}}




Conceptual limitations are another possible reason for the slowness in AI research.{{sfn|Clocksin|2003}} AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts".{{sfn|Clocksin|2003}}




Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, such as mathematics, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do, such as walking ([[Moravec's paradox]]).{{sfn|Clocksin|2003}} A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent.{{sfn|Gelernter|2010}} However, the idea of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers.{{sfn|Gelernter|2010}}




The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions that have been promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI.{{sfn|Goertzel|2007}} Although the progress of AI research has brought both improvement and disappointment, most investigators have established optimism about potentially achieving the goal of AI in the 21st century.{{sfn|Goertzel|2007}}




Other possible reasons have been proposed for the lengthy research in the progress of strong AI. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers in emulating the function of the human brain in computer hardware.{{sfn|McCarthy|2007}} Many researchers tend to underestimate any doubt that is involved with future predictions of AI, but without taking those issues seriously, people can then overlook solutions to problematic questions.{{sfn|Goertzel|2007}}




Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment.{{sfn|Clocksin|2003}} When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning.{{sfn|Holte|Choueiry|2003}} Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer with a specific cognitive task.{{sfn|Holte|Choueiry|2003}}




The practice of abstraction, which people tend to redefine when working with a particular context in research, provides researchers with a concentration on just a few concepts.{{sfn|Holte|Choueiry|2003}} The most productive use of abstraction in AI research comes from planning and problem solving.{{sfn|Holte|Choueiry|2003}} Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators.{{sfn|Zucker|2003}}




A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area in which a significant gap remains between computer performance and human performance.{{sfn|McCarthy|2007}} The specific functions programmed into a computer may account for many of the requirements that would allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes for the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.




There have been many AI researchers that debate over the idea whether [[affective computing|machines should be created with emotions]]. There are no emotions in typical models of AI and some researchers say programming emotions into machines allows them to have a mind of their own.{{sfn|Clocksin|2003}} Emotion sums up the experiences of humans because it allows them to remember those experiences.{{sfn|Gelernter|2010}} David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion."{{sfn|Gelernter|2010}} This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future.<ref>{{cite journal|doi=10.1016/j.bushor.2018.08.004|title=Kaplan Andreas and Haelein Michael (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence | volume=62 | year=2019|journal=Business Horizons|pages=15–25 | last1 = Kaplan | first1 = Andreas | last2 = Haenlein | first2 = Michael}}</ref>




==Controversies and dangers==



===Feasibility===

{{expand section|date=February 2016}}

As of March 2020, AGI remains speculative<ref name="spec1">[https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-how-artificial-intelligence-works.pdf europarl.europa.eu: How artificial intelligence works], "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020</ref><ref name="spec2">[https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf itu.int: Beyond Mad?: The Race For Artificial General Intelligence], "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date." February 2, 2018, retrieved March 3, 2020</ref> as no such system has been demonstrated yet. Opinions vary both on ''whether'' and ''when'' artificial general intelligence will arrive. At one extreme, AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. 
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{cite news|last1=Allen|first1=Paul|title=The Singularity Isn't Near|url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/|accessdate=17 September 2014|work=[[MIT Technology Review]]}}</ref> Writing in [[The Guardian]], roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{cite news|last1=Winfield|first1=Alan|title=Artificial intelligence will not turn into a Frankenstein's monster|url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|accessdate=17 September 2014|work=[[The Guardian]]|archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield|archive-date=17 September 2014|url-status=live}}</ref>




AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{cite news|author1=Raffi Khatchadourian|title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?|url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|accessdate=7 February 2016|work=[[The New Yorker (magazine)|The New Yorker]]|date=23 November 2015|archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom|archive-date=28 January 2016|url-status=live}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further current AGI progress considerations can be found below [[#Tests_for_confirming_human-level_AGI|''Tests for confirming human-level AGI'']] and [[#IQ-Tests_AGI|''IQ-tests AGI'']].




===Potential threat to human existence{{anchor|Risk_of_human_extinction}}===

{{Main|Existential risk from artificial general intelligence}}



The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are [[Elon Musk]], [[Bill Gates]], and [[Stephen Hawking]]. The most notable AI researcher to endorse the thesis is [[Stuart J. Russell]]. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned",<ref name="BBC News">{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=[[BBC News]]|accessdate=30 January 2015}}</ref> and Hawking criticized widespread indifference in his 2014 editorial: {{cquote|'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{endash}}we'll leave the lights on?' Probably not{{endash}}but this is more or less what is happening with AI.'<ref name="hawking editorial">{{cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |accessdate=3 December 2014 |publisher=[[The Independent (UK)]]}}</ref>}}




Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "[[AI control problem|control problem]]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?<ref name="superintelligence" >{{cite book|last1=Bostrom|first1=Nick|author-link=Nick Bostrom|title=Superintelligence: Paths, Dangers, Strategies|date=2014|isbn=978-0199678112|edition=First|quote=|title-link=Superintelligence: Paths, Dangers, Strategies}}<!-- preface --></ref><ref name="physica_scripta" >{{cite journal|title=Responses to catastrophic AGI risk: a survey|journal=[[Physica Scripta]]|date=19 December 2014|volume=90|issue=1|author1=Kaj Sotala|author2-link=Roman Yampolskiy|author2=Roman Yampolskiy}}</ref>




The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, [[Jaron Lanier]] argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.<ref name="atlantic-but-what">{{cite magazine |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |title=But What Would the End of Humanity Mean for Me? |magazine=The Atlantic | date = 9 May 2014 | accessdate =12 December 2015}}</ref>




Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist [[Gordon Bell]] argues that the human race will already destroy itself before it reaches the [[technological singularity]]. [[Gordon Moore]], the original proponent of [[Moore's Law]], declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way."<ref>{{cite news |title=Tech Luminaries Address Singularity |url=https://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity |accessdate=8 April 2020 |work=IEEE Spectrum: Technology, Engineering, and Science News |issue=SPECIAL REPORT: THE SINGULARITY |date=1 June 2008 |language=en}}</ref> [[Baidu]] Vice President [[Andrew Ng]] states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."<ref name=shermer>{{cite news|last1=Shermer|first1=Michael|title=Apocalypse AI|url=https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/|accessdate=27 November 2017|work=Scientific American|date=1 March 2017|pages=77|language=en|doi=10.1038/scientificamerican0317-77|bibcode=2017SciAm.316c..77S}}</ref>




==See also==

{{div col|colwidth=30em}}

* [[Automated machine learning]]

* [[Machine ethics]]

* [[Multi-task learning]]

* [[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]

* [[Nick Bostrom]]

* [[Eliezer Yudkowsky]]

* [[Future of Humanity Institute]]

* [[Outline of artificial intelligence]]

* [[Artificial brain]]

* [[Transfer learning]]

* [[Outline of transhumanism]]

* [[General game playing]]

* [[Synthetic intelligence]]

* [[Intelligence amplification]] (IA), the use of information technology in augmenting human intelligence instead of creating an external autonomous "AGI"{{div col end}}



==Notes==

{{reflist|colwidth=30em}}



==References==

{{refbegin|2}}

* "[https://www.computerscience0.xyz/2020/04/ai-artificial-intelligence-stages-of.html Stages of Artificial Intelligence]". Computer Science. 2 April 2020.

* {{cite web |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |title=TechCast Article Series: The Automation of Thought |last1=Halal |first1=William E. |website= |access-date= |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}

* {{Citation | last=Aleksander | first=Igor | author-link=Igor Aleksander | year=1996 | title=Impossible Minds | publisher=World Scientific Publishing Company | isbn=978-1-86094-036-1 | url-access=registration | url=https://archive.org/details/impossiblemindsm0000alek }}

* {{Citation | last = Omohundro|first= Steve| author-link= Steve Omohundro | year = 2008| title= The Nature of Self-Improving Artificial Intelligence| publisher= presented and distributed at the 2007 Singularity Summit, San Francisco, CA.}}

* {{Citation|last=Sandberg |first=Anders|last2=Boström|first2=Nick|title=Whole Brain Emulation: A Roadmap|url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf|accessdate=5 April 2009|series= Technical Report #2008‐3|year=2008| publisher = Future of Humanity Institute, Oxford University}}

* {{Citation|vauthors=Azevedo FA, Carvalho LR, Grinberg LT | title = Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain| journal = The Journal of Comparative Neurology| volume = 513| issue = 5| pages = 532–41|date=April 2009| pmid = 19226510| doi = 10.1002/cne.21974|url=https://www.researchgate.net/publication/24024444 |accessdate=4 September 2013| ref={{harvid|Azevedo et al.|2009}}|display-authors=etal}}

* {{Citation | first=Anthony | last=Berglas | title=Artificial Intelligence will Kill our Grandchildren | year=2008 | url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html }}

* {{Citation | last = Chalmers |first = David | author-link=David Chalmers | year=1996 | title = The Conscious Mind |publisher=Oxford University Press.}}

* {{Citation | last = Clocksin|first=William |date=Aug 2003 |title=Artificial intelligence and the future|journal=[[Philosophical Transactions of the Royal Society A]] |pmid=12952683 |volume=361 |issue=1809 |pages=1721–1748 |doi=10.1098/rsta.2003.1232 |postscript=.|bibcode=2003RSPTA.361.1721C }}

* {{Crevier 1993}}

* {{Citation | first = Brad | last = Darrach | date=20 November 1970 | title=Meet Shakey, the First Electronic Person | magazine=[[Life Magazine]] | pages = 58–68 }}.

* {{Citation | last = Drachman | first = D | title = Do we have brain to spare? | journal = Neurology | volume = 64 | issue = 12 | pages = 2004–5 | year = 2005 | pmid = 15985565 | doi = 10.1212/01.WNL.0000166914.38327.BB | postscript = .}}

* {{Citation | last = Feigenbaum | first = Edward A. | first2=Pamela | last2=McCorduck | author-link=Edward Feigenbaum | author2-link = Pamela McCorduck | title = The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World | publisher = Michael Joseph | year = 1983 | isbn = 978-0-7181-2401-4 }}

* {{Citation | last=Gelernter | first=David | year=2010 | title=Dream-logic, the Internet and Artificial Thought | url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html | accessdate=25 July 2010 }}

* {{Citation | editor1-last=Goertzel | editor1-first=Ben | authorlink=Ben Goertzel | editor2-last=Pennachin | editor2-first=Cassio | year=2006 | title=Artificial General Intelligence | publisher=Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | isbn=978-3-540-23733-4 | archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf | archive-date=20 March 2013 }}

* {{Citation | last=Goertzel | first=Ben | authorlink=Ben Goertzel | last2=Wang | first2=Pei | year=2006 | title=Introduction: Aspects of Artificial General Intelligence | url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1 }}

* {{Citation |last=Goertzel|first=Ben |authorlink=Ben Goertzel |date=Dec 2007 |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil|journal=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |accessdate=1 April 2009 |doi=10.1016/j.artint.2007.10.011 |postscript=.}}

* {{Citation | last = Gubrud | first = Mark | url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ | title = Nanotechnology and International Security | journal= Fifth Foresight Conference on Molecular Nanotechnology |date = November 1997| accessdate= 7 May 2011}}

* {{Citation| last1 = Holte | first1=RC | last2=Choueiry |first2=BY| title = Abstraction and reformulation in artificial intelligence| journal = [[Philosophical Transactions of the Royal Society B]]| volume = 358| issue = 1435| pages = 1197–1204| year = 2003| pmid = 12903653| pmc = 1693218| doi = 10.1098/rstb.2003.1317| postscript = .}}

* {{Citation | last = Howe | first = J. | url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html | title = Artificial Intelligence at Edinburgh University : a Perspective |date = November 1994| accessdate= 30 August 2007}}

* {{Citation | last = Johnson| first= Mark | year = 1987| title =The body in the mind| publisher =Chicago|isbn= 978-0-226-40317-5}}

* {{Citation | last = Kurzweil | first = Ray | author-link = Ray Kurzweil | title = The Singularity is Near | year = 2005 | publisher = Viking Press | title-link = The Singularity is Near }}

* {{Citation | last = Lighthill | first = Professor Sir James | author-link=James Lighthill | year = 1973 | contribution= Artificial Intelligence: A General Survey | title = Artificial Intelligence: a paper symposium| publisher = Science Research Council }}

* {{Citation|last=Luger|first=George|first2=William|last2=Stubblefield|year=2004|title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving|edition=5th|publisher=The Benjamin/Cummings Publishing Company, Inc.|page=[https://archive.org/details/artificialintell0000luge/page/720 720]|isbn=978-0-8053-4780-7|url=https://archive.org/details/artificialintell0000luge/page/720}}

* {{Citation |last=McCarthy|first=John |authorlink=John McCarthy (computer scientist)|date=Oct 2007 |title=From here to human-level AI|journal=Artificial Intelligence |volume=171 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 | postscript=. |issue=18}}

* {{McCorduck 2004}}

* {{Citation | last = Moravec | first = Hans | author-link = Hans Moravec | year = 1976 | url = http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | title = The Role of Raw Power in Intelligence | access-date = 29 September 2007 | archive-url = https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html | archive-date = 3 March 2016 | url-status = dead }}

* {{Citation | last = Moravec | first = Hans | author-link=Hans Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press}}

* {{Citation| last=Nagel | year=1974 | title =What Is it Like to Be a Bat | journal = Philosophical Review | volume=83 | issue=4 | pages=435–50 | url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf| postscript=. | doi=10.2307/2183914 | jstor=2183914 }}

* {{Citation | last = Newell | first = Allen | author-link=Allen Newell | last2 = Simon | first2=H. A. | year = 1963 | contribution=GPS: A Program that Simulates Human Thought| title=Computers and Thought | editor-last= Feigenbaum | editor-first= E.A. |editor2-last= Feldman |editor2-first= J. |publisher= McGraw-Hill | authorlink2 = Herbert A. Simon|location= New York }}

* {{cite journal | doi = 10.1145/360018.360022 | last = Newell | first = Allen | last2 = Simon | first2=H. A. | year = 1976 | title=Computer Science as Empirical Inquiry: Symbols and Search| volume= 19 | pages = 113–126 | journal = Communications of the ACM| author-link=Allen Newell | authorlink2=Herbert A. Simon|issue=3 | ref=harv| doi-access= free }}

* {{Citation| last=Nilsson | first=Nils | author-link=Nils Nilsson (researcher) | year=1998|title=Artificial Intelligence: A New Synthesis|publisher=Morgan Kaufmann Publishers|isbn=978-1-55860-467-4}}

* {{Russell Norvig 2003}}

* {{Citation | last = NRC| author-link=United States National Research Council | chapter=Developments in Artificial Intelligence|chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html|title=Funding a Revolution: Government Support for Computing Research|publisher=National Academy Press|year=1999 }}

* {{Citation | last = Poole | first = David | first2 = Alan | last2 = Mackworth | first3 = Randy | last3 = Goebel | publisher = Oxford University Press | year = 1998 | title = Computational Intelligence: A Logical Approach | url = http://www.cs.ubc.ca/spider/poole/ci.html | author-link=David Poole (researcher) | location = New York }}

* {{Citation|last=Searle |first=John |author-link=John Searle |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |year=1980 |doi=10.1017/S0140525X00005756 }}

* {{Citation | last= Simon | first = H. A. | author-link=Herbert A. Simon | year = 1965 | title=The Shape of Automation for Men and Management | publisher =Harper & Row | location = New York }}

* {{Citation |last=Sutherland|first= J.G. |year =1990| title= Holographic Model of Memory, Learning, and Expression|journal=International Journal of Neural Systems|volume= 1–3|pages= 256–267 |postscript=.}}

* {{Citation|vauthors=Williams RW, Herrup K | title = The control of neuron number| journal = Annual Review of Neuroscience| volume = 11| pages = 423–53| year = 1988| pmid = 3284447| doi = 10.1146/annurev.ne.11.030188.002231| postscript = .}}<!--| accessdate = 20 June 2009-->

* {{Citation | editor1-last=de Vega | editor1-first=Manuel | editor2-last=Glenberg | editor2-first=Arthur | editor3-last=Graesser | editor3-first=Arthur | year=2008 | title=Symbols and Embodiment: Debates on meaning and cognition | publisher=Oxford University Press | isbn=978-0-19-921727-4 }}

* {{Citation|last=Yudkowsky |first=Eliezer |author-link=Eliezer Yudkowsky |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |title=Artificial General Intelligence |journal=Annual Review of Psychology |publisher=Springer |year=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |doi=10.1146/annurev.psych.49.1.585 |pmid=9496632 |isbn=978-3-540-23733-4 |url-status=dead |archiveurl=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archivedate=11 April 2009 |volume=49 |pages=585–612}}

* {{Citation |last=Zucker|first=Jean-Daniel |date=July 2003 |title=A grounded theory of abstraction in artificial intelligence|journal=[[Philosophical Transactions of the Royal Society B]] |pmid=12903672 |volume=358 |issue=1435 |pmc=1693211 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |postscript=.}}

* {{Citation| last=Yudkowsky | first=Eliezer | author-link=Eliezer Yudkowsky| year=2008 |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |journal=Global Catastrophic Risks | bibcode=2008gcr..book..303Y }}.

{{refend}}



==External links==

* [https://cis.temple.edu/~pwang/AGI-Intro.html The AGI portal maintained by Pei Wang]

* [https://web.archive.org/web/20050405071221/http://genesis.csail.mit.edu/index.html The Genesis Group at MIT's CSAIL] – Modern research on the computations that underlay human intelligence

* [http://www.opencog.org/ OpenCog – open source project to develop a human-level AI]

* [http://academia.wikia.com/wiki/A_Method_for_Simulating_the_Process_of_Logical_Human_Thought Simulating logical human thought]

* [http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines What Do We Know about AI Timelines?] – Literature review



{{Existential risk from artificial intelligence}}



{{DEFAULTSORT:Artificial general intelligence}}

[[Category:Hypothetical technology]]


[[Category:Artificial intelligence]]


[[Category:Computational neuroscience]]




[[fr:Intelligence artificielle#Intelligence artificielle forte]]


<noinclude>

<small>This page was moved from [[wikipedia:en:Artificial general intelligence]]. Its edit history can be viewed at [[通用人工智能/edithistory]]</small></noinclude>

[[Category:待整理页面]]