AI-complete problems cannot be solved with current computer technology alone; they also require human computation. This property can be used to test for the presence of humans (for example, CAPTCHAs aim to verify that the user of a service is a human rather than a bot), and it is applied in computer security to repel brute-force attacks.
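As a rough, hypothetical illustration of the CAPTCHA idea described above (not part of the original article), the sketch below shows only the issue-and-verify flow a service might use. The helper names <code>issue_challenge</code> and <code>verify_response</code> are invented for this example; a real CAPTCHA would present the challenge as a distorted image or audio clip rather than plain text, since that AI-hard rendering step is what actually keeps bots out.

<syntaxhighlight lang="python">
import random
import string

# Toy sketch of the challenge/response flow behind a text CAPTCHA.
# A real deployment would render the challenge as a distorted image or
# audio clip; serving it as plain text (as here) would let a bot echo it back.

def issue_challenge(length: int = 6) -> str:
    """Generate a random challenge string (hypothetical helper)."""
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

def verify_response(challenge: str, response: str) -> bool:
    """Grant access only if the user's answer matches the challenge."""
    return response.strip().upper() == challenge

if __name__ == "__main__":
    challenge = issue_challenge()
    print("(imagine a distorted rendering of: " + challenge + ")")
    answer = input("Type the characters you see: ")
    print("verified" if verify_response(challenge, answer) else "rejected")
</syntaxhighlight>

The verification logic itself is trivial; the security of such a scheme rests entirely on the rendering step being an AI-hard perception problem.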
== History ==

=== Classical AI ===
{{Main|History of artificial intelligence}}
 
Modern AI research began in the mid 1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus prediction of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved," although Minsky states that he was misquoted.
 
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". As the 1980s began, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money back into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. They became reluctant to make predictions at all and avoided any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]."
 
=== Narrow AI research ===
{{Main|Artificial intelligence}}
 
In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where it could produce verifiable results and commercial applications, such as [[artificial neural networks]] and statistical [[machine learning]].<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Currently, development in this field is considered an emerging trend, and a mature stage is expected to be reached in more than 10 years.<ref>{{cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |publisher=Gartner Reports |accessdate=7 May 2019 |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |url-status=live }}</ref>
 
Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems. Hans Moravec wrote in 1988: <blockquote>"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."</blockquote>
 
However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating: <blockquote>"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)."</blockquote>
 
=== Modern artificial general intelligence research ===
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web|url=http://goertzel.org/who-coined-the-term-agi/|title=Who coined the term "AGI"? » goertzel.org|language=en-US|access-date=28 December 2018|archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/|archive-date=28 December 2018|url-status=live}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> The research objective is much older, for example [[Doug Lenat]]'s [[Cyc]] project (that began in 1984), and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{harvnb|Goertzel|Wang|2006}}. See also {{harvtxt|Wang|2006}} with an up-to-date summary and lots of links.</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>https://goertzel.org/AGI_Summer_School_2009.htm</ref> by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010<ref>http://fmi-plovdiv.org/index.jsp?id=1054&ln=1</ref> and 2011<ref>http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of [[Conference on Artificial General Intelligence|AGI conferences]]. The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by [[Ray Kurzweil]] in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}} or see [http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html Advanced Human Intelligence] {{Webarchive|url=https://web.archive.org/web/20110630032301/http://crnano.typepad.com/crnblog/2005/08/advanced_human_.html |date=30 June 2011 }} where he defines strong AI as "machine intelligence with the full range of human intelligence."</ref> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}}
 
However, most mainstream AI researchers doubt that progress will be this rapid. Organizations explicitly pursuing AGI include the Swiss AI lab IDSIA, Nnaisense, and Vicarious. In addition, organizations such as the Machine Intelligence Research Institute and OpenAI have been founded to influence the development path of AGI. Finally, projects such as the Human Brain Project have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.
 
In 2017, researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI and Apple's Siri. At best, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult scores about 100 on average. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.
 
In 2019, video game programmer and aerospace engineer John Carmack announced plans to research artificial general intelligence.
 
== Processing power needed to simulate a brain ==
 