{{#seo:
|keywords=人工智能,机器智能,计算机科学
|description=指由人制造出来的机器所表现出来的智能
}}

<!-- 定义 -->

'''<font color=#ff8000>人工智能 Artificial Intelligence,AI</font>''',在[[计算机科学]]中亦称'''<font color=#ff8000>机器智能 Machine Intelligence</font>'''。与人和其他动物表现出的'''<font color=#ff8000>自然智能 Natural Intelligence</font>'''相对,AI指由人制造出来的机器所表现出来的智能。前沿的AI教科书把AI定义为对'''智能体'''的研究:智能体指任何能感知周围环境,并采取行动以最大化成功实现目标机会的机器。<ref name="Definition of AI"/>通俗来说,“AI”就是机器模仿人类与大脑相关的“认知”功能,例如“学习”和“解决问题”。

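上面把智能体定义为“感知周围环境并采取行动以最大化实现目标机会”的机器。下面用一段 Python 草图示意这个“感知-决策-行动”循环;其中的恒温环境、动作集合和效用函数都是为演示而假设的玩具设定,并非任何真实的AI系统:

```python
def perceive(environment):
    """感知:智能体读取环境状态(这里只有温度一个量)。"""
    return environment["temperature"]

def act(temperature, goal=20):
    """决策:在候选动作中选出预期效用最高的一个。
    效用函数假设为“离目标温度越近越好”。"""
    outcomes = {"heat": temperature + 1, "cool": temperature - 1, "idle": temperature}
    utility = lambda t: -abs(t - goal)
    return max(outcomes, key=lambda a: utility(outcomes[a]))

# 感知-决策-行动循环:智能体逐步把环境推向目标状态
environment = {"temperature": 17}
for _ in range(5):
    choice = act(perceive(environment))
    if choice == "heat":
        environment["temperature"] += 1
    elif choice == "cool":
        environment["temperature"] -= 1

print(environment["temperature"])  # 5 步之后温度稳定在目标值 20
```

这里的“目标”以效用函数的形式显式给出;正文稍后会提到,目标也可以通过奖励或适应度函数间接诱导出来。
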
AI的范围也有争议:随着机器的能力越来越强,很多曾被认为需要“智能”的任务被一个个从AI的范畴中除名,这就是所谓的“AI效应”。<ref>{{Harvnb|McCorduck|2004|p=204}}</ref>'''<font color=#ff8000>泰斯勒定理 Tesler's Theorem</font>'''巧妙地把AI描述为“AI就是一切尚未实现的东西”。<ref>{{Cite web|url=http://people.cs.georgetown.edu/~maloof/cosc270.f17/cosc270-intro-handout.pdf|title=Artificial Intelligence: An Introduction, p. 37|last=Maloof|first=Mark|date=|website=georgetown.edu|access-date=}}</ref>例如,光学字符识别往往不再被认为属于AI,<ref>{{cite web|url=https://hackernoon.com/how-ai-is-getting-groundbreaking-changes-in-talent-management-and-hr-tech-d24ty3zzd|title= How AI Is Getting Groundbreaking Changes In Talent Management And HR Tech|publisher=Hackernoon}}</ref>而是已经成为一种常规技术。<ref>{{cite magazine |last=Schank |first=Roger C. |title=Where's the AI |magazine=AI magazine |volume=12 |issue=4 |year=1991|p=38}}</ref>目前被归为AI的现代机器能力包括:理解人类口语,在策略型游戏(例如国际象棋和围棋)中进行高水平对弈<ref name="bbc-alphago"/>,自动驾驶汽车,内容分发网络中的智能路由,以及军事模拟<ref>{{Cite web|url=https://www.ai.mil/docs/Understanding%20AI%20Technology.pdf|title=Department of Defense Joint AI Center - Understanding AI Technology|last=Allen|first=Gregory|date=April 2020|website=AI.mil - The official site of the Department of Defense Joint Artificial Intelligence Center|url-status=live|archive-url=|archive-date=|access-date=25 April 2020}}</ref>。

<!-- 总结历史 -->

1955年,AI作为一门学科被建立起来。此后它经历了几波乐观浪潮,<ref name="Optimism of early AI"/><ref name="AI in the 80s"/>随之而来的是挫折和资金短缺(即“AI寒冬”),<ref name="First AI winter"/><ref name="Second AI winter"/>之后又找到新的出路,取得新的成功和新的投资。<ref name="AI in the 80s"/><ref name="AI in 2000s"/>在其历史的大部分时间里,AI研究被划分为彼此之间缺乏交流的子领域。<ref name="Fragmentation of AI"/>这些子领域的划分依据通常是技术上的考量,比如特定的目标(例如“机器人学”或“机器学习”<ref name="Problems of AI"/>)、特定工具的使用(“逻辑”或人工神经网络),或者深层次的哲学分歧。<ref name="Biological intelligence vs. intelligence in general"/><ref name="Neats vs. scruffies"/><ref name="Symbolic vs. sub-symbolic"/>子领域的划分也与社会因素有关(比如特定机构或特定研究者所做的工作)。<ref name="Fragmentation of AI"/>

<!-- 总结问题、方法、工具 -->

AI研究的传统问题(或者说目标)包括'''<font color=#ff8000>自动推理 Automated Reasoning</font>'''、'''<font color=#ff8000>知识表示 Knowledge Representation</font>'''、'''<font color=#ff8000>自动规划 Automated Planning and Scheduling</font>'''、'''<font color=#ff8000>学习 Learning</font>'''、'''<font color=#ff8000>自然语言处理 Natural Language Processing</font>''',以及感知、移动和熟练操控物体的能力。<ref name="Problems of AI"/>实现通用智能仍然是该领域的长远目标之一。<ref name="General intelligence"/>比较流行的研究方法包括统计方法、计算智能和传统的符号AI。AI使用了大量工具,其中包括搜索和数学优化、人工神经网络,以及基于统计学、概率论和经济学的方法。AI领域涉及计算机科学、信息工程、数学、心理学、语言学、哲学等许多学科。

<!-- 总结小说 / 推测、哲学、历史 -->

这一领域建立在“人类智能可以被精确描述,从而用机器模拟”这一假设之上。<ref>See the [[Dartmouth Workshop|Dartmouth proposal]], under [[#Philosophy|Philosophy]], below.</ref>这引发了关于思维的本质,以及创造具有类人智能的人造物的伦理问题的哲学争论;自古以来,<ref name="McCorduck's thesis"/>神话、小说和哲学就在探讨这类问题。一些人认为,如果AI的发展势头不减,它可能会威胁人类的生存;<ref>{{cite web|url=https://betanews.com/2016/10/21/artificial-intelligence-stephen-hawking/|title=Stephen Hawking believes AI could be mankind's last accomplishment|date=21 October 2016|website=BetaNews|url-status=live|archiveurl=https://web.archive.org/web/20170828183930/https://betanews.com/2016/10/21/artificial-intelligence-stephen-hawking/|archivedate=28 August 2017|df=dmy-all}}</ref><ref name="pmid31835078">{{cite journal |vauthors=Lombardo P, Boehm I, Nairz K |title=RadioComics – Santa Claus and the future of radiology |journal=Eur J Radiol |volume=122 |issue=1 |pages=108771 |year=2020 |pmid=31835078 |doi=10.1016/j.ejrad.2019.108771|doi-access=free }}</ref>另一些人则认为,与以前的技术革命不同,AI将带来大规模失业的风险。<ref name="guardian jobs debate"/>

<!-- 总结应用、最新进展 -->

进入21世纪,随着计算机算力、数据量和理论认识的同步发展,AI技术经历了一次复兴;AI技术已成为科技产业的重要组成部分,帮助解决了计算机科学、软件工程和运筹学中许多具有挑战性的问题。<ref name="AI widely used"/><ref name="AI in 2000s"/>

{{toclimit|3}}

== 历史 History ==

<!-- 这是一部社会史。“方法”和“工具”部分介绍了技术历史。 -->

{{Main|History of artificial intelligence|Timeline of artificial intelligence}}

[[File:Didrachm Phaistos obverse CdM.jpg|thumb|来自克里特岛的银质狄拉克马,上面描绘了古代神话中具有人工智能的自动机塔罗斯 Talos]]

<!-- 20世纪前。也许是为了保持简短。 -->

具有思维能力的人造生物在古代就作为叙事手段出现,<ref name="AI in myth"/>在小说中也很常见,比如玛丽·雪莱的《弗兰肯斯坦》和卡雷尔·恰佩克的《罗素姆万能机器人》(Rossum's Universal Robots,R.U.R.)。<ref name="AI in early science fiction"/>这些角色和他们的命运提出了许多如今在AI伦理学中讨论的问题。<ref name="McCorduck's thesis"/>

<!-- 主要智能前体:逻辑学、计算理论、控制论、信息论、早期神经网络 -->

机械化(或者说“形式化”)推理的研究始于古代的哲学家和数学家。对数理逻辑的研究直接催生了图灵的计算理论:机器只需移动“0”和“1”这样简单的符号,就能模拟任何可以想象的数学推演过程。这一观点被称为'''<font color=#ff8000>邱奇-图灵论题 Church–Turing Thesis</font>'''。<ref name="Formal reasoning"/>图灵提出,如果人类无法区分机器与人类的回应,那么这台机器就可以被认为是“智能的”。<ref>{{Citation | last = Turing | first = Alan | authorlink=Alan Turing | year=1948 | chapter=Machine Intelligence | title = The Essential Turing: The ideas that gave birth to the computer age | editor=Copeland, B. Jack | isbn = 978-0-19-825080-7 | publisher = Oxford University Press | location = Oxford | page = 412 }}</ref>目前公认最早的AI工作,是麦卡洛克和皮茨在1943年正式设计的图灵完备的“人工神经元”。

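上文提到的麦卡洛克-皮茨“人工神经元”可以用几行代码示意:神经元对二进制输入加权求和,总和达到阈值就输出1,否则输出0;选取不同的权重和阈值,同一个模型就能实现 AND、OR、NOT 等逻辑门,进而组合出任意布尔电路。以下是一个 Python 草图,其中的权重和阈值是为演示选取的假设值:

```python
def mp_neuron(inputs, weights, threshold):
    """麦卡洛克-皮茨神经元:加权和达到阈值时激活(输出1),否则输出0。"""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# 同一个神经元模型,换一组权重和阈值就能表示不同的逻辑门
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a: mp_neuron([a], [-1], threshold=0)

print(AND(1, 1), AND(1, 0), OR(1, 0), NOT(1))  # 1 0 1 0
```
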
<!-- “黄金年代” 1956–1974 -->

The field of AI research was born at [[Dartmouth workshop|a workshop]] at [[Dartmouth College]] in 1956,<ref name="Dartmouth conference"/> where the term "Artificial Intelligence" was coined by [[John McCarthy (computer scientist)|John McCarthy]] to distinguish the field from cybernetics and escape the influence of the cyberneticist [[Norbert Wiener]].<ref>{{cite journal |last=McCarthy |first=John |authorlink=John McCarthy (computer scientist) |title=Review of ''The Question of Artificial Intelligence'' |journal=Annals of the History of Computing |volume=10 |number=3 |year=1988 |pages=224–229}}, collected in {{cite book |last=McCarthy |first=John |authorlink=John McCarthy (computer scientist) |title=Defending AI Research: A Collection of Essays and Reviews |publisher=CSLI |year=1996 |chapter=10. Review of ''The Question of Artificial Intelligence''}}, p. 73, "[O]ne of the reasons for inventing the term "artificial intelligence" was to escape association with "cybernetics". Its concentration on analog feedback seemed misguided, and I wished to avoid having either to accept Norbert (not Robert) Wiener as a guru or having to argue with him."</ref> Attendees [[Allen Newell]] ([[Carnegie Mellon University|CMU]]), [[Herbert A. Simon|Herbert Simon]] (CMU), John McCarthy ([[Massachusetts Institute of Technology|MIT]]), [[Marvin Minsky]] (MIT) and [[Arthur Samuel]] ([[IBM]]) became the founders and leaders of AI research.<ref name="Hegemony of the Dartmouth conference attendees"/> They and their students produced programs that the press described as "astonishing":{{sfn|Russell|Norvig|2003|p=18|quote=it was astonishing whenever a computer did anything kind of smartish}} computers were learning [[draughts|checkers]] strategies (c. 1954)<ref>Schaeffer J. (2009) Didn't Samuel Solve That Game?. In: One Jump Ahead. Springer, Boston, MA</ref> (and by 1959 were reportedly playing better than the average human),<ref>{{cite journal|last1=Samuel|first1=A. L.|title=Some Studies in Machine Learning Using the Game of Checkers|journal=IBM Journal of Research and Development|date=July 1959|volume=3|issue=3|pages=210–229|doi=10.1147/rd.33.0210|citeseerx=10.1.1.368.2254}}</ref> solving word problems in algebra, proving [[Theorem|logical theorems]] ([[Logic Theorist]], first run c. 1956) and speaking English.<ref name="Golden years of AI"/> By the middle of the 1960s, research in the U.S. was heavily funded by the [[DARPA|Department of Defense]]<ref name="AI funding in the 60s"/> and laboratories had been established around the world.<ref name="AI in England"/> AI's founders were optimistic about the future: [[Herbert A. Simon|Herbert Simon]] predicted, "machines will be capable, within twenty years, of doing any work a man can do". [[Marvin Minsky]] agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".<ref name="Optimism of early AI"/>

AI研究领域诞生于1956年在达特茅斯学院举办的一个研讨会,约翰·麦卡锡在会上创造了“人工智能 Artificial Intelligence”一词,以区别于控制论并摆脱控制论学者诺伯特·维纳的影响。与会者艾伦·纽厄尔(CMU)、赫伯特·西蒙(CMU)、约翰·麦卡锡(MIT)、马文·明斯基(MIT)和阿瑟·塞缪尔(IBM)成为了AI研究的创始人和领导者。他们和学生们编写的程序被媒体描述为“令人惊叹”:计算机学会了西洋跳棋策略(约1954年,到1959年据报道已下得比普通人好)、解决代数应用题、证明逻辑定理(“逻辑理论家”程序,首次运行于约1956年),以及说英语。到20世纪60年代中期,美国国防部为美国的研究提供了大量资助,世界各地也纷纷建立起实验室。AI的创始人对未来充满乐观:赫伯特·西蒙预言,“二十年内,机器将能完成人能做的任何工作”。马文·明斯基对此表示同意,他写道:“在一代人的时间里……创造‘人工智能’的问题将得到实质性的解决。”

<!-- FIRST AI WINTER 第一次人工智能寒冬 -->

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of [[Sir James Lighthill]]{{sfn|Lighthill|1973}} and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an "[[AI winter]]",<ref name="First AI winter"/> a period when obtaining funding for AI projects was difficult.

他们没有意识到剩下的一些任务的难度。研究进展放缓;1974年,面对詹姆斯·莱特希尔爵士的批评以及美国国会要求资助更有成效项目的持续压力,美国和英国政府都停止了对探索性AI研究的资助。接下来的几年后来被称为“AI寒冬”,在这一时期AI项目很难获得经费。

<!-- BOOM OF THE 1980s, SECOND AI WINTER 20世纪80年代的繁荣与第二次人工智能寒冬 -->

In the early 1980s, AI research was revived by the commercial success of [[expert system]]s,<ref name="Expert systems"/> a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's [[fifth generation computer]] project inspired the U.S and British governments to restore funding for [[academic research]].<ref name="AI in the 80s"/> However, beginning with the collapse of the [[Lisp Machine]] market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.<ref name="Second AI winter"/>

20世纪80年代初,专家系统在商业上的成功使AI研究迎来复兴。专家系统是一种能够模拟人类专家的知识和分析能力的AI程序。到1985年,AI市场规模超过了10亿美元。与此同时,日本的第五代计算机项目促使美国和英国政府恢复对学术研究的资助。然而,随着1987年Lisp机市场的崩溃,AI再次声名扫地,第二次持续时间更长的低谷开始了。

The development of [[metal–oxide–semiconductor]] (MOS) [[very-large-scale integration]] (VLSI), in the form of [[complementary MOS]] (CMOS) [[transistor]] technology, enabled the development of practical [[artificial neural network]] (ANN) technology in the 1980s. A landmark publication in the field was the 1989 book ''Analog VLSI Implementation of Neural Systems'' by Carver A. Mead and Mohammed Ismail.<ref name="Mead">{{cite book|url=http://fennetic.net/irc/Christopher%20R.%20Carroll%20Carver%20Mead%20Mohammed%20Ismail%20Analog%20VLSI%20Implementation%20of%20Neural%20Systems.pdf|title=Analog VLSI Implementation of Neural Systems|date=8 May 1989|publisher=[[Kluwer Academic Publishers]]|isbn=978-1-4613-1639-8|last1=Mead|first1=Carver A.|last2=Ismail|first2=Mohammed|series=The Kluwer International Series in Engineering and Computer Science|volume=80|location=Norwell, MA|doi=10.1007/978-1-4613-1639-8}}</ref>

20世纪80年代,以'''<font color=#ff8000>互补金属氧化物半导体 Complementary MOS,CMOS</font>'''晶体管技术为形式的'''<font color=#ff8000>金属氧化物半导体 Metal–Oxide–Semiconductor,MOS</font>''' '''<font color=#ff8000>超大规模集成电路 Very-Large-Scale Integration,VLSI</font>'''的发展,使实用的'''<font color=#ff8000>人工神经网络 Artificial Neural Network,ANN</font>'''技术成为可能。该领域里程碑式的出版物是1989年出版的《模拟VLSI神经系统的实现》(Analog VLSI Implementation of Neural Systems),作者是卡弗·米德和穆罕默德·伊斯梅尔。

<!-- FORMAL METHODS RISING IN THE 90s 形式方法兴起于90年代 -->

In the late 1990s and early 21st century, AI began to be used for logistics, [[data mining]], [[medical diagnosis]] and other areas.<ref name="AI widely used"/> The success was due to increasing computational power (see [[Moore's law]] and [[transistor count]]), greater emphasis on solving specific problems, new ties between AI and other fields (such as [[statistics]], [[economics]] and [[mathematical optimization|mathematics]]), and a commitment by researchers to mathematical methods and scientific standards.<ref name="Formal methods in AI"/> [[IBM Deep Blue|Deep Blue]] became the first computer chess-playing system to beat a reigning world chess champion, [[Garry Kasparov]], on 11 May 1997.{{sfn|McCorduck|2004|pp=480–483}}

在20世纪90年代末和21世纪初,AI开始被用于物流、数据挖掘、医疗诊断等领域。这些成功归功于计算能力的提升(见摩尔定律和晶体管数量)、对解决特定问题的更大重视、AI与其他领域(如统计学、经济学和数学)之间的新联系,以及研究者对数学方法和科学标准的坚持。1997年5月11日,深蓝成为第一个击败国际象棋卫冕世界冠军加里·卡斯帕罗夫的计算机国际象棋系统。

<!-- DEEP LEARNING, BIG DATA & MACHINE LEARNING IN THE 2010s 2010年代的深度学习、大数据和机器学习 -->

In 2011, a ''[[Jeopardy!]]'' [[quiz show]] exhibition match, [[IBM]]'s [[question answering system]], [[Watson (artificial intelligence software)|Watson]], defeated the two greatest ''Jeopardy!'' champions, [[Brad Rutter]] and [[Ken Jennings]], by a significant margin.{{sfn|Markoff|2011}} [[Moore's law|Faster computers]], algorithmic improvements, and access to [[big data|large amounts of data]] enabled advances in [[machine learning]] and perception; data-hungry [[deep learning]] methods started to dominate accuracy benchmarks [[Deep learning#Deep learning revolution|around 2012]].<ref>{{cite web|title=Ask the AI experts: What's driving today's progress in AI?|url=https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/ask-the-ai-experts-whats-driving-todays-progress-in-ai|website=McKinsey & Company|accessdate=13 April 2018|language=en}}</ref> The [[Kinect]], which provides a 3D body–motion interface for the [[Xbox 360]] and the [[Xbox One]], uses algorithms that emerged from lengthy AI research<ref>{{cite web|url=http://www.i-programmer.info/news/105-artificial-intelligence/2176-kinects-ai-breakthrough-explained.html|title=Kinect's AI breakthrough explained|author=Administrator|work=i-programmer.info|url-status=live|archiveurl=https://web.archive.org/web/20160201031242/http://www.i-programmer.info/news/105-artificial-intelligence/2176-kinects-ai-breakthrough-explained.html|archivedate=1 February 2016|df=dmy-all}}</ref> as do [[intelligent personal assistant]]s in [[smartphone]]s.<ref>{{cite web|url=http://readwrite.com/2013/01/15/virtual-personal-assistants-the-future-of-your-smartphone-infographic|title=Virtual Personal Assistants & The Future Of Your Smartphone [Infographic]|date=15 January 2013|author=Rowinski, Dan|work=ReadWrite|url-status=live|archiveurl=https://web.archive.org/web/20151222083034/http://readwrite.com/2013/01/15/virtual-personal-assistants-the-future-of-your-smartphone-infographic|archivedate=22 December 2015|df=dmy-all}}</ref> In March 2016, [[AlphaGo]] won 4 out of 5 games of [[Go (game)|Go]] in a match with Go champion [[Lee Sedol]], becoming the first [[Computer Go|computer Go-playing system]] to beat a professional Go player without [[Go handicaps|handicaps]].<ref name="bbc-alphago">{{cite web|url=https://deepmind.com/alpha-go.html|title=AlphaGo – Google DeepMind|url-status=live|archiveurl=https://web.archive.org/web/20160310191926/https://www.deepmind.com/alpha-go.html|archivedate=10 March 2016|df=dmy-all}}</ref><ref>{{cite news|title=Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol|url=https://www.bbc.com/news/technology-35785875|accessdate=1 October 2016|work=BBC News|date=12 March 2016|url-status=live|archiveurl=https://web.archive.org/web/20160826103910/http://www.bbc.com/news/technology-35785875|archivedate=26 August 2016|df=dmy-all}}</ref> In the 2017 [[Future of Go Summit]], [[AlphaGo]] won a [[AlphaGo versus Ke Jie|three-game match]] with [[Ke Jie]],<ref>{{cite journal|url=https://www.wired.com/2017/05/win-china-alphagos-designers-explore-new-ai/|title=After Win in China, AlphaGo's Designers Explore New AI|journal=Wired|date=27 May 2017|url-status=live|archiveurl=https://web.archive.org/web/20170602234726/https://www.wired.com/2017/05/win-china-alphagos-designers-explore-new-ai/|archivedate=2 June 2017|df=dmy-all|last1=Metz|first1=Cade}}</ref> who at the time continuously held the world No. 1 ranking for two years.<ref>{{cite web|url=http://www.goratings.org/|title=World's Go Player Ratings|date=May 2017|url-status=live|archiveurl=https://web.archive.org/web/20170401123616/https://www.goratings.org/|archivedate=1 April 2017|df=dmy-all}}</ref><ref>{{cite web|title=柯洁迎19岁生日 雄踞人类世界排名第一已两年|url=http://sports.sina.com.cn/go/2016-08-02/doc-ifxunyya3020238.shtml|language=Chinese|date=May 2017|url-status=live|archiveurl=https://web.archive.org/web/20170811222849/http://sports.sina.com.cn/go/2016-08-02/doc-ifxunyya3020238.shtml|archivedate=11 August 2017|df=dmy-all}}</ref> This marked the completion of a significant milestone in the development of Artificial Intelligence as Go is a relatively complex game, more so than Chess.

2011年,在《危险边缘 Jeopardy!》智力竞赛节目的一场表演赛中,IBM的问答系统沃森以明显优势击败了该节目最出色的两位冠军布拉德·拉特和肯·詹宁斯。更快的计算机、算法的改进以及大量数据的获取,推动了机器学习和感知技术的进步;2012年前后,需要大量数据的'''<font color=#ff8000>深度学习 Deep Learning</font>'''方法开始在各种精度基准测试中占据主导地位。为Xbox 360和Xbox One提供3D人体运动交互的外设Kinect,以及智能手机上的智能个人助理,使用的算法都来自长期的AI研究。2016年3月,AlphaGo在与围棋冠军李世石的对弈中五局四胜,成为第一个在不让子的情况下击败职业围棋选手的计算机围棋系统。在2017年的围棋未来峰会上,AlphaGo在三番棋中战胜了当时已连续两年保持世界排名第一的柯洁。由于围棋的复杂度远高于国际象棋,这标志着AI发展完成了一个重要的里程碑。

According to [[Bloomberg News|Bloomberg's]] Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within [[Google]] increased from a "sporadic usage" in 2012 to more than 2,700 projects. Clark also presents factual data indicating the improvements of AI since 2012 supported by lower error rates in image processing tasks.<ref name=":0">{{cite web |url=https://www.bloomberg.com/news/articles/2015-12-08/why-2015-was-a-breakthrough-year-in-artificial-intelligence |title=Why 2015 Was a Breakthrough Year in Artificial Intelligence |last=Clark |first=Jack |website=Bloomberg News |date=8 December 2015 |access-date=23 November 2016 |quote=After a half-decade of quiet breakthroughs in artificial intelligence, 2015 has been a landmark year. Computers are smarter and learning faster than ever. |url-status=live |archiveurl=https://web.archive.org/web/20161123053855/https://www.bloomberg.com/news/articles/2015-12-08/why-2015-was-a-breakthrough-year-in-artificial-intelligence |archivedate=23 November 2016 |df=dmy-all}}</ref> He attributes this to an increase in affordable [[Artificial neural network|neural networks]], due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.<ref name="AI in 2000s"/> Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people.<ref name=":0"/> In a 2017 survey, one in five companies reported they had "incorporated AI in some offerings or processes".<ref>{{cite web|title=Reshaping Business With Artificial Intelligence|url=https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/|website=MIT Sloan Management Review|accessdate=2 May 2018|language=en}}</ref><ref>{{cite web|last1=Lorica|first1=Ben|title=The state of AI adoption|url=https://www.oreilly.com/ideas/the-state-of-ai-adoption|website=O'Reilly Media|accessdate=2 May 2018|language=en|date=18 December 2017}}</ref> Around 2016, [[China]] greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an "AI superpower".<ref>{{Cite web|url=https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy|title=Understanding China's AI Strategy|last=Allen|first=Gregory|date=February 6, 2019|website=Center for a New American Security|access-date=}}</ref><ref>{{cite news |title=Review {{!}} How two AI superpowers – the U.S. and China – battle for supremacy in the field |url=https://www.washingtonpost.com/outlook/in-the-race-for-supremacy-in-artificial-intelligence-its-us-innovation-vs-chinese-ambition/2018/11/02/013e0030-b08c-11e8-aed9-001309990777_story.html |accessdate=4 November 2018 |work=Washington Post |date=2 November 2018 |language=en}}</ref> However, it has been acknowledged that reports regarding artificial intelligence have tended to be exaggerated.<ref>{{Cite web|url=https://www.theregister.co.uk/2019/02/22/artificial_intelligence_you_know_it_isnt_real_yeah/|title=Artificial Intelligence: You know it isn't real, yeah?|first=Alistair Dabbs 22 Feb 2019|last=at 10:11|website=www.theregister.co.uk}}</ref><ref>{{Cite web|url=https://joshworth.com/stop-calling-in-artificial-intelligence/|title=Stop Calling it Artificial Intelligence}}</ref><ref>{{Cite web|url=https://www.gbgplc.com/inside/ai/|title=AI isn't taking over the world – it doesn't exist yet|website=GBG Global website}}</ref>

彭博社的杰克·克拉克认为,2015年是人工智能具有里程碑意义的一年:谷歌内部使用AI的软件项目数量从2012年的“零星使用”增长到2700多个。克拉克还提供了事实数据,表明自2012年以来AI在图像处理任务中的错误率不断降低。他把这归功于可负担的神经网络的增多,其背后是云计算基础设施的兴起以及研究工具和数据集的增加。其他被引用的例子还包括微软开发的能在语言之间自动翻译的Skype系统,以及脸书能向盲人描述图像的系统。在2017年的一项调查中,五分之一的公司报告称他们“已在某些产品或流程中运用了AI”。2016年前后,中国大幅增加了政府资助;鉴于其庞大的数据供应和快速增长的研究产出,一些观察者认为中国可能正走在成为“AI超级大国”的路上。不过,人们也承认,有关人工智能的报道往往有夸大之嫌。

== 定义 Definitions ==

Computer science defines AI research as the study of "[[intelligent agent]]s": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.<ref name="Definition of AI"/> A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."<ref>{{Cite journal|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|first1=Andreas|last1=Kaplan|first2=Michael|last2=Haenlein|date=1 January 2019|journal=Business Horizons|volume=62|issue=1|pages=15–25|doi=10.1016/j.bushor.2018.08.004}}</ref>

计算机科学将AI研究定义为对“智能体”的研究:智能体是任何能感知周围环境,并采取行动以最大化成功实现目标机会的设备。一个更细致的定义把AI描述为“系统正确解读外部数据、从这些数据中学习,并通过灵活调整运用所学知识实现特定目标和任务的能力”。

== 基本知识 Basics ==

<!-- This section is for explaining, to non-specialists, core concepts that are helpful for understanding AI; feel free to greatly expand or even draw out into its own "Introduction to AI" article, similar to [[Introduction to Quantum Mechanics]] 这部分是为了向非专业人士解释有助于理解人工智能的核心概念,可以随意扩展,甚至拆分成类似“量子力学导论”的“人工智能导论”独立文章 -->

A typical AI analyzes its environment and takes actions that maximize its chance of success.<ref name="Definition of AI"/> An AI's intended [[utility function|utility function (or goal)]] can be simple ("1 if the AI wins a game of [[Go (game)|Go]], 0 otherwise") or complex ("Do mathematically similar actions to the ones succeeded in the past"). Goals can be explicitly defined or induced. If the AI is programmed for "[[reinforcement learning]]", goals can be implicitly induced by rewarding some types of behavior or punishing others.{{efn|The act of doling out rewards can itself be formalized or automated into a "[[reward function]]".}} Alternatively, an evolutionary system can induce goals by using a "[[fitness function]]" to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food.{{sfn|Domingos|2015|loc=Chapter 5}} Some AI systems, such as nearest-neighbor, reason by analogy; these systems are not generally given goals, except to the degree that goals are implicit in their training data.{{sfn|Domingos|2015|loc=Chapter 7}} Such systems can still be benchmarked if the non-goal system is framed as a system whose "goal" is to successfully accomplish its narrow classification task.<ref>Lindenbaum, M., Markovitch, S., & Rusakov, D. (2004). Selective sampling for nearest neighbor classifiers. Machine learning, 54(2), 125–152.</ref>

| |
− | | |
− | 一般的AI会分析其环境,并采取行动以最大限度地提高成功的机会。AI的预期效用函数(或者说目标)可以很简单(比如“如果AI赢了一盘围棋记为1,否则为0”),也可以很复杂(“做一些在数学上与过去的成功案例相似的行为”)。目标可以被明确定义,也可以被诱导出来。如果AI被设定为“'''<font color=#ff8000>强化学习 Reinforcement Learning </font>'''”,那么目标就可以通过奖励某些行为、惩罚其他行为来间接诱导出来。另一种方式是进化系统,它通过“适应度函数”使AI系统产生突变、并优先复制得分高的AI系统来诱导目标,这与动物进化出寻找食物的本能类似。还有一些AI系统(如最近邻算法)则通过类比进行推理;这些系统通常没有给定的目标,除非目标隐含在它们的训练数据中。如果把这类没有目标的系统框定为一个以“成功完成其小范围分类任务”为目标的系统,那么它们仍然可以接受基准测试。
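强化学习中“通过奖励间接诱导目标”的思想,可以用一个表格型Q学习的极简草图来说明。以下代码及其中的走廊环境纯属假设,仅作示意:

```python
import random

# 极简强化学习示意(表格型Q学习,假设性的玩具环境):
# 智能体在位置0..4的线形走廊中移动,进入位置4获得奖励1,其余为0。
# “走向终点”这一目标从未被显式写出,而是由奖励信号间接“诱导”出来。

N_STATES = 5
ACTIONS = [-1, +1]          # 向左 / 向右

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-贪心:大多数时候选当前价值最高的动作,偶尔随机探索
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: q[(s, a)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# 训练后,每个状态下“向右”的价值都高于“向左”
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

注意代码中只定义了奖励,没有定义目标;最终策略“一路向右”正是被奖励诱导出来的。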
| |
− | | |
− | | |
− | AI often revolves around the use of [[algorithms]]. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.{{efn|Terminology varies; see [[algorithm characterizations]].}} A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is the following (optimal for first player) recipe for play at [[tic-tac-toe]]:{{sfn|Domingos|2015|loc=Chapter 1}}
| |
− | | |
| |
− | | |
− | AI离不开算法的使用。算法是机械计算机可以执行的一组明确的指令。复杂的算法通常建立在其他更简单的算法之上。下面的井字游戏下法(对先手玩家而言是最优的)就是一个简单的算法例子:
| |
− | | |
− | | |
− | | |
− | | |
− | # If someone has a "threat" (that is, two in a row), take the remaining square. Otherwise,
| |
− | | |
| |
− | 如果某人形成了“威胁”(也就是说有两个棋子连成一线),就把棋下在这条线剩下的那个格子上。否则,
| |
− | | |
− | # if a move "forks" to create two threats at once, play that move. Otherwise,
| |
− | | |
| |
− | | |
− | 如果某一步棋可以“分叉”、同时形成两个威胁,那就下那一步。否则,
| |
− | | |
− | # take the center square if it is free. Otherwise,
| |
− | | |
| |
− | | |
− | 如果中心的格子还是空的话,就走中间的格子。否则,
| |
− | | |
− | # if your opponent has played in a corner, take the opposite corner. Otherwise,
| |
− | | |
| |
− | | |
− | 如果你的对手已在某个角上落子,那就占据其对角。否则,
| |
− | | |
− | # take an empty corner if one exists. Otherwise,
| |
− | | |
| |
− | | |
− | 如果有空角落,就下在空角落上。否则,
| |
− | | |
− | # take any empty square.
| |
− | | |
| |
− | | |
− | 在任意一个空格上落子。
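上面的六条规则可以直接写成一小段程序。以下是一个极简草图(假设棋盘用长度为9的列表表示,'X'/'O'为棋子,None为空格;行、列、对角线共8条线):

```python
# 井字棋走法规则的示意实现(棋盘表示方式为本示例的假设)

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # 行
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # 列
         (0, 4, 8), (2, 4, 6)]              # 对角线

def winning_move(board, player):
    """若某条线上该玩家已有两子且剩一空格,返回该空格(规则1)。"""
    for a, b, c in LINES:
        line = [board[a], board[b], board[c]]
        if line.count(player) == 2 and line.count(None) == 1:
            return (a, b, c)[line.index(None)]
    return None

def fork_move(board, player):
    """若某步能同时制造两个“威胁”,返回该步(规则2)。"""
    for i in range(9):
        if board[i] is None:
            trial = board[:]
            trial[i] = player
            threats = sum(
                1 for a, b, c in LINES
                if [trial[a], trial[b], trial[c]].count(player) == 2
                and [trial[a], trial[b], trial[c]].count(None) == 1
            )
            if threats >= 2:
                return i
    return None

def choose_move(board, me, opponent):
    # 规则1:“某人”有威胁——己方能赢先赢,否则挡住对方
    for p in (me, opponent):
        m = winning_move(board, p)
        if m is not None:
            return m
    m = fork_move(board, me)        # 规则2:制造“分叉”
    if m is not None:
        return m
    if board[4] is None:            # 规则3:占中心
        return 4
    for corner, opposite in ((0, 8), (2, 6), (6, 2), (8, 0)):
        if board[corner] == opponent and board[opposite] is None:
            return opposite         # 规则4:对手占角则占对角
    for i in (0, 2, 6, 8):          # 规则5:占空角
        if board[i] is None:
            return i
    return board.index(None)        # 规则6:任意空格

print(choose_move([None] * 9, 'X', 'O'))  # 4:空盘时先占中心
```

可以看到,这组规则本身就是一个“机械计算机可以执行的明确指令集”,即一个算法。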
| |
− | | |
− | | |
− | | |
− | | |
− | | |
− | Many AI algorithms are capable of learning from data; they can enhance themselves by learning new [[heuristic (computer science)|heuristics]] (strategies, or "rules of thumb", that have worked well in the past), or can themselves write other algorithms. Some of the "learners" described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, (given infinite data, time, and memory) learn to approximate any [[function (mathematics)|function]], including which combination of mathematical functions would best describe the world{{citation needed|date=June 2019}}. These learners could therefore, derive all possible knowledge, by considering every possible hypothesis and matching them against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of "[[combinatorial explosion]]", where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering broad range of possibilities that are unlikely to be beneficial.<ref name="Intractability"/>{{sfn|Domingos|2015|loc=Chapter 2, Chapter 3}} For example, when viewing a map and looking for the shortest driving route from [[Denver]] to [[New York City|New York]] in the East, one can in most cases skip looking at any path through [[San Francisco]] or other areas far to the West; thus, an AI wielding a pathfinding algorithm like [[A* search algorithm|A*]] can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.<ref>{{cite journal
| |
− | | |
| |
− | | |
− | 许多AI算法能够从数据中学习;它们可以通过学习新的启发式方法(过去行之有效的策略或“经验法则”),或者自己编写其他算法来增强自身。下文描述的一些“学习器”,包括'''<font color=#ff8000>贝叶斯网络 Bayesian Networks</font>'''、'''<font color=#ff8000>决策树 Decision Trees</font>'''和'''<font color=#ff8000>最近邻 Nearest-neighbor</font>''',在理论上(给定无限的数据、时间和内存)可以学习逼近任何函数,包括哪种数学函数的组合可以最好地描述世界。因此,这些学习器可以通过考虑每一种可能的假设并将它们与数据进行匹配,从而推导出所有可能的知识。但实际上几乎不可能考虑所有的可能性,因为会出现“'''<font color=#ff8000>组合爆炸 Combinatorial Explosion</font>'''”现象,即解决一个问题所需的时间呈指数级增长。很多AI研究都在探索如何识别并避免考虑那些不太可能有益的大量可能性。例如,在看地图寻找从丹佛到东边纽约的最短行驶路线时,大多数情况下可以跳过任何经过旧金山或其他远在西边地区的路径;因此,使用像A*这样的寻路算法的AI可以避免逐一考虑每条可能路线所带来的组合爆炸。
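上文提到的A*寻路可以用一个极简草图来说明(网格地图为假设的玩具数据):启发式函数(这里取曼哈顿距离)使搜索偏向目标方向,从而避免对所有路径的穷举。

```python
import heapq

# A*寻路的极简示意:grid为0/1二维列表,1表示障碍;
# 返回从start到goal的最短步数,不可达时返回-1。

def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    # 启发式:曼哈顿距离,永不高估真实剩余代价(可采纳性)
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]      # (f = g + h, g, 结点)
    best = {start: 0}
    while open_heap:
        f, g, (r, c) = heapq.heappop(open_heap)
        if (r, c) == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float('inf')):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return -1

grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
]
print(a_star(grid, (0, 0), (2, 0)))  # 绕过障碍,最短路径长度为8
```

优先队列按 f = g + h 排序,使得明显偏离目标方向的结点被推迟甚至从不展开,这正是避免组合爆炸的关键。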
| |
− | | |
− |
| |
− | | |
− | The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): "If an otherwise healthy adult has a fever, then they may have [[influenza]]". A second, more general, approach is [[Bayesian inference]]: "If the current patient has a fever, adjust the probability they have influenza in such-and-such way". The third major approach, extremely popular in routine business AI applications, are analogizers such as [[Support vector machine|SVM]] and [[K-nearest neighbor algorithm|nearest-neighbor]]: "After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza". A fourth approach is harder to intuitively understand, but is inspired by how the brain's machinery works: the [[artificial neural network]] approach uses artificial "[[neurons]]" that can learn by comparing itself to the desired output and altering the strengths of the connections between its internal neurons to "reinforce" connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms; the best approach is often different depending on the problem.{{sfn|Domingos|2015|loc=Chapter 2, Chapter 4, Chapter 6}}<!-- The influenza example is expanded from Domingos chapter 6; feel free to put in a better example if you have one --><ref>{{cite news|title=Can neural network computers learn from experience, and if so, could they ever become what we would call 'smart'?|url=https://www.scientificamerican.com/article/can-neural-network-comput/|accessdate=24 March 2018|work=Scientific American|date=2018|language=en}}</ref>
| |
− | | |
| |
| | | |
− | AI最早的(也是最容易理解的)研究方法是'''<font color=#ff8000>符号主义 Symbolism </font>'''(比如形式逻辑):“如果一个原本健康的成年人发烧了,那么他可能患上了流感。”第二种更普遍的方法是贝叶斯推断:“如果这个患者发烧了,就按某种方式调整其患流感的概率”。第三种主要方法是类比,在日常商业AI应用中非常常见,例如'''<font color=#ff8000>支持向量机 Support Vector Machine, SVM</font>'''和'''<font color=#ff8000>最近邻 Nearest-neighbor </font>''':“在查阅已知既往病人的记录后,其中体温、症状、年龄和其他因素与当前病人大致匹配的病人里,有X%被证实患有流感”。第四种方法相对更难直观理解,它受到大脑工作机制的启发:人工神经网络方法使用人工“神经元”,这种神经元可以通过将自身输出与期望输出进行比较、并改变内部神经元之间的连接强度来“强化”看起来有用的连接,从而进行学习。这四种主要方法可以彼此交叉,也可以与进化系统交叉;例如,神经网络可以学习做推论、概括和类比。一些系统隐式或显式地综合使用其中多种方法,以及许多其他AI和非AI算法;最佳方法往往因问题而异。
| |
| | | |
+ | AI应用包括高级网络搜索引擎、推荐系统(YouTube、亚马逊和Netflix使用)、理解人类语音(例如Siri或Alexa)、自动驾驶汽车(例如特斯拉)以及在策略游戏(例如国际象棋和围棋)中进行最高水平的竞赛。<ref name="bbc-alphago">"AlphaGo – Google DeepMind". Archived from the original on 10 March 2016.</ref>随着机器的能力越来越强,被认为需要“智能”的任务往往会从AI的定义中被移除,这种现象被称为AI效应。<ref>McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1, p. 204</ref>例如,光学字符识别就经常被排除在AI之外,<ref>Ashok83 (10 September 2019). "How AI Is Getting Groundbreaking Changes In Talent Management And HR Tech". Hackernoon. Archived from the original on 11 September 2019. Retrieved 14 February 2020.</ref>而被视为一项常规技术。<ref>Schank, Roger C. (1991). "Where's the AI". AI magazine. Vol. 12 no. 4, p. 38</ref>
| | | |
| | | |
+ | 1956年,AI作为一门学科被建立起来。此后它经历过几段乐观时期,<ref name="Optimism of early AI">Optimism of early AI: * Herbert Simon quote: Simon 1965, p. 96 quoted in Crevier 1993, p. 109. * Marvin Minsky quote: Minsky 1967, p. 2 quoted in Crevier 1993, p. 109.</ref><ref name="AI in the 80s">Boom of the 1980s: rise of expert systems, Fifth Generation Project, Alvey, MCC, SCI: * McCorduck 2004, pp. 426–441 * Crevier 1993, pp. 161–162,197–203, 211, 240 * Russell & Norvig 2003, p. 24 * NRC 1999, pp. 210–211 * Newquist 1994, pp. 235–248</ref>也经历过随之而来的失望与资金匮乏的困境(即“AI寒冬”<ref name="First AI winter">First AI Winter, Mansfield Amendment, Lighthill report * Crevier 1993, pp. 115–117 * Russell & Norvig 2003, p. 22 * NRC 1999, pp. 212–213 * Howe 1994 * Newquist 1994, pp. 189–201</ref><ref name="Second AI winter">Second AI winter: * McCorduck 2004, pp. 430–435 * Crevier 1993, pp. 209–210 * NRC 1999, pp. 214–216 * Newquist 1994, pp. 301–318</ref>),之后又找到新的出路,取得新的成果并获得新的投资<ref name="AI in the 80s"/><ref name="AI in 2000s">AI becomes hugely successful in the early 21st century * Clark 2015b</ref>。AI研究在其历史上尝试并放弃了许多不同的方法,包括模拟大脑、模拟人类问题求解、形式逻辑、大型知识数据库和模仿动物行为。在21世纪的头几十年里,高度数学化的统计机器学习主导了该领域,这一技术已被证明非常成功,帮助解决了工业界和学术界的许多具有挑战性的问题。<ref name="AI widely used">AI applications widely used behind the scenes: * Russell & Norvig 2003, p. 28 * Kurzweil 2005, p. 265 * NRC 1999, pp. 216–222 * Newquist 1994, pp. 189–201</ref>
| | | |
− | Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as "since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well". They can be nuanced, such as "X% of [[Family (biology)|families]] have geographically separate species with color variants, so there is a Y% chance that undiscovered [[black swan theory|black swans]] exist". Learners also work on the basis of "[[Occam's razor#Probability theory and statistics|Occam's razor]]": The simplest theory that explains the data is the likeliest. Therefore, according to Occam's razor principle, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better.
| |
| | | |
+ | AI研究的各个子领域都围绕着特定的目标和特定工具的使用。AI研究的传统目标包括推理、知识表示、规划、学习、自然语言处理、感知以及移动和操纵物体的能力。<ref name="Problems of AI">This list of intelligent traits is based on the topics covered by the major AI textbooks, including: * Russell & Norvig 2003 * Luger & Stubblefield 2004 * Poole, Mackworth & Goebel 1998 * Nilsson 1998</ref>通用智能(解决任意问题的能力)是该领域的长期目标之一。<ref name="General intelligence"> General intelligence (strong AI) is discussed in popular introductions to AI: * Kurzweil 1999 and Kurzweil 2005</ref>为了解决这些问题,AI研究人员使用了各种版本的搜索和数学优化、形式逻辑、人工神经网络,以及基于统计学、概率论和经济学的方法。AI还借鉴了计算机科学、心理学、语言学、哲学和许多其他领域。
| | | |
− | 学习算法的工作基于这样一个前提:过去行之有效的策略、算法和推论,在未来很可能继续有效。这些推论可以是显而易见的,例如“在过去的10000天里,太阳每天早上都升起,明天早上它也很可能升起”;也可以是微妙的,例如“X%的科拥有地理上相互隔离且存在颜色变异的物种,因此有Y%的概率存在尚未被发现的黑天鹅”。学习器还基于“'''<font color=#ff8000>奥卡姆剃刀 Occam's Razor</font>'''”原则工作:能够解释数据的最简单理论是最有可能的。因此,根据奥卡姆剃刀原则,学习器必须被设计成更倾向于简单的理论而不是复杂的理论,除非复杂的理论被证明实质上更好。
| |
| | | |
+ | 这一领域建立在人类智能“可以被精确描述,从而使机器能够模拟”这一观点之上。这一观点引发了关于思维本质以及制造具有类人智能的AI之伦理的哲学争论;自古以来<ref name="Newquist 1994">Newquist, HP (1994). The Brain Makers: Genius, Ego, And Greed In The Quest For Machines That Think. New York: Macmillan/SAMS. ISBN 978-0-672-30412-5.</ref>就有神话、小说以及哲学对此类问题展开过探讨。一些人认为,AI如果不受控制地发展下去,可能会威胁人类的生存;<ref>Spadafora, Anthony (21 October 2016). "Stephen Hawking believes AI could be mankind's last accomplishment". BetaNews. Archived from the original on 28 August 2017.</ref><ref>Lombardo, P; Boehm, I; Nairz, K (2020). "RadioComics – Santa Claus and the future of radiology". Eur J Radiol. 122 (1): 108771. doi:10.1016/j.ejrad.2019.108771. PMID 31835078.</ref>另一些人则认为,AI与以前的技术革命不同,它将带来大规模失业的风险。<ref name="guardian jobs debate">Ford, Martin; Colvin, Geoff (6 September 2015). "Will robots create more jobs than they destroy?". The Guardian. Archived from the original on 16 June 2018. Retrieved 13 January 2018.</ref>
| | | |
| | | |
− | [[File:Overfitted Data.png|thumb|The blue line could be an example of [[overfitting]] a linear function due to random noise. 蓝线是由于随机噪声而过拟合线性函数的一个例子。]]
| + | == 历史 == |
| |
| | | |
| + | [[File:Didrachm Phaistos obverse CdM.jpg|thumb|来自克里特岛的描绘塔罗斯的银色狄拉克马,一种古代神话中的具有人工智能的自动机]] |
| | | |
| | | |
− | Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as [[overfitting]]. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is.{{sfn|Domingos|2015|loc=Chapter 6, Chapter 7}} Besides classic overfitting, learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.{{sfn|Domingos|2015|p=286}} A real-world example is that, unlike humans, current image classifiers don't determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an "adversarial" image that the system misclassifies.{{efn|Adversarial vulnerabilities can also result in nonlinear systems, or from non-pattern perturbations. Some systems are so brittle that changing a single adversarial pixel predictably induces misclassification.}}<ref>{{cite news|title=Single pixel change fools AI programs|url=https://www.bbc.com/news/technology-41845878|accessdate=12 March 2018|work=BBC News|date=3 November 2017}}</ref><ref>{{cite news|title=AI Has a Hallucination Problem That's Proving Tough to Fix|url=https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix/|accessdate=12 March 2018|work=WIRED|date=2018}}</ref><ref>{{cite arxiv|eprint=1412.6572|last1=Goodfellow|first1=Ian J.|last2=Shlens|first2=Jonathon|last3=Szegedy|first3=Christian|title=Explaining and Harnessing Adversarial Examples|class=stat.ML|year=2014}}</ref>
| + | 具有思维能力的人造生物在古代以故事讲述者的方式出现,<ref name="AI in myth"/> 在小说中也很常见。比如 Mary Shelley的《弗兰肯斯坦 Frankenstein 》和 Karel Čapek的《罗素姆的万能机器人 Rossum's Universal Robots,R.U.R.》<ref name="AI in early science fiction">AI in early science fiction. * McCorduck 2004, pp. 17–25</ref> ——小说中的角色和他们的命运向人们提出了许多现在在人工智能伦理学中讨论的同样的问题。 |
| | | |
| |
| | | |
− | 采用一个糟糕的、过于复杂、被刻意调整以拟合过去所有训练数据的理论,被称为'''<font color=#ff8000>过拟合 Overfitting</font>'''。许多系统试图通过根据理论与数据的拟合程度给予奖励、同时根据理论的复杂程度给予惩罚来减少过拟合。除了典型的过拟合外,学习器也可能因为“学错了东西”而令人失望。一个简单的例子是:如果一个图像分类器训练时只用过棕色马和黑猫的图片,它就很可能得出所有棕色色块都是马的结论。一个现实世界的例子是:与人类不同,目前的图像分类器并不判断图像各部分之间的空间关系;相反,它们学习的是人类察觉不到、但与某类真实物体的图像线性相关的抽象像素图案。将这种图案轻微地叠加在一张正常图像上,就会产生一张被系统错误分类的“对抗”图像。
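过拟合可以用一个玩具实验来直观感受(数据与两种模型均为本示例的假设):“记住全部训练点”的复杂模型在训练集上误差为零,却在新数据上不如简单的最小二乘直线,这也呼应了上文的奥卡姆剃刀原则。

```python
import random

# 过拟合的极简示意:训练数据来自 y = 2x 加噪声。
# 复杂模型(原样记住训练集)训练误差为0,但泛化不如简单的线性拟合。

random.seed(0)
train = [(x, 2 * x + random.gauss(0, 1)) for x in range(10)]
test = [(x + 0.5, 2 * (x + 0.5)) for x in range(10)]   # 新数据,落在真实直线上

# 简单模型:最小二乘直线 y = a*x + b(闭式解)
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n
line = lambda x: a * x + b

# 复杂模型:记住训练集,预测时返回最近训练点的y(训练误差恒为0)
def memorize(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

mse = lambda model, data: sum((model(x) - y) ** 2 for x, y in data) / len(data)
print(mse(line, test) < mse(memorize, test))  # True:简单模型在新数据上更好
```

“记住一切”的模型把噪声也当成了规律,这正是过拟合;直线模型因为更简单,反而抓住了数据背后的真实结构。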
+ | 机械化或者说“形式化”推理的研究始于古代的哲学家和数学家。这些数理逻辑研究直接催生了图灵的计算理论,即机器可以通过移动“0”和“1”这样简单的符号,模拟任何可以通过数学推演想到的过程,这一观点被称为'''邱奇-图灵论题 Church–Turing Thesis'''<ref name="Formal reasoning">Formal reasoning: * Berlinski, David (2000). The Advent of the Algorithm. Harcourt Books. ISBN 978-0-15-601391-8. OCLC 46890682. Archived from the original on 26 July 2020. Retrieved 22 August 2020.</ref>。图灵提出,如果人类无法区分机器和人类的回应,那么机器可以被认为是“智能的”。<ref>{{Citation | last = Turing | first = Alan | authorlink=Alan Turing | year=1948 | chapter=Machine Intelligence | title = The Essential Turing: The ideas that gave birth to the computer age | editor=Copeland, B. Jack | isbn = 978-0-19-825080-7 | publisher = Oxford University Press | location = Oxford | page = 412 }}</ref>目前人们公认的最早的AI工作,是McCulloch和Pitts在1943年对图灵完备的“人工神经元”的形式化设计。<ref>Russell, Stuart J.; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 978-0-13-604259-4.</ref>
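McCulloch和Pitts式“人工神经元”的思想可以用几行代码来示意(阈值与权重为示例取值):输入与输出均为0/1,当加权和达到阈值时神经元被激活;用它可以实现逻辑门,进而组合出更复杂的可计算函数。

```python
# McCulloch-Pitts阈值神经元的极简示意:
# 输出为1当且仅当输入加权和达到阈值。

def mp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# 用单个神经元实现基本逻辑门(权重/阈值为示例取值)
AND = lambda a, b: mp_neuron((a, b), (1, 1), 2)
OR  = lambda a, b: mp_neuron((a, b), (1, 1), 1)
NOT = lambda a:    mp_neuron((a,),   (-1,),  0)

# 单个神经元无法实现异或,但多个神经元组合即可
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))
print([XOR(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

逻辑门可以层层组合,这正是“移动简单符号即可模拟复杂推演过程”这一论题的一个具体而微的体现。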
| | | |
| | | |
+ | AI研究于1956年起源于在达特茅斯学院举办的一个研讨会。<ref>Dartmouth conference: * McCorduck 2004, pp. 111–136 * Crevier 1993, pp. 47–49, who writes "the conference is generally recognized as the official birthdate of the new science." * Russell & Norvig 2003, p. 17, who call the conference "the birth of artificial intelligence." * NRC 1999, pp. 200–201</ref>“人工智能”这一术语由约翰·麦卡锡创造,目的是将该领域与控制论区分开来,并摆脱控制论学家诺伯特·维纳的影响。<ref>McCarthy, John (1988). "Review of The Question of Artificial Intelligence". Annals of the History of Computing. 10 (3): 224–229., collected in McCarthy, John (1996). "10. Review of The Question of Artificial Intelligence". Defending AI Research: A Collection of Essays and Reviews. CSLI., p. 73</ref>与会者Allen Newell(CMU)、[[赫伯特·西蒙 Herbert Simon]](CMU)、[[约翰·麦卡锡 John McCarthy]](MIT)、[[马文•明斯基 Marvin Minsky]](MIT)和[[阿瑟·塞缪尔Arthur Samuel]](IBM)成为了AI研究的创始人和领导者。他们和他们的学生编写的程序被新闻界形容为“叹为观止”:计算机正在学习西洋跳棋策略(据报道,到1959年已达到人类平均水平之上)、解决代数应用题、证明逻辑定理以及说英语。到20世纪60年代中期,美国国防高级研究计划局斥重资支持研究,世界各地纷纷建立实验室。AI的创始人对未来充满乐观:Herbert Simon预测,“二十年内,机器将能完成人能做到的一切工作”。Marvin Minsky对此表示同意,他写道:“在一代人的时间里……创造‘AI’的问题将得到实质性的解决。”
| | | |
| | | |
− | [[File:Détection de personne - exemple 3.jpg|thumb|A self-driving car system may use a neural network to determine which parts of the picture seem to match previous training images of pedestrians, and then model those areas as slow-moving but somewhat unpredictable rectangular prisms that must be avoided.自动驾驶汽车系统可以使用神经网络来确定图像的哪些部分与先前训练数据里的行人图像匹配,然后将这些区域建模为移动缓慢但有点不可预测,且必须避让的矩形棱柱。<ref>{{cite book|last1=Matti|first1=D.|last2=Ekenel|first2=H. K.|last3=Thiran|first3=J. P.|title=Combining LiDAR space clustering and convolutional neural networks for pedestrian detection|journal=2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)|date=2017|pages=1–6|doi=10.1109/AVSS.2017.8078512|isbn=978-1-5386-2939-0|arxiv=1710.06160}}</ref><ref>{{cite book|last1=Ferguson|first1=Sarah|last2=Luders|first2=Brandon|last3=Grande|first3=Robert C.|last4=How|first4=Jonathan P.|title=Real-Time Predictive Modeling and Robust Avoidance of Pedestrians with Uncertain, Changing Intentions|journal=Algorithmic Foundations of Robotics XI|volume=107|date=2015|pages=161–177|doi=10.1007/978-3-319-16595-0_10|publisher=Springer, Cham|language=en|series=Springer Tracts in Advanced Robotics|isbn=978-3-319-16594-3|arxiv=1405.5581}}</ref>]]
+ | 他们没有预见到其中一些任务的难度。研究进展放缓,1974年,由于Sir James Lighthill的批评以及美国国会要求把资金分拨给更有成效的项目,美国和英国政府都削减了探索性AI研究的经费。接下来的几年被称为“AI寒冬”,<ref name="First AI winter"/>在这一时期AI研究很难获得经费。
| | | |
− | A self-driving car system may use a neural network to determine which parts of the picture seem to match previous training images of pedestrians, and then model those areas as slow-moving but somewhat unpredictable rectangular prisms that must be avoided.
| |
| | | |
| + | 在20世纪80年代初期,由于专家系统在商业上取得的成功,AI研究迎来了复兴,<ref name="Expert systems"> Expert systems: * ACM 1998, I.2.1 * Russell & Norvig 2003, pp. 22–24 * Luger & Stubblefield 2004, pp. 227–331 * Nilsson 1998, chpt. 17.4 * McCorduck 2004, pp. 327–335, 434–435 * Crevier 1993, pp. 145–62, 197–203 * Newquist 1994, pp. 155–183</ref>专家系统是一种能够模拟人类专家的知识和分析能力的程序。到1985年,AI市场超过了10亿美元。与此同时,日本的第五代计算机项目促使了美国和英国政府恢复对学术研究的资助。<ref name="AI in the 80s"/> 然而,随着1987年 Lisp 机器市场的崩溃,AI再一次遭遇低谷,并陷入了第二次持续更长时间的停滞。<ref name="Second AI winter"/> |
| | | |
| | | |
− | Compared with humans, existing AI lacks several features of human "[[commonsense reasoning]]"; most notably, humans have powerful mechanisms for reasoning about "[[naïve physics]]" such as space, time, and physical interactions. This enables even young children to easily make inferences like "If I roll this pen off a table, it will fall on the floor". Humans also have a powerful mechanism of "[[folk psychology]]" that helps them to interpret natural-language sentences such as "The city councilmen refused the demonstrators a permit because they advocated violence". (A generic AI has difficulty discerning whether the ones alleged to be advocating violence are the councilmen or the demonstrators.)<ref>{{cite news|title=Cultivating Common Sense {{!}} DiscoverMagazine.com|url=http://discovermagazine.com/2017/april-2017/cultivating-common-sense|accessdate=24 March 2018|work=Discover Magazine|date=2017}}</ref><ref>{{cite journal|last1=Davis|first1=Ernest|last2=Marcus|first2=Gary|title=Commonsense reasoning and commonsense knowledge in artificial intelligence|journal=Communications of the ACM|date=24 August 2015|volume=58|issue=9|pages=92–103|doi=10.1145/2701413|url=https://cacm.acm.org/magazines/2015/9/191169-commonsense-reasoning-and-commonsense-knowledge-in-artificial-intelligence/}}</ref><ref>{{cite journal|last1=Winograd|first1=Terry|title=Understanding natural language|journal=Cognitive Psychology|date=January 1972|volume=3|issue=1|pages=1–191|doi=10.1016/0010-0285(72)90002-3}}</ref> This lack of "common knowledge" means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. 
For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.<ref>{{cite news|title=Don't worry: Autonomous cars aren't coming tomorrow (or next year)|url=http://autoweek.com/article/technology/fully-autonomous-vehicles-are-more-decade-down-road|accessdate=24 March 2018|work=Autoweek|date=2016}}</ref><ref>{{cite news|last1=Knight|first1=Will|title=Boston may be famous for bad drivers, but it's the testing ground for a smarter self-driving car|url=https://www.technologyreview.com/s/608871/finally-a-driverless-car-with-some-common-sense/|accessdate=27 March 2018|work=MIT Technology Review|date=2017|language=en}}</ref><ref>{{cite journal|last1=Prakken|first1=Henry|title=On the problem of making autonomous vehicles conform to traffic law|journal=Artificial Intelligence and Law|date=31 August 2017|volume=25|issue=3|pages=341–363|doi=10.1007/s10506-017-9210-0|doi-access=free}}</ref>
+ | 20世纪90年代末和21世纪初,AI通过为特定问题(例如物流、数据挖掘或医疗诊断)寻找具体的解决方案,逐渐恢复了声誉。到2000年,AI解决方案已被广泛应用于幕后。<ref name="AI widely used" />更集中的研究方向使研究人员能够产出可验证的结果、发展更多的数学方法,并与统计学、经济学和数学等其他领域展开合作。<ref name="Formal methods in AI" >Formal methods are now preferred ("Victory of the neats"): * Russell & Norvig 2003, pp. 25–26 * McCorduck 2004, pp. 486–487</ref>
| | | |
| |
| | | |
− | 与人类相比,现有的AI缺少人类“常识推理”的几个特征;最值得注意的是,人类拥有强大的关于空间、时间和物理相互作用等“朴素物理”的推理机制。这使得即使是小孩子也能轻易做出推论,比如“如果我把这支笔从桌上滚下去,它就会掉到地板上”。人类还有一种强大的“常识心理学”机制,帮助他们理解诸如“市议员拒绝给示威者颁发许可,因为他们鼓吹暴力”这样的自然语言句子(一般的AI难以辨别被指鼓吹暴力的究竟是议员还是示威者)。这种“常识”的缺乏意味着AI经常会犯与人类不同的错误,而且这些错误看起来难以理解。例如,现有的自动驾驶汽车不能像人类那样准确推理行人的位置和意图,而必须使用非人类的推理模式来避免事故。
+ | 更快的计算机、算法改进以及对大量数据的访问,使机器学习和感知取得进步;对数据需求巨大的深度学习方法在2012年左右开始在各项准确率基准上占据主导地位。<ref>{{cite web|title=Ask the AI experts: What's driving today's progress in AI?|url=https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/ask-the-ai-experts-whats-driving-todays-progress-in-ai|website=McKinsey & Company|access-date=13 April 2018|archive-date=13 April 2018 |archive-url=https://web.archive.org/web/20180413190018/https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/ask-the-ai-experts-whats-driving-todays-progress-in-ai|url-status=live}}</ref>据彭博社的Jack Clark称,2015年是AI具有里程碑意义的一年:谷歌内部使用AI的软件项目数量从2012年的“零星使用”增加到2700多个。Clark还提供了数据,表明自2012年以来,图像处理任务中错误率的下降印证了AI的进步。<ref name="AI 2015" />他将此归因于云计算基础设施的普及以及研究工具和数据集的增多,使神经网络的成本变得可以负担。<ref name="AI in 2000s" />在2017年的一项调查中,五分之一的公司表示他们“在某些产品或流程中加入了AI”。<ref>{{cite web|title=Reshaping Business With Artificial Intelligence|url=https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/|website=MIT Sloan Management Review |access-date=2 May 2018|archive-date=19 May 2018|archive-url=https://web.archive.org/web/20180519171905/https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/|url-status=live}}</ref><ref>{{cite web |last1=Lorica|first1=Ben|title=The state of AI adoption|url=https://www.oreilly.com/ideas/the-state-of-ai-adoption|website=O'Reilly Media|access-date=2 May 2018|date=18 December 2017|archive-date=2 May 2018|archive-url=https://web.archive.org/web/20180502140700/https://www.oreilly.com/ideas/the-state-of-ai-adoption|url-status=live}}</ref>
| | | |
| | | |