Changes

15,486 bytes removed, 23:33, 24 July 2021 (Sat)
Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.<ref>{{cite web|title=Ask the AI experts: What's driving today's progress in AI?|url=https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/ask-the-ai-experts-whats-driving-todays-progress-in-ai|website=McKinsey & Company|access-date=13 April 2018|archive-date=13 April 2018 |archive-url=https://web.archive.org/web/20180413190018/https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/ask-the-ai-experts-whats-driving-todays-progress-in-ai|url-status=live}}</ref> According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence: the number of software projects that use AI within Google increased from "sporadic usage" in 2012 to more than 2,700 projects. Clark also presented data showing that the improvements in AI since 2012 have been supported by lower error rates in image-processing tasks.<ref name="AI 2015">Clark, Jack (8 December 2015b). "Why 2015 Was a Breakthrough Year in Artificial Intelligence". Bloomberg.com. Archived from the original on 23 November 2016.</ref> He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.<ref name="AI in 2000s" /> In a 2017 survey, one in five companies reported they had "incorporated AI in some offerings or processes".<ref>{{cite web|title=Reshaping Business With Artificial Intelligence|url=https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/|website=MIT Sloan Management Review |access-date=2 May 2018|archive-date=19 May 2018|archive-url=https://web.archive.org/web/20180519171905/https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/|url-status=live}}</ref><ref>{{cite web |last1=Lorica|first1=Ben|title=The state of AI adoption|url=https://www.oreilly.com/ideas/the-state-of-ai-adoption|website=O'Reilly Media|access-date=2 May 2018|date=18 December 2017|archive-date=2 May 2018|archive-url=https://web.archive.org/web/20180502140700/https://www.oreilly.com/ideas/the-state-of-ai-adoption|url-status=live}}</ref>
 
== Goals ==
The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.<ref name="Problems of AI" />
=== Reasoning, problem solving ===
Early researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions.<ref name="Reasoning">Problem solving, puzzle solving, game playing and deduction: * Russell & Norvig 2003, chpt. 3–9, * Poole, Mackworth & Goebel 1998, chpt. 2,3,7,9, * Luger & Stubblefield 2004, chpt. 3,4,6,8, * Nilsson 1998, chpt. 7–12</ref> By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.<ref name="Uncertain reasoning">Uncertain reasoning: * Russell & Norvig 2003, pp. 452–644, * Poole, Mackworth & Goebel 1998, pp. 345–395, * Luger & Stubblefield 2004, pp. 333–381, * Nilsson 1998, chpt. 19</ref>
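The step-by-step deduction described above can be illustrated with a minimal forward-chaining sketch (a toy encoding invented here for illustration, not any particular historical system): a rule fires whenever all of its premises are known, adding its conclusion as a new fact, until nothing changes.

```python
# Minimal forward chaining: repeatedly apply modus ponens (if all premises
# of a rule are known facts, add its conclusion) until a fixed point.
def forward_chain(facts, rules):
    """facts: set of atom strings; rules: list of (premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rule base for the classic syllogism.
rules = [
    ({"human(socrates)"}, "mortal(socrates)"),
    ({"mortal(socrates)"}, "dies(socrates)"),
]
print(forward_chain({"human(socrates)"}, rules))
```

This fixed-point loop is the simplest instance of the idea; production systems of the era added efficient matching (e.g., the Rete algorithm) on top of the same principle.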
<!--- This is linked to in the introduction to the article and to the "AI research" section -->
These algorithms proved to be insufficient for solving large reasoning problems because they experienced a "combinatorial explosion": they became exponentially slower as the problems grew larger.<ref name="Intractability">Intractability and efficiency and the combinatorial explosion: * Russell & Norvig 2003, pp. 9, 21–22</ref> In fact, even humans rarely use the step-by-step deduction that early AI research was able to model; they solve most of their problems using fast, intuitive judgments.<ref name="Psychological evidence of sub-symbolic reasoning">Wason, P. C.; Shapiro, D. (1966). "Reasoning". In Foss, B. M. (ed.). New horizons in psychology. Harmondsworth: Penguin. Archived from the original on 26 July 2020. Retrieved 18 November 2019.</ref>
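The blow-up is easy to quantify: an exhaustive search tree with branching factor b and depth d contains 1 + b + b² + … + b^d nodes, so the work grows exponentially with problem size. A small illustrative calculation:

```python
# Combinatorial explosion: node count of a full search tree with
# branching factor b and depth d grows exponentially in d.
def brute_force_nodes(branching: int, depth: int) -> int:
    # 1 + b + b^2 + ... + b^d
    return sum(branching ** level for level in range(depth + 1))

for depth in (5, 10, 20):
    print(depth, brute_force_nodes(10, depth))
```

With only 10 choices per step, doubling the depth from 10 to 20 multiplies the node count by roughly ten billion, which is why uninformed search does not scale.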
=== Knowledge representation ===
{{Main|Knowledge representation|Commonsense knowledge}}
[[File:GFO taxonomy tree.png|right|thumb|An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.]]
[[Knowledge representation]]<ref name="Knowledge representation">Knowledge representation: * ACM 1998, I.2.4, * Russell & Norvig 2003, pp. 320–363, * Poole, Mackworth & Goebel 1998, pp. 23–46, 69–81, 169–196, 235–277, 281–298, 319–345, * Luger & Stubblefield 2004, pp. 227–243, * Nilsson 1998, chpt. 18</ref> and [[knowledge engineering]]<ref name="Knowledge engineering">Knowledge engineering: * Russell & Norvig 2003, pp. 260–266, * Poole, Mackworth & Goebel 1998, pp. 199–233, * Nilsson 1998, chpt. ≈17.1–17.4</ref> are central to classical AI research. Some "expert systems" attempt to gather together explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the "commonsense knowledge" known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;<ref name="Representing categories and relations">Representing categories and relations: Semantic networks, description logics, inheritance (including frames and scripts): * Russell & Norvig 2003, pp. 349–354, * Poole, Mackworth & Goebel 1998, pp. 174–177, * Luger & Stubblefield 2004, pp. 248–258, * Nilsson 1998, chpt. 18.3</ref> situations, events, states and time;<ref name="Representing time">Representing events and time: Situation calculus, event calculus, fluent calculus (including solving the frame problem): * Russell & Norvig 2003, pp. 328–341, * Poole, Mackworth & Goebel 1998, pp. 281–298, * Nilsson 1998, chpt. 18.2</ref> causes and effects;<ref name="Representing causation">Causal calculus: * Poole, Mackworth & Goebel 1998, pp. 335–337</ref> knowledge about knowledge (what we know about what other people know);<ref name="Representing knowledge about knowledge"/> and many other, less well researched domains. A representation of "what exists" is an [[ontology (computer science)|ontology]]: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The [[semantics]] of these are captured as [[description logic]] concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the [[Web Ontology Language]].<ref>{{cite book |last=Sikos |first=Leslie F. |date=June 2017 |title=Description Logics in Multimedia Reasoning |url=https://www.springer.com/us/book/9783319540658 |location=Cham |publisher=Springer |isbn=978-3-319-54066-5 |doi=10.1007/978-3-319-54066-5 |url-status=live |archiveurl=https://web.archive.org/web/20170829120912/https://www.springer.com/us/book/9783319540658 |archivedate=29 August 2017 |df=dmy-all }}</ref> The most general ontologies are called [[upper ontology|upper ontologies]], which attempt to provide a foundation for all other knowledge<ref name="Ontology"/> by acting as mediators between [[Domain ontology|domain ontologies]] that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,<ref>{{cite journal|last1=Smoliar|first1=Stephen W.|last2=Zhang|first2=HongJiang|title=Content based video indexing and retrieval|journal=IEEE Multimedia|date=1994|volume=1|issue=2|pages=62–72|doi=10.1109/93.311653}}</ref> scene interpretation,<ref>{{cite journal|last1=Neumann|first1=Bernd|last2=Möller|first2=Ralf|title=On scene interpretation with description logics|journal=Image and Vision Computing|date=January 2008|volume=26|issue=1|pages=82–101|doi=10.1016/j.imavis.2007.08.013}}</ref> clinical decision support,<ref>{{cite journal|last1=Kuperman|first1=G. J.|last2=Reichley|first2=R. M.|last3=Bailey|first3=T. C.|title=Using Commercial Knowledge Bases for Clinical Decision Support: Opportunities, Hurdles, and Recommendations|journal=Journal of the American Medical Informatics Association|date=1 July 2006|volume=13|issue=4|pages=369–371|doi=10.1197/jamia.M2055|pmid=16622160|pmc=1513681}}</ref> knowledge discovery (mining "interesting" and actionable inferences from large databases),<ref>{{cite journal|last1=MCGARRY|first1=KEN|title=A survey of interestingness measures for knowledge discovery|journal=The Knowledge Engineering Review|date=1 December 2005|volume=20|issue=1|page=39|doi=10.1017/S0269888905000408|url=https://semanticscholar.org/paper/baf7f99e1b567868a6dc6238cc5906881242da01}}</ref> and other areas.<ref>{{cite conference |url= |title=Automatic annotation and semantic retrieval of video sequences using multimedia ontologies |last1=Bertini |first1=M |last2=Del Bimbo |first2=A |last3=Torniai |first3=C |date=2006 |publisher=ACM |book-title=MM '06 Proceedings of the 14th ACM international conference on Multimedia |pages=679–682 |location=Santa Barbara |conference=14th ACM international conference on Multimedia}}</ref>
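As a rough illustration of the idea (a stdlib-only toy invented here, not OWL or a description-logic reasoner), an ontology can be sketched as a class hierarchy plus instances, with inference over the subclass relation:

```python
# Toy ontology: classes with a subclass hierarchy and individuals, plus
# inference that an instance of a subclass is an instance of every
# ancestor class (transitive closure over "subclass of").
class Ontology:
    def __init__(self):
        self.parents = {}     # class name -> set of direct superclasses
        self.instances = {}   # individual -> set of asserted classes

    def subclass(self, child, parent):
        self.parents.setdefault(child, set()).add(parent)

    def assert_instance(self, individual, cls):
        self.instances.setdefault(individual, set()).add(cls)

    def classes_of(self, individual):
        result, frontier = set(), list(self.instances.get(individual, ()))
        while frontier:
            cls = frontier.pop()
            if cls not in result:
                result.add(cls)
                frontier.extend(self.parents.get(cls, ()))
        return result

kb = Ontology()
kb.subclass("Dog", "Mammal")
kb.subclass("Mammal", "Animal")
kb.assert_instance("rex", "Dog")
print(kb.classes_of("rex"))  # rex is inferred to be a Mammal and an Animal too
```

Real ontology languages add roles (relations), cardinality restrictions, and consistency checking on top of this kind of taxonomic inference.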
The cognitive capabilities of current architectures are very limited: they use only a simplified version of what intelligence is really capable of. The human mind, for instance, has developed ways of reasoning that go far beyond measurement and logical explanation of the events of everyday life, so a problem that the mind handles with ease may be challenging to solve computationally. This gives rise to two classes of models: structuralist and functionalist. Structural models aim to loosely mimic the basic intelligence operations of the mind, such as reasoning and logic. Functional models relate observed data to their computed counterparts.<ref>{{Cite journal|last=Lieto|first=Antonio|date=May 2018|title=The knowledge level in cognitive architectures: Current limitations and possible developments|journal=Cognitive Systems Research|volume=48|pages=39–55|doi=10.1016/j.cogsys.2017.05.001|hdl=2318/1665207|hdl-access=free}}</ref>
Among the most difficult problems in knowledge representation are:
'''Default reasoning and the qualification problem''': Many of the things people know take the form of "working assumptions". For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies; none of these things are true about all birds. John McCarthy<ref>McCarthy, John; Hayes, P. J. (1969). "Some philosophical problems from the standpoint of artificial intelligence". Machine Intelligence. 4: 463–502. CiteSeerX 10.1.1.85.5082.</ref><ref>Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2.</ref> identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.<ref>Poole, David; Mackworth, Alan; Goebel, Randy (1998). Computational Intelligence: A Logical Approach. New York: Oxford University Press. ISBN 978-0-19-510270-3. Archived from the original on 26 July 2020. Retrieved 22 August 2020.</ref><ref>Luger, George; Stubblefield, William (2004). Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th ed.). Benjamin/Cummings. ISBN 978-0-8053-4780-7.</ref><ref>Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann. ISBN 978-1-55860-467-4. Archived from the original on 26 July 2020. Retrieved 18 November 2019.</ref>
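A minimal sketch of default reasoning with explicit exception lists (the animal names and rules below are invented for illustration) shows why the qualification problem bites: every default invites an open-ended list of exceptions.

```python
# Default rule "birds typically fly", overridden by explicit exceptions.
# The qualification problem: EXCEPTIONS can never be complete (injured
# birds, caged birds, dead birds, ...), so the rule is only a default.
DEFAULTS = {"bird": {"flies": True}}
EXCEPTIONS = {"penguin": {"flies": False}, "ostrich": {"flies": False}}

def flies(kind: str, is_bird: bool = True) -> bool:
    if kind in EXCEPTIONS:                    # specific knowledge wins
        return EXCEPTIONS[kind]["flies"]
    return DEFAULTS["bird"]["flies"] if is_bird else False

print(flies("sparrow"))   # default applies
print(flies("penguin"))   # exception overrides the default
```

Formal treatments of the same idea include circumscription, default logic, and other non-monotonic logics, where adding a fact can retract a previously drawn conclusion.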
'''Breadth of commonsense knowledge''': The number of atomic facts that the average person knows is very large. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering; they must be built, by hand, one complicated concept at a time.<ref>Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2</ref><ref>Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3</ref><ref>Moravec, Hans (1988). Mind Children. Harvard University Press. ISBN 978-0-674-57616-2.</ref><ref>Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann. ISBN 978-1-55860-467-4. </ref>
'''Subsymbolic form of some commonsense knowledge''': Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed",<ref>Dreyfus, Hubert; Dreyfus, Stuart (1986). Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Oxford, UK: Blackwell. ISBN 978-0-02-908060-3.</ref> and an art critic can take one look at a statue and realize that it is a fake.<ref>Gladwell, Malcolm (2005). Blink. New York: Little, Brown and Co. ISBN 978-0-316-17232-5.</ref> These are non-conscious and sub-symbolic intuitions or tendencies in the human brain. Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge.<ref>Dreyfus, Hubert; Dreyfus, Stuart (1986). Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Oxford, UK: Blackwell. ISBN 978-0-02-908060-3.</ref><ref>Hawkins, Jeff; Blakeslee, Sandra (2005). On Intelligence. New York, NY: Owl Books. ISBN 978-0-8050-7853-4.</ref> As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.
 
=== Planning ===
{{Main|Automated planning and scheduling}}
[[File:Hierarchical-control-system.svg|thumb|A [[hierarchical control system]] is a form of [[control system]] in which a set of devices and governing software is arranged in a hierarchy.]]
Intelligent agents must be able to set goals and achieve them.<ref name="Planning"/> They need a way to visualize the future—a representation of the state of the world and be able to make predictions about how their actions will change it—and be able to make choices that maximize the [[utility]] (or "value") of available choices.<ref name="Information value theory"/>
 
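Choosing the action that maximizes expected utility can be sketched in a few lines (the action names and payoffs below are invented for illustration):

```python
# Expected-utility maximization: each action yields a probability
# distribution over outcomes; the agent picks the best expectation.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    return max(actions, key=lambda name: expected_utility(actions[name]))

actions = {
    "safe":  [(1.0, 40.0)],                 # certain payoff of 40
    "risky": [(0.5, 100.0), (0.5, 0.0)],    # expected payoff of 50
}
print(best_action(actions))  # the risky gamble has the higher expectation
```

A risk-sensitive agent would instead maximize the expectation of a concave utility of the payoff, which can reverse this choice.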
In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.<ref name="Classical planning"/> However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions but also evaluate its predictions and adapt based on its assessment.<ref name="Non-deterministic planning"/>
 
In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.<ref name="Classical planning"/> However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions but also evaluate its predictions and adapt based on its assessment.<ref name="Non-deterministic planning"/>
  −
In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions. However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions but also evaluate its predictions and adapt based on its assessment.
      
在经典的规划问题中,智能体可以假设它是世界上唯一运行着的系统,以便于智能体确定其做出某个行为带来的后果。然而,如果智能体不是唯一的参与者,这就要求智能体能够在不确定的情况下进行推理。这需要一智能体不仅能够评估其环境和作出预测,而且还评估其预测和根据其预测做出调整。
 
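A minimal sketch of the idea in the paragraphs above: when outcomes are uncertain, an agent can still rank its choices by expected utility over a stochastic outcome model. All action names, probabilities, and utility values below are invented for illustration, not taken from the article:

```python
# Toy expected-utility decision: each action maps to (probability, utility)
# outcome pairs; the agent picks the action with the highest expectation.
def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    return max(actions, key=lambda name: expected_utility(actions[name]))

# Hypothetical options for an agent that cannot be certain of consequences.
actions = {
    "wait":  [(1.0, 0.0)],                 # certain, neutral outcome
    "move":  [(0.8, 1.0), (0.2, -1.0)],    # likely good, small risk -> EU 0.6
    "risky": [(0.5, 2.0), (0.5, -3.0)],    # high variance           -> EU -0.5
}
print(best_action(actions))  # → move
```

The agent prefers "move" even though "risky" has the largest possible payoff, because the expectation weighs each outcome by its probability.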
[[Multi-agent planning]] uses the [[cooperation]] and competition of many agents to achieve a given goal. [[Emergent behavior]] such as this is used by [[evolutionary algorithms]] and [[swarm intelligence]].<ref name="Multi-agent planning"/>
 
多智能体规划利用多个智能体之间的协作和竞争来达到目标。进化算法和群体智能会用到类似这样的涌现行为。
 
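Swarm intelligence of the kind mentioned above can be sketched with a tiny particle swarm optimizer: many simple agents share their best finds, and the minimum of f(x) = x² emerges from the group. The population size, constants, and iteration count are textbook-style assumptions, not from the article:

```python
import random

# Particle swarm optimization minimizing f(x) = x**2: each particle mixes
# inertia, a pull toward its own best position, and a pull toward the
# swarm's best position.
random.seed(1)

def f(x):
    return x * x

n = 8
pos = [random.uniform(-10.0, 10.0) for _ in range(n)]   # particle positions
vel = [0.0] * n
pbest = pos[:]                                          # personal bests
gbest = min(pbest, key=f)                               # swarm's best

for _ in range(200):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        # inertia + pull toward personal best + pull toward global best
        vel[i] = 0.6 * vel[i] + r1 * (pbest[i] - pos[i]) \
                              + r2 * (gbest - pos[i])
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=f)

print(abs(gbest))  # distance from the true optimum at 0; should be tiny
```

No particle "knows" where the minimum is; the answer emerges from cooperation (sharing `gbest`) and competition (each particle trying to beat its own record).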
===学习===

<!-- This is linked to in the introduction -->
{{Main|Machine learning}}
      
Machine learning (ML), a fundamental concept of AI research since the field's inception,<ref>[[Alan Turing]] discussed the centrality of learning as early as 1950, in his classic paper "[[Computing Machinery and Intelligence]]".{{Harv|Turing|1950}} In 1956, at the original Dartmouth AI summer conference, [[Ray Solomonoff]] wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine".{{Harv|Solomonoff|1956}}</ref> is the study of computer algorithms that improve automatically through experience.<ref>This is a form of [[Tom M. Mitchell|Tom Mitchell]]'s widely quoted definition of machine learning: "A computer program is set to learn from an experience ''E'' with respect to some task ''T'' and some performance measure ''P'' if its performance on ''T'' as measured by ''P'' improves with experience ''E''."</ref><ref name="Machine learning"/>
 
      
机器学习'''<font color=#ff8000> Machine Learning,ML</font>'''是自AI诞生以来就有的一个基本概念,它研究如何通过经验自动改进计算机算法。
 
[[Unsupervised learning]] is the ability to find patterns in a stream of input, without requiring a human to label the inputs first. [[Supervised learning]] includes both [[statistical classification|classification]] and numerical [[Regression analysis|regression]], which requires a human to label the input data first. Classification is used to determine what category something belongs in, and occurs after a program sees a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.<ref name="Machine learning"/> Both classifiers and regression learners can be viewed as "function approximators" trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, "spam" or "not spam". [[Computational learning theory]] can assess learners by [[computational complexity]], by [[sample complexity]] (how much data is required), or by other notions of [[optimization theory|optimization]].<ref>{{cite journal|last1=Jordan|first1=M. I.|last2=Mitchell|first2=T. M.|title=Machine learning: Trends, perspectives, and prospects|journal=Science|date=16 July 2015|volume=349|issue=6245|pages=255–260|doi=10.1126/science.aaa8415|pmid=26185243|bibcode=2015Sci...349..255J}}</ref> In [[reinforcement learning]]<ref name="Reinforcement learning"/> the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
 
      
'''<font color=#ff8000>无监督学习 Unsupervised Learning</font>'''可以从数据流中发现某种模式,而不需要人类提前标注数据。'''<font color=#ff8000>有监督学习 Supervised Learning</font>'''包括分类和回归,这需要人类首先标注数据。分类被用于确定某物属于哪个类别,这需要把大量来自多个类别的例子投入程序;回归用来产生一个描述输入和输出之间的关系的函数,并预测输出会如何随着输入的变化而变化。在强化学习中,智能体会因为好的回应而受到奖励,因为坏的回应而受到惩罚;智能体通过一系列的奖励和惩罚形成了一个在其问题空间中可施行的策略。
 
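The reward-and-punishment loop described above can be sketched with a minimal tabular Q-learning agent on a toy five-cell corridor. The environment, reward values, and hyperparameters are all invented for illustration:

```python
import random

# Tabular Q-learning: the agent starts at cell 0, a reward of +1 waits at
# cell 4, and every other step costs -0.01 (a small "punishment"). From this
# stream of rewards it forms a strategy for its problem space.
random.seed(0)
N, ACTIONS = 5, (-1, +1)              # corridor length; move left / right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for _ in range(200):                  # episodes
    s = 0
    while s != N - 1:
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)        # walls at both ends
        r = 1.0 if s2 == N - 1 else -0.01     # reward or punishment
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The rewards and punishments have shaped a strategy: always move right.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)
```

The learned table `Q` is a crude function approximator mapping (state, action) to expected value, in the same sense that classifiers and regressors approximate unknown functions.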
===自然语言处理===

<!-- This is linked to in the introduction -->
{{Main|Natural language processing}}
[[File:ParseTree.svg|thumb|一棵[[解析树]]根据某种形式语法表示一个句子的句法结构。]]
    
[[Natural language processing]]<ref name="Natural language processing"/> (NLP) gives machines the ability to read and [[natural language understanding|understand]] human language. A sufficiently powerful natural language processing system would enable [[natural-language user interface]]s and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include [[information retrieval]], [[text mining]], [[question answering]]<ref>[https://www.academia.edu/2475776/Versatile_question_answering_systems_seeing_in_synthesis "Versatile question answering systems: seeing in synthesis"] {{webarchive|url=https://web.archive.org/web/20160201125047/http://www.academia.edu/2475776/Versatile_question_answering_systems_seeing_in_synthesis |date=1 February 2016 }}, Mittal et al., IJIIDS, 5(2), 119–142, 2011
</ref> and [[machine translation]].<ref name="Applications of natural language processing"/> Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. "Keyword spotting" strategies for search are popular and scalable but dumb; a search query for "dog" might only match documents with the literal word "dog" and miss a document with the word "poodle". "Lexical affinity" strategies use the occurrence of words such as "accident" to [[sentiment analysis|assess the sentiment]] of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well. Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications. Beyond semantic NLP, the ultimate goal of "narrative" NLP is to embody a full understanding of commonsense reasoning.<ref>{{cite journal|last1=Cambria|first1=Erik|last2=White|first2=Bebo|title=Jumping NLP Curves: A Review of Natural Language Processing Research [Review Article]|journal=IEEE Computational Intelligence Magazine|date=May 2014|volume=9|issue=2|pages=48–57|doi=10.1109/MCI.2014.2307227}}</ref>

自然语言处理(NLP)赋予机器阅读和理解人类语言的能力。一个足够强大的自然语言处理系统可以提供自然语言用户界面,并能直接从如新闻专线文本的人类文字中获取知识。一些简单的自然语言处理的应用包括信息检索、文本挖掘、问答和机器翻译。目前许多方法使用词的共现频率来构建文本的句法表示。用“关键词定位”策略进行搜索很常见且可扩展,但很粗糙;搜索“狗”可能只匹配与含“狗”字的文档,而漏掉与“犬”匹配的文档。“词汇相关性”策略使用如“事故”这样的词出现的频次,评估文本想表达的情感。现代统计NLP方法可以结合所有这些策略以及其他策略,在以页或段落为单位的处理上获得还能让人接受的准确度,但仍然缺乏对单独的句子进行分类所需的语义理解。除了编码语义常识常见的困难外,现有的语义NLP有时可扩展性太差,无法应用到在商业中。而“叙述性”NLP除了达到语义NLP的功能之外,还想最终能做到充分理解常识推理。
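The contrast between "keyword spotting" and "lexical affinity" drawn above can be sketched on toy data; the documents, lexicon, and word weights below are invented for illustration:

```python
# Keyword spotting vs. lexical affinity on three tiny "documents".
docs = [
    "the dog chased the ball",
    "my poodle loves long walks",
    "an accident on the highway caused delays",
]

# Keyword spotting: literal match only -- finds "dog" but misses the poodle.
dog_hits = [d for d in docs if "dog" in d.split()]

# Lexical affinity: score a document by the affective weight of its words,
# using a small hand-built lexicon (negative words pull the score down).
lexicon = {"loves": +2, "accident": -2, "delays": -1}

def sentiment(doc):
    return sum(lexicon.get(w, 0) for w in doc.split())

print(dog_hits)            # → ['the dog chased the ball']
print(sentiment(docs[2]))  # → -3
```

Neither strategy understands the sentences; both operate on surface word statistics, which is exactly why they scale well yet misfire on synonyms and context.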
===知觉===

<!-- This is linked to in the introduction -->
{{Main|Machine perception|Computer vision|Speech recognition}}

[[File:Ääretuvastuse näide.png|thumb|特征检测(如图:边缘检测)帮助人工智能从原始数据中组合出信息丰富的抽象结构]]

[[Machine perception]]<ref name="Machine perception"/> is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active [[lidar]], sonar, radar, and [[tactile sensor]]s) to deduce aspects of the world. Applications include [[speech recognition]],<ref name="Speech recognition"/> [[facial recognition system|facial recognition]], and [[object recognition]].<ref name="Object recognition"/> [[Computer vision]] is the ability to analyze visual input. Such input is usually ambiguous; a giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its "object model" to assess that fifty-meter pedestrians do not exist.<ref name="Computer vision"/>
机器感知是利用传感器(如可见光或红外线摄像头、麦克风、无线信号、激光雷达、声纳、雷达和触觉传感器)的输入来推断世界的不同角度的能力。应用包括语音识别、面部识别和物体识别。计算机视觉是分析可视化输入的能力。这种输入通常是模糊的; 一个在远处50米高的巨人可能会与近处正常大小的行人占据完全相同的像素,这就要求AI判断不同解释的相对可能性和合理性,例如使用”物体模型”来判断50米高的巨人其实是不存在的。
 
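Edge detection of the kind shown in the figure can be sketched as a thresholded intensity gradient over a toy image; the pixel values and threshold are invented for illustration:

```python
# Toy edge detector: mark pixels where the horizontal brightness jump to the
# right neighbour exceeds a threshold, turning raw pixels into an abstract
# structure (the boundary between two regions).
image = [
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
]

def edges(img, threshold=5):
    return [[int(abs(row[x + 1] - row[x]) > threshold)
             for x in range(len(row) - 1)]
            for row in img]

print(edges(image)[0])  # → [0, 0, 1, 0, 0]
```

The single column of 1s is the vertical edge between the dark and bright halves; real detectors (e.g. Sobel or Canny) refine the same idea with 2-D kernels and smoothing.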
=== 运动和操作 ===

<!-- This is linked to in the introduction -->
{{Main|Robotics}}
AI is heavily used in [[robotics]].<ref name="Robotics"/> Advanced [[robotic arm]]s and other [[industrial robot]]s, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.<ref name="Configuration space"/> A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and [[robotic mapping|map]] its environment; however, dynamic environments, such as (in [[endoscopy]]) the interior of a patient's breathing body, pose a greater challenge. [[Motion planning]] is the process of breaking down a movement task into "primitives" such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.{{sfn|Tecuci|2012}}<ref name="Robotic mapping"/><ref>{{cite journal|last1=Cadena|first1=Cesar|last2=Carlone|first2=Luca|last3=Carrillo|first3=Henry|last4=Latif|first4=Yasir|last5=Scaramuzza|first5=Davide|last6=Neira|first6=Jose|last7=Reid|first7=Ian|last8=Leonard|first8=John J.|title=Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age|journal=IEEE Transactions on Robotics|date=December 2016|volume=32|issue=6|pages=1309–1332|doi=10.1109/TRO.2016.2624754|arxiv=1606.05830|bibcode=2016arXiv160605830C}}</ref> [[Moravec's paradox]] generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after [[Hans Moravec]], who stated in 1988 that "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".<ref>{{Cite book| first = Hans | last = Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press | author-link =Hans Moravec| 
p=15}}</ref><ref>{{cite news|last1=Chan|first1=Szu Ping|title=This is what will happen when robots take over the world|url=https://www.telegraph.co.uk/finance/economics/11994694/Heres-what-will-happen-when-robots-take-over-the-world.html|accessdate=23 April 2018|date=15 November 2015}}</ref> This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of [[natural selection]] for millions of years.<ref name="The Economist">{{cite news|title=IKEA furniture and the limits of AI|url=https://www.economist.com/news/leaders/21740735-humans-have-had-good-run-most-recent-breakthrough-robotics-it-clear|accessdate=24 April 2018|work=The Economist|date=2018|language=en}}</ref>
 
      
AI在机器人学中应用广泛。在现代工厂中广泛使用的高级机械臂和其他工业机器人,可以从经验中学习如何在存在摩擦和齿轮滑移的情况下有效地移动。当处在一个静态且可见的小环境中时,现代移动机器人可以很容易地确定自己的位置并绘制环境地图;然而如果是动态环境,比如用内窥镜检查病人呼吸中的身体内部,难度就会更高。运动规划是将一个运动任务分解为如单个关节运动这样的“基本任务”的过程。这种运动通常包括顺应运动,即在运动过程中需要与物体保持物理接触。'''<font color=#ff8000>莫拉维克悖论 Moravec's Paradox</font>'''概括了这样一个反直觉的事实:人类习以为常的低水平感知运动技能,反而很难通过编程赋予机器人。这个悖论以汉斯·莫拉维克的名字命名,他在1988年表示:“让计算机在智力测试或下跳棋中展现出成人水平的表现相对容易,但要让计算机拥有一岁小孩的感知和移动能力却很难,甚至不可能。”这是因为,身体灵巧性在数百万年的自然选择中一直是一个直接的目标;与此相比,跳棋技能则是一种奢侈品,并不被以生存为导向的自然选择所偏好。
 
=== 社会智能 ===

<!-- This is linked to in the introduction -->
{{Main|Affective computing}}

[[File:Kismet robot at MIT Museum.jpg|thumb|Kismet,一个具有基本社交技能的机器人]]
    
Moravec's paradox can be extended to many forms of social intelligence.<ref>{{cite magazine |last1=Thompson|first1=Derek|title=What Jobs Will the Robots Take?|url=https://www.theatlantic.com/business/archive/2014/01/what-jobs-will-the-robots-take/283239/|accessdate=24 April 2018|magazine=The Atlantic|date=2018}}</ref><ref>{{cite journal|last1=Scassellati|first1=Brian|title=Theory of mind for a humanoid robot|journal=Autonomous Robots|volume=12|issue=1|year=2002|pages=13–24|doi=10.1023/A:1013298507114}}</ref> Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.<ref>{{cite journal|last1=Cao|first1=Yongcan|last2=Yu|first2=Wenwu|last3=Ren|first3=Wei|last4=Chen|first4=Guanrong|title=An Overview of Recent Progress in the Study of Distributed Multi-Agent Coordination|journal=IEEE Transactions on Industrial Informatics|date=February 2013|volume=9|issue=1|pages=427–438|doi=10.1109/TII.2012.2219061|arxiv=1207.3231}}</ref> [[Affective computing]] is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human [[Affect (psychology)|affects]].{{sfn|Thro|1993}}{{sfn|Edelson|1991}}{{sfn|Tao|Tan|2005}} Moderate successes related to affective computing include textual [[sentiment analysis]] and, more recently, multimodal affect analysis (see [[multimodal sentiment analysis]]), wherein AI classifies the affects displayed by a videotaped subject.<ref>{{cite journal|last1=Poria|first1=Soujanya|last2=Cambria|first2=Erik|last3=Bajpai|first3=Rajiv|last4=Hussain|first4=Amir|title=A review of affective computing: From unimodal analysis to multimodal fusion|journal=Information Fusion|date=September 2017|volume=37|pages=98–125|doi=10.1016/j.inffus.2017.02.003|hdl=1893/25490|hdl-access=free}}</ref>
 
      
莫拉维克悖论可以扩展到社会智能的许多形式。自动驾驶汽车分布式多智能体协调一直是一个难题。情感计算是一个跨学科交叉领域,包括了识别、解释、处理、模拟人的情感的系统。与情感计算相关的一些还算成功的领域有文本情感分析,以及最近的'''<font color=#ff8000>多模态情感分析 Multimodal Affect Analysis</font>''' ,多模态情感分析中AI可以做到将录像中被试表现出的情感进行分类。
 
In the long run, social skills and an understanding of human emotion and [[game theory]] would be valuable to a social agent. Being able to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions. Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate [[human–computer interaction]].<ref name="Emotion and affective computing"/> Similarly, some [[virtual assistant]]s are programmed to speak conversationally or even to banter humorously; this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are.<ref>{{cite magazine|last1=Waddell|first1=Kaveh|title=Chatbots Have Entered the Uncanny Valley|url=https://www.theatlantic.com/technology/archive/2017/04/uncanny-valley-digital-assistants/523806/|accessdate=24 April 2018|magazine=The Atlantic|date=2018}}</ref>
 
      
从长远来看,社交技巧以及对人类情感和博弈论的理解,对社会智能体而言价值很高。能够通过理解他人的动机和情绪状态来预测他人的行为,会让智能体做出更好的决策。有些计算机系统模仿人类的情感和表情,以显得对人类交互中的情感动态更为敏感,或以其他方式促进人机交互。类似地,一些虚拟助手被编程为以对话的方式交谈,甚至幽默地打趣;这往往会让不熟悉技术的用户对现有计算机智能体的实际智能水平产生不切实际的认识。
 
=== 通用智能 General intelligence ===
<!-- This is linked to in the introduction -->
{{Main|Artificial general intelligence|AI-complete}}
      
Historically, projects such as the Cyc knowledge base (1984–) and the massive Japanese [[Fifth generation computer|Fifth Generation Computer Systems]] initiative (1982–1992) attempted to cover the breadth of human cognition. These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI. Nowadays, the vast majority of current AI researchers work instead on tractable "narrow AI" applications (such as medical diagnosis or automobile navigation).<ref name="contemporary agi">{{cite book|last1=Pennachin|first1=C.|last2=Goertzel|first2=B.|title=Contemporary Approaches to Artificial General Intelligence|journal=Artificial General Intelligence. Cognitive Technologies|date=2007|doi=10.1007/978-3-540-68677-4_1|publisher=Springer|location=Berlin, Heidelberg|series=Cognitive Technologies|isbn=978-3-540-23733-4}}</ref> Many researchers predict that such "narrow AI" work in different individual domains will eventually be incorporated into a machine with [[artificial general intelligence]] (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.<ref name="General intelligence"/><ref name="Roberts">{{cite magazine|last1=Roberts|first1=Jacob|title=Thinking Machines: The Search for Artificial Intelligence|magazine=Distillations|date=2016|volume=2|issue=2|pages=14–23|url=https://www.sciencehistory.org/distillations/magazine/thinking-machines-the-search-for-artificial-intelligence|accessdate=20 March 2018|archive-url=https://web.archive.org/web/20180819152455/https://www.sciencehistory.org/distillations/magazine/thinking-machines-the-search-for-artificial-intelligence|archive-date=19 August 2018|url-status=dead}}</ref> Many advances have general, cross-domain significance. 
One high-profile example is that [[DeepMind]] in the 2010s developed a "generalized artificial intelligence" that could learn many diverse [[Atari 2600|Atari]] games on its own, and later developed a variant of the system which succeeds at [[Catastrophic interference#The Sequential Learning Problem: McCloskey and Cohen (1989)|sequential learning]].<ref>{{cite news|title=The superhero of artificial intelligence: can this genius keep it in check?|url=https://www.theguardian.com/technology/2016/feb/16/demis-hassabis-artificial-intelligence-deepmind-alphago|accessdate=26 April 2018|work=the Guardian|date=16 February 2016|language=en}}</ref><ref>{{cite journal|last1=Mnih|first1=Volodymyr|last2=Kavukcuoglu|first2=Koray|last3=Silver|first3=David|last4=Rusu|first4=Andrei A.|last5=Veness|first5=Joel|last6=Bellemare|first6=Marc G.|last7=Graves|first7=Alex|last8=Riedmiller|first8=Martin|last9=Fidjeland|first9=Andreas K.|last10=Ostrovski|first10=Georg|last11=Petersen|first11=Stig|last12=Beattie|first12=Charles|last13=Sadik|first13=Amir|last14=Antonoglou|first14=Ioannis|last15=King|first15=Helen|last16=Kumaran|first16=Dharshan|last17=Wierstra|first17=Daan|last18=Legg|first18=Shane|last19=Hassabis|first19=Demis|title=Human-level control through deep reinforcement learning|journal=Nature|date=26 February 2015|volume=518|issue=7540|pages=529–533|doi=10.1038/nature14236|pmid=25719670|bibcode=2015Natur.518..529M}}</ref><ref>{{cite news|last1=Sample|first1=Ian|title=Google's DeepMind makes AI program that can learn like a human|url=https://www.theguardian.com/global/2017/mar/14/googles-deepmind-makes-ai-program-that-can-learn-like-a-human|accessdate=26 April 2018|work=the Guardian|date=14 March 2017|language=en}}</ref> Besides [[transfer learning]],<ref>{{cite news|title=From not working to neural networking|url=https://www.economist.com/news/special-report/21700756-artificial-intelligence-boom-based-old-idea-modern-twist-not|accessdate=26 April 2018|work=The 
Economist|date=2016|language=en}}</ref> hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to "slurp up" a comprehensive knowledge base from the entire unstructured [[World Wide Web|Web]].{{sfn|Russell|Norvig|2009|chapter=27. AI: The Present and Future}} Some argue that some kind of (currently-undiscovered) conceptually straightforward, but mathematically difficult, "Master Algorithm" could lead to AGI.{{sfn|Domingos|2015|chapter=9. The Pieces of the Puzzle Fall into Place}} Finally, a few "emergent" approaches look to simulating human intelligence extremely closely, and believe that [[anthropomorphism|anthropomorphic]] features like an [[artificial brain]] or simulated [[developmental robotics|child development]] may someday reach a critical point where general intelligence emerges.<ref name="Brain simulation"/><ref>{{cite journal|last1=Goertzel|first1=Ben|last2=Lian|first2=Ruiting|last3=Arel|first3=Itamar|last4=de Garis|first4=Hugo|last5=Chen|first5=Shuo|title=A world survey of artificial brain projects, Part II: Biologically inspired cognitive architectures|journal=Neurocomputing|date=December 2010|volume=74|issue=1–3|pages=30–49|doi=10.1016/j.neucom.2010.08.012}}</ref>
 
历史上,诸如 Cyc 知识库(1984–)和大规模的日本第五代计算机系统计划(1982–1992)等项目试图涵盖人类认知的全部广度。这些早期项目未能摆脱非定量符号逻辑模型的局限,现在回过头看,它们大大低估了实现跨领域AI的难度。当下绝大多数AI研究人员主要研究易于处理的“狭义AI”应用(如医疗诊断或汽车导航)。许多研究人员预测,不同领域的“狭义AI”工作最终将被整合到一台具有人工通用智能(AGI)的机器中,结合上文提到的大多数狭义功能,甚至在某种程度上在大多数或所有这些领域都超过人类。许多进展具有普遍的、跨领域的意义。一个著名的例子是,2010年代,DeepMind开发了一种“'''<font color=#ff8000>通用人工智能 Generalized Artificial Intelligence</font>'''”,它可以自己学习许多不同的 Atari 游戏,后来又开发了该系统的一个变体,在序贯学习方面取得了成功。除了迁移学习,未来AGI的突破可能还包括开发能够进行决策理论元推理的反思架构,以及研究如何从整个非结构化的网页中“汲取”出一个全面的知识库。一些人认为,某种(目前尚未发现的)概念上简单、但数学上困难的“终极算法”可以产生AGI。最后,一些“涌现”方法着眼于尽可能逼真地模拟人类智能,并相信如人工大脑或模拟儿童发展等拟人方案,终有一天会达到一个临界点,使通用智能从中涌现。
 
Many of the problems in this article may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like [[machine translation]], require that a machine read and write in both languages ([[#Natural language processing|NLP]]), follow the author's argument ([[#Deduction, reasoning, problem solving|reason]]), know what is being talked about ([[#Knowledge representation|knowledge]]), and faithfully reproduce the author's original intent ([[#Social intelligence|social intelligence]]). A problem like machine translation is considered "[[AI-complete]]", because all of these problems need to be solved simultaneously in order to reach human-level machine performance.
      
如果机器要像人一样解决问题,那么本文中的许多问题可能同样需要通用智能。例如,即使是像机器翻译这样具体而直接的任务,也要求机器能用两种语言进行读写(NLP),理解作者的论证思路(推理),知道所谈论的内容(知识),并忠实地再现作者的原始意图(社会智能)。像机器翻译这样的问题被认为是“AI完备”的,因为要达到人类水平的机器性能,就需要同时解决所有这些问题。
 
== 方法 ==
 