Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or "rules of thumb", that have worked well in the past), or can themselves write other algorithms. Some of the "learners" described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically (given infinite data, time, and memory) learn to approximate any function, including which combination of mathematical functions would best describe the world. These learners could therefore derive all possible knowledge by considering every possible hypothesis and matching them against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of "combinatorial explosion", where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering a broad range of possibilities that are unlikely to be beneficial. For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.<ref>{{cite journal
 
许多AI算法可以从数据中学习;它们可以通过学习新的启发式方法(过去起作用的策略,或“经验法则”),或者自己编写其他算法来强化自己。下面介绍的一些“学习者”,包括'''<font color=#ff8000>贝叶斯网络 Bayesian Networks</font>'''、'''<font color=#ff8000>决策树 Decision Trees</font>'''和'''<font color=#ff8000>最近邻 Nearest-neighbor</font>''',在理论上(给定无限的数据、时间和记忆)可以学习近似任何函数,包括哪种数学函数的组合能最好地描述世界。因此,这些学习者可以通过考虑每一种可能的假设并将它们与数据进行匹配,从而获得所有可能的知识。实际上,考虑所有的可能性几乎是不可能的,因为这会导致“'''<font color=#ff8000>组合爆炸 Combinatorial Explosion</font>'''”,即解决一个问题所需的时间呈指数级增长。很多AI研究都在探索如何识别并避免考虑大量不太可能有益的可能性。例如,当看地图寻找从丹佛到东边纽约的最短行驶路线时,大部分情况下可以跳过经过西边旧金山或其他远在西边地区的路径;因此,一个使用像A*这样的寻路算法的AI可以避免逐一考虑每条可能路线所造成的组合爆炸。
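下面给出一个极简的 A* 寻路示意(Python 草图;网格大小、障碍集合与曼哈顿启发函数均为假设的示例设定),展示启发函数如何引导搜索、避免盲目枚举所有路线:

<syntaxhighlight lang="python">
import heapq
import itertools

def a_star(start, goal, neighbors):
    """A* 搜索:f(n) = g(n) + h(n)。h 取曼哈顿距离,
    不会高估真实代价,因此找到的路径是最优的。"""
    def h(n):
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    tie = itertools.count()                    # 平手计数器,避免直接比较节点
    frontier = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue                           # 已以更小代价扩展过
        came_from[node] = parent
        if node == goal:                       # 回溯重建路径
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, node))
    return None                                # 无可行路径

# 用法示例:4×4 网格、四邻接;walls 为假设的障碍集合
walls = {(1, 1), (1, 2)}
def grid_neighbors(n):
    x, y = n
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(p, 1) for p in cand
            if 0 <= p[0] < 4 and 0 <= p[1] < 4 and p not in walls]

print(a_star((0, 0), (3, 3), grid_neighbors))
</syntaxhighlight>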
    
   
 
   
Compared with humans, existing AI lacks several features of human "commonsense reasoning"; most notably, humans have powerful mechanisms for reasoning about "naïve physics" such as space, time, and physical interactions. This enables even young children to easily make inferences like "If I roll this pen off a table, it will fall on the floor". Humans also have a powerful mechanism of "folk psychology" that helps them to interpret natural-language sentences such as "The city councilmen refused the demonstrators a permit because they advocated violence". (A generic AI has difficulty discerning whether the ones alleged to be advocating violence are the councilmen or the demonstrators.) This lack of "common knowledge" means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location or the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.
 
与人类相比,现有的AI缺少人类“常识推理”的几个特征;最值得注意的是,人类拥有对空间、时间和物理相互作用等“朴素物理”进行推理的强大机制。这使得即使是小孩子也能够轻易地做出推论,比如“如果我把这支笔从桌子上滚下去,它就会掉到地板上”。人类还有一种强大的“朴素心理学”机制,帮助他们理解诸如“市议员因为示威者鼓吹暴力而拒绝给予许可”这样的自然语言语句,而一般的AI难以辨别被指控鼓吹暴力的是议员还是示威者。这种“常识”的缺乏意味着AI经常会犯一些与人类不同的错误,而且这些错误看起来难以理解。例如,现有的自动驾驶汽车无法完全像人类那样推理行人的位置和意图,而只能使用非人类的推理模式来避免事故。
The cognitive capabilities of current architectures are very limited, capturing only a simplified version of what intelligence is really capable of. For instance, the human mind has come up with ways to reason beyond measure and to give logical explanations for different occurrences in life. A problem that is straightforward for the human mind may be challenging to solve computationally. This gives rise to two classes of models: structuralist and functionalist. Structural models aim to loosely mimic the basic intelligence operations of the mind, such as reasoning and logic. Functional models describe intelligence in terms of the correlation between data and its computed counterpart.
 
当前架构的认知能力非常有限,只实现了智能真正能力的一个简化版本。例如,人类的大脑已经发展出各种方法,来进行难以度量的推理,并对生活中的各种事件给出逻辑解释。一个对人类思维来说直截了当的问题,对计算机来说可能相当具有挑战性。这就产生了两类模型:'''<font color=#ff8000>结构主义 Structuralist</font>'''和'''<font color=#ff8000>功能主义 Functionalist</font>'''。结构模型旨在大致模拟大脑的基本智能操作,如推理和逻辑;功能模型则关注数据与其计算对应物之间的关联。
Default reasoning and the qualification problem: Many of the things people know take the form of "working assumptions". For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.
 
'''<font color=#ff8000>缺省推理 Default Reasoning</font>'''和'''<font color=#ff8000>限定性问题 Qualification Problem</font>''':人们所知道的许多事情都以“工作假设”的形式存在。例如,提到鸟,人们通常会想象一只拳头大小、会唱歌、会飞的动物,但并不是所有鸟类都有这样的特性。1969年,约翰·麦卡锡将这一问题确认为限定性问题:对于AI研究人员想要表示的任何常识性规则来说,往往存在大量的例外。几乎没有什么事情在抽象逻辑所要求的意义上是完全真或完全假的。AI研究探索了许多解决这个问题的方法。
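下面用一个极简的 Python 草图示意缺省推理与限定性问题(物种、属性与例外表均为假设的示例):

<syntaxhighlight lang="python">
# 缺省规则“鸟:会飞、拳头大小、会唱歌”,外加显式列出的例外。
DEFAULTS = {"bird": {"can_fly": True, "size": "fist-sized", "sings": True}}
EXCEPTIONS = {
    "penguin": {"can_fly": False, "sings": False},
    "ostrich": {"can_fly": False, "size": "large"},
}

def infer(kind, species):
    """先取缺省属性,再用已知例外覆盖。
    任何未被列出的例外仍会推错——这正是限定性问题:例外几乎列不完。"""
    facts = dict(DEFAULTS[kind])
    facts.update(EXCEPTIONS.get(species, {}))
    return facts

print(infer("bird", "sparrow"))   # 按缺省规则:会飞
print(infer("bird", "penguin"))   # 例外覆盖缺省:不会飞
</syntaxhighlight>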
    
;Breadth of commonsense knowledge: The number of atomic facts that the average person knows is very large. Research projects that attempt to build a complete knowledge base of [[commonsense knowledge]] (e.g., [[Cyc]]) require enormous amounts of laborious [[ontology engineering|ontological engineering]]—they must be built, by hand, one complicated concept at a time.<ref name="Breadth of commonsense knowledge"/>
 
常识的广度:普通人所知道的原子事实的数量非常庞大。试图建立像 Cyc 这样完整的常识知识库的研究项目,需要大量费力的本体工程——这些复杂的概念必须由人工逐一构建。
    
   --[[用户:Thingamabob|Thingamabob]]([[用户讨论:Thingamabob|讨论]])。想要建立一个像Cyc一样的完整的常识库  一句为省译
 
Subsymbolic form of some commonsense knowledge: Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed" or an art critic can take one look at a statue and realize that it is a fake. These are non-conscious and sub-symbolic intuitions or tendencies in the human brain. Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.
 
常识的'''<font color=#ff8000>亚符号 Subsymbolic</font>'''形式:人们所知道的许多东西并不能表示为可以口头表达的“事实”或“陈述”。例如,国际象棋大师会避开某个特定的局面,因为它“感觉太容易受攻击”;艺术评论家看一眼雕像就能知道它是赝品。这些是人类大脑中无意识的、亚符号的直觉或倾向。这类知识为符号化的、有意识的知识提供信息、支撑和语境。与亚符号推理的相关问题一样,人们希望情境AI、计算智能或统计AI能够提供表示这类知识的方法。
    
  --[[用户:Thingamabob|Thingamabob]]([[用户讨论:Thingamabob|讨论]]) 这种知识为符号化的、有意识的知识提供信息和语境 一句为省译
 
Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future—a representation of the state of the world, with the ability to make predictions about how their actions will change it—and to make choices that maximize the utility (or "value") of the available choices.
 
智能体必须能够设定并实现目标。它们需要某种方式来设想未来——即对世界状态的一种表征——并能够预测自己的行动将如何改变世界,从而做出使可选行动的效用(或“价值”)最大化的选择。
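下面是一个极简的期望效用最大化示意(Python 草图;动作、结果概率与效用数值均为假设的示例数据):

<syntaxhighlight lang="python">
# 每个动作对应若干 (概率, 效用) 的可能结果;数值为随意设定的示例。
actions = {
    "route_A": [(0.9, 10), (0.1, -5)],
    "route_B": [(0.5, 20), (0.5, -10)],
}

def expected_utility(outcomes):
    """期望效用 = Σ 概率 × 效用。"""
    return sum(p * u for p, u in outcomes)

# 智能体选择期望效用最大的动作
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))   # route_A 8.5
</syntaxhighlight>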
    
   --[[用户:Thingamabob|Thingamabob]]([[用户讨论:Thingamabob|讨论]]) 并能够预测他们的行动将如何改变环境——依此能够选择使效用(或者“价值”)最大化的选项 一句为省译
 
Unsupervised learning is the ability to find patterns in a stream of input, without requiring a human to label the inputs first. Supervised learning includes both classification and numerical regression, which requires a human to label the input data first. Classification is used to determine what category something belongs in, and occurs after a program sees a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
 
'''<font color=#ff8000>无监督学习 Unsupervised Learning</font>'''可以从输入流中发现某种模式,而不需要人类提前标注输入。'''<font color=#ff8000>有监督学习 Supervised Learning</font>'''包括分类和数值回归,这需要人类首先标注输入数据。分类用于确定某物属于哪个类别,程序需要先看过来自多个类别的大量例子;回归用来产生一个描述输入和输出之间关系的函数,并预测输出会如何随着输入的变化而变化。在强化学习中,智能体会因为好的回应而受到奖励,因为坏的回应而受到惩罚;智能体通过这一系列的奖励和惩罚形成一个在其问题空间中可施行的策略。
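下面用一个极简的 Python 草图对比无监督聚类、有监督分类与回归(假设环境中已安装 scikit-learn;特征与标签均为随意构造的示例数据):

<syntaxhighlight lang="python">
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

X = [[0, 0], [0, 1], [5, 5], [6, 5]]   # 输入特征
y_cls = ["A", "A", "B", "B"]           # 人工标注的类别标签(分类用)
y_reg = [0.1, 0.9, 10.2, 11.1]         # 人工标注的连续输出(回归用)

# 无监督:不看标签,自行从输入中发现两个分组
print(KMeans(n_clusters=2, n_init=10).fit_predict(X))

# 有监督分类:先看过带标签的例子,再判断新输入属于哪一类
print(KNeighborsClassifier(n_neighbors=1).fit(X, y_cls).predict([[5, 6]]))

# 有监督回归:拟合输入到输出的函数,预测输出如何随输入变化
print(LinearRegression().fit(X, y_reg).predict([[3, 3]]))
</syntaxhighlight>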
[[Natural language processing]]<ref name="Natural language processing"/> (NLP) gives machines the ability to read and [[natural language understanding|understand]] human language. A sufficiently powerful natural language processing system would enable [[natural-language user interface]]s and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include [[information retrieval]], [[text mining]], [[question answering]]<ref>[https://www.academia.edu/2475776/Versatile_question_answering_systems_seeing_in_synthesis "Versatile question answering systems: seeing in synthesis"] {{webarchive|url=https://web.archive.org/web/20160201125047/http://www.academia.edu/2475776/Versatile_question_answering_systems_seeing_in_synthesis |date=1 February 2016 }}, Mittal et al., IJIIDS, 5(2), 119–142, 2011
 
    
自然语言处理(NLP)赋予机器阅读和理解人类语言的能力。一个足够强大的自然语言处理系统可以提供自然语言用户界面,并能直接从新闻专线文本等人类撰写的来源中获取知识。自然语言处理的一些直接应用包括信息检索、文本挖掘、问答和机器翻译。目前许多方法使用词的共现频率来构建文本的句法表示。用“关键词定位”策略进行搜索很常见且可扩展,但很粗糙:搜索“狗”可能只匹配含“狗”字的文档,而漏掉只含“犬”字的文档。“词汇相关性”策略使用如“事故”这样的词出现的频次,来评估文本想表达的情感。现代统计NLP方法可以结合所有这些策略以及其他策略,在以页或段落为单位的处理上获得尚可接受的准确度,但仍然缺乏对单独句子进行分类所需的语义理解。除了编码语义常识的常见困难外,现有的语义NLP有时可扩展性太差,无法应用到商业中。而“叙述性”NLP不仅要实现语义NLP的功能,最终还要能充分理解常识推理。
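下面用一个极简的 Python 草图示意“关键词定位”检索为何粗糙,以及一个人工同义词表如何缓解“狗/犬”之类的漏配(文档与词表均为假设的示例):

<syntaxhighlight lang="python">
docs = ["我家的狗很乖", "这只犬训练有素", "今天天气不错"]

def keyword_search(query, docs):
    """朴素关键词匹配:搜“狗”会漏掉只写“犬”的文档。"""
    return [d for d in docs if query in d]

SYNONYMS = {"狗": {"狗", "犬"}}   # 人工维护的同义词表(示例设定)

def expanded_search(query, docs):
    """先做同义词扩展,再匹配,可减少漏配。"""
    terms = SYNONYMS.get(query, {query})
    return [d for d in docs if any(t in d for t in terms)]

print(keyword_search("狗", docs))    # 只命中 1 篇
print(expanded_search("狗", docs))   # 命中 2 篇
</syntaxhighlight>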
 
AI is heavily used in robotics. Moravec's paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility". This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.
 
AI在机器人学中应用广泛。在现代工厂中广泛使用的高级机械臂和其他工业机器人,可以从经验中学习如何在存在摩擦和齿轮滑移的情况下高效地移动。当处在一个静态且可见的小环境中时,现代移动机器人可以很容易地确定自己的位置并绘制环境地图;然而如果是动态环境,比如用内窥镜检查正在呼吸的病人的身体内部,难度就会更高。运动规划是将一个运动任务分解为如单个关节运动这样的“基本任务”的过程,这种运动通常包括顺应运动,即需要与物体保持物理接触的运动(见下方示意)。'''<font color=#ff8000>莫拉维克悖论 Moravec's Paradox</font>'''概括了这样一个事实:人类习以为常的低水平感知运动技能,反直觉地,很难编程给机器人。这个悖论以汉斯·莫拉维克的名字命名,他在1988年表示:“让计算机在智力测试或下跳棋中展现出成人水平的表现相对容易,但要让计算机拥有一岁小孩的感知和移动能力却很难,甚至不可能。”这是因为,身体灵巧性在数百万年的自然选择中一直是一个直接的目标,它增强了人类的生存能力;与此相比,跳棋技能则是奢侈品,“擅长跳棋”并不是生存导向的自然选择所直接偏好的特征。
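下面给出一个极简的运动规划示意(Python 草图):把“将手端移到目标点”的任务分解为两个关节角这样的“基本任务”。两连杆平面机械臂及其连杆长度均为假设的示例设定:

<syntaxhighlight lang="python">
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """两连杆平面机械臂的解析逆运动学(余弦定理)。
    输入目标点 (x, y),输出两个关节角——每个关节的单独运动即“基本任务”。"""
    d = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(d) > 1:
        raise ValueError("目标超出工作空间")
    q2 = math.acos(d)                                   # 肘关节角
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

# 用法示例:把手端移到 (1.2, 0.8),打印两个关节角(度)
print([round(math.degrees(q), 1) for q in two_link_ik(1.2, 0.8)])
</syntaxhighlight>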
         
  --[[用户:Thingamabob|Thingamabob]]([[用户讨论:Thingamabob|讨论]])这是因为,与跳棋不同,身体灵巧性一直在数百万年的自然选择后才形成的。一句为意译
 
--[[用户:Paradoxist-Paradoxer|Paradoxist@Paradoxer]]([[用户讨论:Paradoxist-Paradoxer|讨论]])应强调自然选择的目标是如何的——修改为如上更佳。
Moravec's paradox can be extended to many forms of social intelligence. Distributed multi-agent coordination of autonomous vehicles remains a difficult problem. Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.
 
莫拉维克悖论可以扩展到社会智能的许多形式。自动驾驶汽车的分布式多智能体协调一直是一个难题。情感计算是一个跨学科交叉领域,包括识别、解释、处理、模拟人类情感的系统。与情感计算相关的一些还算成功的领域有文本情感分析,以及最近的'''<font color=#ff8000>多模态情感分析 Multimodal Affect Analysis</font>''',即由AI对录像中被试表现出的情感进行分类。
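下面是一个极简的文本情感分析示意(Python 草图;情感词表及其权重均为假设的示例):

<syntaxhighlight lang="python">
# 基于情感词表的打分:正数偏积极,负数偏消极;词表与权重为随意设定的示例。
LEXICON = {"喜欢": 1, "满意": 1, "讨厌": -1, "糟糕": -2}

def sentiment(text):
    """把文本中命中的情感词权重求和,作为粗糙的情感得分。"""
    return sum(w for word, w in LEXICON.items() if word in text)

print(sentiment("我很喜欢这个产品,非常满意"))   # 正分:倾向积极
print(sentiment("体验糟糕,我讨厌排队"))         # 负分:倾向消极
</syntaxhighlight>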
In the long run, social skills and an understanding of human emotion and game theory would be valuable to a social agent. Being able to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions. Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.
 
从长远来看,社交技巧以及对人类情感和博弈论的理解对社会智能体很有价值。能够通过理解他人的动机和情绪状态来预测他人的行为,会让智能体做出更好的决策。有些计算机系统模仿人类的情感和表情,以便对人类交互中的情感动态表现得更敏感,或以此促进人机交互。
Historically, projects such as the Cyc knowledge base (1984–) and the massive Japanese Fifth Generation Computer Systems initiative (1982–1992) attempted to cover the breadth of human cognition. These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI. Nowadays, the vast majority of current AI researchers work instead on tractable "narrow AI" applications (such as medical diagnosis or automobile navigation). Many researchers predict that such "narrow AI" work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas. Many advances have general, cross-domain significance. One high-profile example is that DeepMind in the 2010s developed a "generalized artificial intelligence" that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning. Besides transfer learning, hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to "slurp up" a comprehensive knowledge base from the entire unstructured Web. Some argue that some kind of (currently-undiscovered) conceptually straightforward, but mathematically difficult, "Master Algorithm" could lead to AGI. Finally, a few "emergent" approaches look to simulating human intelligence extremely closely, and believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.
 
历史上,诸如 Cyc 知识库(1984–)和大规模的日本第五代计算机系统计划(1982–1992)等项目试图涵盖人类认知的全部广度。这些早期项目未能摆脱非定量符号逻辑模型的限制,现在回过头看,它们大大低估了实现跨领域AI的难度。当下绝大多数AI研究人员主要研究易于处理的“狭义AI”应用(如医疗诊断或汽车导航)。许多研究人员预测,不同领域的“狭义AI”工作最终将被整合到一台具有人工通用智能(AGI)的机器中,结合上文提到的大多数狭义功能,甚至在某种程度上在大多数或所有这些领域都超过人类。许多进展具有普遍的、跨领域的意义。一个著名的例子是,2010年代 DeepMind 开发了一种“'''<font color=#ff8000>通用人工智能 Generalized Artificial Intelligence</font>'''”,它可以自己学习许多不同的 Atari 游戏,后来又开发了该系统的一个变体,在序贯学习方面取得了成功。除了迁移学习,未来 AGI 的突破可能包括开发能够进行决策论元推理的反射架构,以及研究如何从整个非结构化的网络中“吸取”出一个全面的知识库。一些人认为,某种(目前尚未发现的)概念上简单、但数学上困难的“终极算法”可能通向 AGI。最后,一些“涌现”方法着眼于尽可能逼真地模拟人类智能,并相信人工大脑或模拟儿童发展等拟人方案,有一天会达到一个临界点,通用智能从此涌现。
     
