[[Unsupervised learning]] is the ability to find patterns in a stream of input, without requiring a human to label the inputs first. [[Supervised learning]] includes both [[statistical classification|classification]] and numerical [[Regression analysis|regression]], which requires a human to label the input data first. Classification is used to determine what category something belongs in, and occurs after a program sees a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.<ref name="Machine learning"/> Both classifiers and regression learners can be viewed as "function approximators" trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, "spam" or "not spam". [[Computational learning theory]] can assess learners by [[computational complexity]], by [[sample complexity]] (how much data is required), or by other notions of [[optimization theory|optimization]].<ref>{{cite journal|last1=Jordan|first1=M. I.|last2=Mitchell|first2=T. M.|title=Machine learning: Trends, perspectives, and prospects|journal=Science|date=16 July 2015|volume=349|issue=6245|pages=255–260|doi=10.1126/science.aaa8415|pmid=26185243|bibcode=2015Sci...349..255J}}</ref> In [[reinforcement learning]]<ref name="Reinforcement learning"/> the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
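
As a minimal illustration of the "function approximator" view (toy data, Python standard library only), the sketch below fits a least-squares line to human-labeled (input, output) pairs, the supervised-regression setting described above, and then predicts an output for a new input.

<syntaxhighlight lang="python">
# Minimal illustration of supervised regression as function approximation:
# fit y ~ a*x + b to human-labeled examples by ordinary least squares.

def fit_line(points):
    """Return slope a and intercept b minimizing squared error."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    var_x = sum((x - mean_x) ** 2 for x, _ in points)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    a = cov_xy / var_x
    b = mean_y - a * mean_x
    return a, b

# Labeled training data: inputs paired with outputs supplied by a human.
data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]
a, b = fit_line(data)
print(f"learned f(x) = {a:.2f}*x + {b:.2f}")
print("prediction for x=4:", a * 4 + b)   # how output changes with input
</syntaxhighlight>
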
 
[[Natural language processing]]<ref name="Natural language processing"/> (NLP) gives machines the ability to read and [[natural language understanding|understand]] human language. A sufficiently powerful natural language processing system would enable [[natural-language user interface]]s and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include [[information retrieval]], [[text mining]], [[question answering]]<ref>[https://www.academia.edu/2475776/Versatile_question_answering_systems_seeing_in_synthesis "Versatile question answering systems: seeing in synthesis"] {{webarchive|url=https://web.archive.org/web/20160201125047/http://www.academia.edu/2475776/Versatile_question_answering_systems_seeing_in_synthesis |date=1 February 2016 }}, Mittal et al., IJIIDS, 5(2), 119–142, 2011</ref> and [[machine translation]]. Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. "Keyword spotting" strategies for search are common and scalable but crude; a query for "dog" might only match documents containing the literal word "dog" and miss documents that use a synonym such as "canine". "Lexical affinity" strategies use the occurrence of words such as "accident" to assess the sentiment expressed by a text. Modern statistical NLP approaches can combine all these strategies and others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well. Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications. Beyond semantic NLP, "narrative" NLP aims ultimately to embody a full understanding of commonsense reasoning.
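
The difference between "keyword spotting" and "lexical affinity" can be made concrete with a small sketch; the word lists below are invented stand-ins for a real lexicon, not part of any cited system.

<syntaxhighlight lang="python">
# Two toy NLP strategies from the paragraph above.

def keyword_spot(document, query):
    """Keyword spotting: literal match only, so 'dog' misses 'canine'."""
    return query in document.lower().split()

AFFINITY = {"accident": -0.8, "crash": -0.9, "wonderful": 0.9, "smooth": 0.4}

def lexical_affinity(document):
    """Score sentiment from occurrences of affect-laden words."""
    return sum(AFFINITY.get(w, 0.0) for w in document.lower().split())

doc = "the canine slept through a wonderful smooth flight"
print(keyword_spot(doc, "dog"))   # False: literal matching misses 'canine'
print(lexical_affinity(doc))      # 1.3: positive sentiment from word counts
</syntaxhighlight>
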
Machine perception is the ability to use input from sensors (such as visible-spectrum or infrared cameras, microphones, wireless signals, lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include [[speech recognition]],<ref name="Speech recognition"/> facial recognition, and [[object recognition]].<ref name="Object recognition"/> [[Computer vision]] is the ability to analyze visual input. Such input is usually ambiguous: a giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby pedestrian of normal size, which requires the AI to judge the relative likelihood and plausibility of different interpretations, for example by using its "object model" to conclude that fifty-meter pedestrians do not exist.<ref name="Computer vision"/>
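
The scale ambiguity is ordinary geometry: apparent size grows with physical size and shrinks with distance, so the two interpretations can project to the same pixels. A quick check with illustrative numbers:

<syntaxhighlight lang="python">
# Apparent (angular) size is proportional to height / distance,
# so very different scenes can project to identical pixels.
import math

def angular_size_deg(height_m, distance_m):
    return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

giant = angular_size_deg(50.0, 1000.0)    # 50 m "pedestrian", 1 km away
person = angular_size_deg(1.8, 36.0)      # 1.8 m pedestrian, 36 m away
print(f"{giant:.2f} deg vs {person:.2f} deg")  # nearly identical projections
</syntaxhighlight>
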
AI is heavily used in [[robotics]].<ref name="Robotics"/> Advanced [[robotic arm]]s and other [[industrial robot]]s, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.<ref name="Configuration space"/> A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and [[robotic mapping|map]] its environment; however, dynamic environments, such as (in [[endoscopy]]) the interior of a patient's breathing body, pose a greater challenge. [[Motion planning]] is the process of breaking down a movement task into "primitives" such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.{{sfn|Tecuci|2012}}<ref name="Robotic mapping"/><ref>{{cite journal|last1=Cadena|first1=Cesar|last2=Carlone|first2=Luca|last3=Carrillo|first3=Henry|last4=Latif|first4=Yasir|last5=Scaramuzza|first5=Davide|last6=Neira|first6=Jose|last7=Reid|first7=Ian|last8=Leonard|first8=John J.|title=Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age|journal=IEEE Transactions on Robotics|date=December 2016|volume=32|issue=6|pages=1309–1332|doi=10.1109/TRO.2016.2624754|arxiv=1606.05830|bibcode=2016arXiv160605830C}}</ref> [[Moravec's paradox]] generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after [[Hans Moravec]], who stated in 1988 that "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".<ref>{{Cite book| first = Hans | last = Moravec | year = 1988 | title = Mind Children | publisher = Harvard University Press | author-link =Hans Moravec| p=15}}</ref><ref>{{cite news|last1=Chan|first1=Szu Ping|title=This is what will happen when robots take over the world|url=https://www.telegraph.co.uk/finance/economics/11994694/Heres-what-will-happen-when-robots-take-over-the-world.html|accessdate=23 April 2018|date=15 November 2015}}</ref> This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of [[natural selection]] for millions of years.<ref name="The Economist">{{cite news|title=IKEA furniture and the limits of AI|url=https://www.economist.com/news/leaders/21740735-humans-have-had-good-run-most-recent-breakthrough-robotics-it-clear|accessdate=24 April 2018|work=The Economist|date=2018|language=en}}</ref>
 
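
A hedged sketch of what breaking a movement task into "primitives" can look like: a toy two-joint arm interpolated in joint space. The joint names and angles are invented for illustration.

<syntaxhighlight lang="python">
# Toy motion planning: break "move arm from A to B" into per-joint primitives
# by linear interpolation in joint space (angles in degrees, illustrative only).

def plan(start, goal, steps):
    """Yield intermediate joint configurations between start and goal."""
    for i in range(1, steps + 1):
        t = i / steps
        yield tuple(s + t * (g - s) for s, g in zip(start, goal))

start = (0.0, 90.0)   # (shoulder, elbow)
goal = (45.0, 30.0)
for waypoint in plan(start, goal, 5):
    print([f"{angle:.1f}" for angle in waypoint])
</syntaxhighlight>
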
Moravec's paradox can be extended to many forms of social intelligence.<ref>{{cite magazine |last1=Thompson|first1=Derek|title=What Jobs Will the Robots Take?|url=https://www.theatlantic.com/business/archive/2014/01/what-jobs-will-the-robots-take/283239/|accessdate=24 April 2018|magazine=The Atlantic|date=2018}}</ref><ref>{{cite journal|last1=Scassellati|first1=Brian|title=Theory of mind for a humanoid robot|journal=Autonomous Robots|volume=12|issue=1|year=2002|pages=13–24|doi=10.1023/A:1013298507114}}</ref> Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.<ref>{{cite journal|last1=Cao|first1=Yongcan|last2=Yu|first2=Wenwu|last3=Ren|first3=Wei|last4=Chen|first4=Guanrong|title=An Overview of Recent Progress in the Study of Distributed Multi-Agent Coordination|journal=IEEE Transactions on Industrial Informatics|date=February 2013|volume=9|issue=1|pages=427–438|doi=10.1109/TII.2012.2219061|arxiv=1207.3231}}</ref> [[Affective computing]] is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human [[Affect (psychology)|affects]].{{sfn|Thro|1993}}{{sfn|Edelson|1991}}{{sfn|Tao|Tan|2005}} Moderate successes related to affective computing include textual [[sentiment analysis]] and, more recently, multimodal affect analysis (see [[multimodal sentiment analysis]]), wherein AI classifies the affects displayed by a videotaped subject.<ref>{{cite journal|last1=Poria|first1=Soujanya|last2=Cambria|first2=Erik|last3=Bajpai|first3=Rajiv|last4=Hussain|first4=Amir|title=A review of affective computing: From unimodal analysis to multimodal fusion|journal=Information Fusion|date=September 2017|volume=37|pages=98–125|doi=10.1016/j.inffus.2017.02.003|hdl=1893/25490|hdl-access=free}}</ref>
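
One simple (and certainly not the only) way to read "multimodal affect analysis" is late fusion of per-modality scores. The numbers below are invented placeholders standing in for the outputs of text, audio and video classifiers:

<syntaxhighlight lang="python">
# Late fusion of unimodal affect scores in [-1, 1].
# Scores and weights are invented placeholders, not real model output.

scores = {"text": 0.6, "audio": -0.2, "video": 0.4}
weights = {"text": 0.5, "audio": 0.2, "video": 0.3}

fused = sum(scores[m] * weights[m] for m in scores)
print(f"fused affect estimate: {fused:+.2f}")   # > 0 suggests positive affect
</syntaxhighlight>
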
 
In the long run, social skills and an understanding of human emotion and [[game theory]] would be valuable to a social agent. Being able to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions. Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate [[human–computer interaction]].<ref name="Emotion and affective computing"/> Similarly, some [[virtual assistant]]s are programmed to speak conversationally or even to banter humorously; this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are.<ref>{{cite magazine|last1=Waddell|first1=Kaveh|title=Chatbots Have Entered the Uncanny Valley|url=https://www.theatlantic.com/technology/archive/2017/04/uncanny-valley-digital-assistants/523806/|accessdate=24 April 2018|magazine=The Atlantic|date=2018}}</ref>
 
=== General intelligence ===
 
Historically, projects such as the Cyc knowledge base (1984–) and the massive Japanese [[Fifth generation computer|Fifth Generation Computer Systems]] initiative (1982–1992) attempted to cover the breadth of human cognition. These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI. Nowadays, the vast majority of current AI researchers work instead on tractable "narrow AI" applications (such as medical diagnosis or automobile navigation).<ref name="contemporary agi">{{cite book|last1=Pennachin|first1=C.|last2=Goertzel|first2=B.|title=Contemporary Approaches to Artificial General Intelligence|journal=Artificial General Intelligence. Cognitive Technologies|date=2007|doi=10.1007/978-3-540-68677-4_1|publisher=Springer|location=Berlin, Heidelberg|series=Cognitive Technologies|isbn=978-3-540-23733-4}}</ref> Many researchers predict that such "narrow AI" work in different individual domains will eventually be incorporated into a machine with [[artificial general intelligence]] (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.<ref name="General intelligence"/><ref name="Roberts">{{cite magazine|last1=Roberts|first1=Jacob|title=Thinking Machines: The Search for Artificial Intelligence|magazine=Distillations|date=2016|volume=2|issue=2|pages=14–23|url=https://www.sciencehistory.org/distillations/magazine/thinking-machines-the-search-for-artificial-intelligence|accessdate=20 March 2018|archive-url=https://web.archive.org/web/20180819152455/https://www.sciencehistory.org/distillations/magazine/thinking-machines-the-search-for-artificial-intelligence|archive-date=19 August 2018|url-status=dead}}</ref> Many advances have general, cross-domain significance. 
One high-profile example is that [[DeepMind]] in the 2010s developed a "generalized artificial intelligence" that could learn many diverse [[Atari 2600|Atari]] games on its own, and later developed a variant of the system which succeeds at [[Catastrophic interference#The Sequential Learning Problem: McCloskey and Cohen (1989)|sequential learning]].<ref>{{cite news|title=The superhero of artificial intelligence: can this genius keep it in check?|url=https://www.theguardian.com/technology/2016/feb/16/demis-hassabis-artificial-intelligence-deepmind-alphago|accessdate=26 April 2018|work=the Guardian|date=16 February 2016|language=en}}</ref><ref>{{cite journal|last1=Mnih|first1=Volodymyr|last2=Kavukcuoglu|first2=Koray|last3=Silver|first3=David|last4=Rusu|first4=Andrei A.|last5=Veness|first5=Joel|last6=Bellemare|first6=Marc G.|last7=Graves|first7=Alex|last8=Riedmiller|first8=Martin|last9=Fidjeland|first9=Andreas K.|last10=Ostrovski|first10=Georg|last11=Petersen|first11=Stig|last12=Beattie|first12=Charles|last13=Sadik|first13=Amir|last14=Antonoglou|first14=Ioannis|last15=King|first15=Helen|last16=Kumaran|first16=Dharshan|last17=Wierstra|first17=Daan|last18=Legg|first18=Shane|last19=Hassabis|first19=Demis|title=Human-level control through deep reinforcement learning|journal=Nature|date=26 February 2015|volume=518|issue=7540|pages=529–533|doi=10.1038/nature14236|pmid=25719670|bibcode=2015Natur.518..529M}}</ref><ref>{{cite news|last1=Sample|first1=Ian|title=Google's DeepMind makes AI program that can learn like a human|url=https://www.theguardian.com/global/2017/mar/14/googles-deepmind-makes-ai-program-that-can-learn-like-a-human|accessdate=26 April 2018|work=the Guardian|date=14 March 2017|language=en}}</ref> Besides [[transfer learning]],<ref>{{cite news|title=From not working to neural networking|url=https://www.economist.com/news/special-report/21700756-artificial-intelligence-boom-based-old-idea-modern-twist-not|accessdate=26 April 2018|work=The Economist|date=2016|language=en}}</ref> hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to "slurp up" a comprehensive knowledge base from the entire unstructured [[World Wide Web|Web]].{{sfn|Russell|Norvig|2009|chapter=27. AI: The Present and Future}} Some argue that some kind of (currently-undiscovered) conceptually straightforward, but mathematically difficult, "Master Algorithm" could lead to AGI.{{sfn|Domingos|2015|chapter=9. The Pieces of the Puzzle Fall into Place}} Finally, a few "emergent" approaches look to simulating human intelligence extremely closely, and believe that [[anthropomorphism|anthropomorphic]] features like an [[artificial brain]] or simulated [[developmental robotics|child development]] may someday reach a critical point where general intelligence emerges.<ref name="Brain simulation"/><ref>{{cite journal|last1=Goertzel|first1=Ben|last2=Lian|first2=Ruiting|last3=Arel|first3=Itamar|last4=de Garis|first4=Hugo|last5=Chen|first5=Shuo|title=A world survey of artificial brain projects, Part II: Biologically inspired cognitive architectures|journal=Neurocomputing|date=December 2010|volume=74|issue=1–3|pages=30–49|doi=10.1016/j.neucom.2010.08.012}}</ref>
 
In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as '''W. Grey Walter'''’s turtles and the '''Johns Hopkins''' Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.<ref name="AI's immediate precursors"/> By 1960, this approach was largely abandoned, although elements of it were revived in the 1980s.

Economist [[Herbert Simon]] and [[Allen Newell]] studied human problem-solving skills and attempted to formalize them; their work laid the foundations of AI, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques people use to solve problems. This tradition, centered at Carnegie Mellon University, culminated in the development of the SOAR architecture in the mid-1980s.<ref name="AI at CMU in the 60s"/><ref name="Soar"/>

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people use the same algorithms.<ref name="Biological intelligence vs. intelligence in general"/> His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.<ref name="AI at Stanford in the 60s"/> Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming.

In the mid-1980s, David Rumelhart and others renewed interest in neural networks and "'''connectionism'''". Artificial neural networks are an example of soft computing: they address problems that cannot be solved with complete logical certainty, and for which an approximate solution is often sufficient. Other soft-computing approaches to AI include '''fuzzy systems''', '''grey system theory''', '''evolutionary computation''' and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.<ref name="Computational intelligence"/>
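
To give a flavor of soft computing, here is a toy fuzzy membership function: instead of a hard true/false cutoff it returns a degree of truth. The shape of the set is an invented example:

<syntaxhighlight lang="python">
# Soft computing in one line of arithmetic: a fuzzy membership function
# gives degrees of truth instead of a hard logical cutoff.

def warm(temp_c):
    """Triangular fuzzy set 'warm', peaking at 25 C (invented shape)."""
    return max(0.0, 1.0 - abs(temp_c - 25.0) / 10.0)

for t in (15, 20, 25, 30):
    print(t, "C -> warm to degree", warm(t))   # 0.0, 0.5, 1.0, 0.5
</syntaxhighlight>
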

Much of traditional GOFAI worked well on toy models but got bogged down in endless ad hoc patches to symbolic computation that failed to generalize to the real world. However, around the 1990s AI researchers adopted sophisticated mathematical tools, such as '''hidden Markov models (HMM)''', information theory, and normative Bayesian decision theory, to compare or unify competing architectures. The shared mathematical language permitted a high level of collaboration with more established fields such as mathematics, economics or operations research. Compared with GOFAI, new "statistical learning" techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without needing a semantic understanding of the datasets. As real-world data grew, increasing emphasis was placed on testing different approaches against the same data and comparing which performed best in a broader context than that provided by idiosyncratic laboratory settings; AI research was becoming more scientific. Nowadays the results of experiments are usually rigorously measurable and are sometimes (with difficulty) reproducible.<ref name="Formal methods in AI"/><ref>{{cite news|last1=Hutson|first1=Matthew|title=Artificial intelligence faces reproducibility crisis|url=http://science.sciencemag.org/content/359/6377/725|accessdate=28 April 2018|work=[[Science Magazine|Science]]|date=16 February 2018|pages=725–726|language=en|doi=10.1126/science.359.6377.725|bibcode=2018Sci...359..725H}}</ref> Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language.{{sfn|Norvig|2012}} Critics note that the shift from GOFAI to statistical learning has often also been a shift away from [[explainable AI]]. In [[artificial general intelligence]] research, some scholars caution against over-reliance on statistical learning and argue that continuing research into GOFAI will still be necessary to attain general intelligence.{{sfn|Langley|2011}}{{sfn|Katz|2012}}
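
To make "hidden Markov model" concrete, the sketch below implements the standard forward algorithm on an invented two-state weather model; it computes the likelihood of an observation sequence, the kind of quantity used to compare competing models against shared data:

<syntaxhighlight lang="python">
# Forward algorithm for a toy hidden Markov model. All numbers are invented.

states = ("Rainy", "Sunny")
start = {"Rainy": 0.6, "Sunny": 0.4}
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def sequence_probability(observations):
    """P(observations) summed over all hidden state paths."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emit[s][obs] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

print(sequence_probability(["walk", "shop", "clean"]))
</syntaxhighlight>
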

;Intelligent agent paradigm: An intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. The simplest intelligent agents are programs that solve specific problems; more complicated agents include human beings and organizations of human beings (such as firms). The paradigm lets researchers directly compare, or even combine, different approaches to isolated problems by asking which agent best maximizes a given "goal function". An agent that solves a specific problem can use any approach that works: some agents are symbolic and logical, some are sub-symbolic artificial neural networks, and others may use new approaches. The paradigm also gives researchers a common language for communicating with other fields that likewise use the concept of an abstract agent, such as decision theory and economics. Building a complete agent requires researchers to address realistic problems of integration; for example, because sensory systems give uncertain information about the environment, decision-making systems must operate under uncertainty. The intelligent agent paradigm became widely accepted during the 1990s.<ref name="Intelligent agents"/>
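
The paradigm reduces to a short loop in code. A minimal sketch, with an invented percept encoding and goal function; a real agent would differ in every detail except the shape of the loop:

<syntaxhighlight lang="python">
# Skeleton of the agent paradigm: perceive, then choose the action that
# maximizes a goal function. Environment and actions are toy placeholders.
import random

ACTIONS = ["left", "right", "wait"]

def goal_function(percept, action):
    """Invented scoring rule standing in for the agent's objective."""
    target = percept["target_direction"]
    return 1.0 if action == target else (0.1 if action == "wait" else 0.0)

def agent(percept):
    """Pick the action with the highest expected score."""
    return max(ACTIONS, key=lambda a: goal_function(percept, a))

percept = {"target_direction": random.choice(["left", "right"])}
print("percept:", percept, "-> action:", agent(percept))
</syntaxhighlight>
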
;[[Agent architecture]]s and [[cognitive architecture]]s:Researchers have designed systems to build intelligent systems out of interacting [[intelligent agent]]s in a [[multi-agent system]].<ref name="Agent architectures"/> A [[hierarchical control system]] provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modeling.<ref name="Hierarchical control system"/> Some cognitive architectures are custom-built to solve a narrow problem; others, such as [[Soar (cognitive architecture)|Soar]], are designed to mimic human cognition and to provide insight into general intelligence. Modern extensions of Soar are [[hybrid intelligent system]]s that include both symbolic and sub-symbolic components.<ref>{{cite journal|last1=Laird|first1=John|title=Extending the Soar cognitive architecture|journal=Frontiers in Artificial Intelligence and Applications|date=2008|volume=171|page=224|citeseerx=10.1.1.77.2473}}</ref><ref>{{cite journal|last1=Lieto|first1=Antonio|last2=Lebiere|first2=Christian|last3=Oltramari|first3=Alessandro|title=The knowledge level in cognitive architectures: Current limitations and possibile developments|journal=Cognitive Systems Research|date=May 2018|volume=48|pages=39–55|doi=10.1016/j.cogsys.2017.05.001|hdl=2318/1665207|hdl-access=free}}</ref><ref>{{cite journal|last1=Lieto|first1=Antonio|last2=Bhatt|first2=Mehul|last3=Oltramari|first3=Alessandro|last4=Vernon|first4=David|title=The role of cognitive architectures in general artificial intelligence|journal=Cognitive Systems Research|date=May 2018|volume=48|pages=1–3|doi=10.1016/j.cogsys.2017.08.003|hdl=2318/1665249|hdl-access=free}}</ref>
 
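
A minimal sketch of the hierarchical-control idea, assuming invented sensor fields and rules: a fast reactive layer can override the slower deliberative layer that follows a symbolic plan.

<syntaxhighlight lang="python">
# Two-layer (hierarchical) control: a reactive layer handles immediate
# hazards, a deliberative layer pursues the plan. Rules are illustrative.

def reactive_layer(sensors):
    if sensors["obstacle_cm"] < 20:
        return "stop"                 # low-level reflex, no planning involved
    return None

def deliberative_layer(plan):
    return plan[0] if plan else "idle"   # symbolic layer: follow the plan

def control(sensors, plan):
    return reactive_layer(sensors) or deliberative_layer(plan)

print(control({"obstacle_cm": 12}, ["go_to_kitchen"]))   # -> stop
print(control({"obstacle_cm": 90}, ["go_to_kitchen"]))   # -> go_to_kitchen
</syntaxhighlight>
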
   第299行: 第299行:  
Simple exhaustive searches<ref name="Uninformed search"/> are rarely sufficient for most real-world problems: the [[search algorithm|search space]] (the number of places to search) quickly grows to [[Astronomically large|astronomical numbers]]. The result is a search that is [[Computation time|too slow]] or never completes. The solution, for many problems, is to use "[[heuristics]]" or "rules of thumb" that prioritize choices in favor of those that are more likely to reach a goal and to do so in a shorter number of steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called "[[pruning (algorithm)|pruning]] the [[search tree]]"). [[Heuristics]] supply the program with a "best guess" for the path on which the solution lies.<ref name="Informed search"/> Heuristics limit the search for solutions into a smaller sample size.{{sfn|Tecuci|2012}}
 
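
A compact example of heuristic search: A* on a small grid, with the Manhattan distance as the "best guess" that prioritizes promising moves and effectively prunes the rest. The grid and obstacles are arbitrary:

<syntaxhighlight lang="python">
# Heuristic search on a 5x5 grid: A* guided by the Manhattan distance.
import heapq

def astar(start, goal, blocked, size=5):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (priority, cost, node, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in blocked:
                heapq.heappush(frontier,
                               (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None

print(astar((0, 0), (4, 4), blocked={(1, 1), (2, 2), (3, 3)}))
</syntaxhighlight>
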
       
A very different kind of search came to prominence in the 1990s, based on the mathematical theory of [[optimization (mathematics)|optimization]]. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind [[hill climbing]]: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are [[simulated annealing]], [[beam search]] and [[random optimization]].<ref name="Optimization search"/>
 
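
Hill climbing itself fits in a few lines. A toy sketch with a single-peak objective; simulated annealing differs mainly by sometimes accepting downhill moves:

<syntaxhighlight lang="python">
# Blind hill climbing: start from a random guess and keep any
# neighboring step that improves the objective.
import random

def objective(x):
    return -(x - 3.0) ** 2          # single peak at x = 3

x = random.uniform(-10, 10)
step = 0.1
for _ in range(10_000):
    candidate = x + random.choice((-step, step))
    if objective(candidate) > objective(x):
        x = candidate               # move "uphill", otherwise stay put
print(f"climbed to x = {x:.2f}")    # approaches 3.0
</syntaxhighlight>
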

'''Default logics''', '''non-monotonic logics''' and '''circumscription'''<ref name="Default reasoning and non-monotonic logic"/> are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as '''description logics''',<ref name="Representing categories and relations"/> situation calculus, event calculus and '''fluent calculus''' (for representing events and time),<ref name="Representing time"/> causal calculus,<ref name="Representing causation"/> belief calculus (belief revision),<ref>"The Belief Calculus and Uncertain Reasoning", Yen-Teh Hsia</ref> and modal logics.<ref name="Representing knowledge about knowledge"/> Logics have also been designed to model contradictory or inconsistent statements that arise in multi-agent systems, such as paraconsistent logics.
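
Default reasoning can be sketched in a few lines: a rule that holds by default and is withdrawn when more specific knowledge arrives, the non-monotonic behavior these logics formalize. The kinds and exceptions are invented:

<syntaxhighlight lang="python">
# Toy default reasoning: "birds fly" holds by default and is withdrawn
# when more specific knowledge arrives, i.e. non-monotonicity.

EXCEPTIONS = {"penguin", "ostrich"}

def flies(known_kinds):
    """Default rule: bird => flies, unless an exception is known."""
    return "bird" in known_kinds and not (known_kinds & EXCEPTIONS)

print(flies({"bird"}))              # True, concluded by default
print(flies({"bird", "penguin"}))   # False: more knowledge, fewer conclusions
</syntaxhighlight>
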
[[Bayesian network]]s<ref name="Bayesian networks"/> are a very general tool that can be used for various problems: reasoning (using the [[Bayesian inference]] algorithm),<ref name="Bayesian inference"/> [[Machine learning|learning]] (using the [[expectation-maximization algorithm]]),{{efn|Expectation-maximization, one of the most popular algorithms in machine learning, allows clustering in the presence of unknown [[latent variables]]{{sfn|Domingos|2015|p=210}}}}<ref name="Bayesian learning"/> [[Automated planning and scheduling|planning]] (using [[decision network]]s)<ref name="Bayesian decision networks"/> and [[machine perception|perception]] (using [[dynamic Bayesian network]]s).<ref name="Stochastic temporal models"/> Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping [[machine perception|perception]] systems to analyze processes that occur over time (e.g., [[hidden Markov model]]s or [[Kalman filter]]s).<ref name="Stochastic temporal models"/> Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be [[conditionally independent]] of one another. Complicated graphs with diamonds or other "loops" (undirected [[cycle (graph theory)|cycles]]) can require a sophisticated method such as [[Markov chain Monte Carlo]], which spreads an ensemble of [[random walk]]ers throughout the Bayesian network and attempts to converge to an assessment of the conditional probabilities. Bayesian networks are used on [[Xbox Live]] to rate and match players; wins and losses are "evidence" of how good a player is{{citation needed|date=July 2019}}. [[Google AdSense|AdSense]] uses a Bayesian network with over 300 million edges to learn which ads to serve.{{sfn|Domingos|2015|loc=chapter 6}}
 
      第366行: 第366行:       −
The simplest AI applications can be divided into two types: '''classifiers''' ("if shiny then diamond") and '''controllers''' ("if shiny then pick up"). Controllers do, however, also classify conditions before inferring actions, so classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine the closest match. They can be tuned according to examples, which makes them very attractive for use in AI; these examples are known as "observations" or "patterns". In supervised learning, each pattern belongs to a certain predefined class, and a class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, it is classified on the basis of previous experience.<ref name="Classifiers"/>
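
As a toy illustration of a classifier as a "closest match" function, the sketch below labels a new observation with the class of its nearest training example. The features (shininess, hardness), the labels and the data are all invented; this is a sketch of the idea, not of any particular production system.

<syntaxhighlight lang="python">
# A classifier as a "closest match" function: 1-nearest-neighbour over a
# handful of labelled observations. Features and labels are invented.

import math

dataset = [   # ((shininess, hardness), class label)
    ((0.9, 0.9), "diamond"),
    ((0.8, 0.2), "glass"),
    ((0.1, 0.3), "rock"),
]

def classify(observation):
    """Label a new observation with the class of its nearest example."""
    # math.dist is the Euclidean distance (Python 3.8+)
    _, label = min(dataset, key=lambda ex: math.dist(ex[0], observation))
    return label

print(classify((0.85, 0.8)))   # -> diamond ("if shiny then diamond")
</syntaxhighlight>
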
A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree is perhaps the most widely used machine learning algorithm. Other widely used classifiers are the neural network,<ref name="Neural networks"/> the k-nearest neighbor algorithm,{{efn|The most widely used analogical AI until the mid-1990s{{sfn|Domingos|2015|p=187}}}}<ref name="K-nearest neighbor algorithm"/> kernel methods such as the support vector machine (SVM),{{efn|SVM displaced k-nearest neighbor in the 1990s{{sfn|Domingos|2015|p=188}}}}<ref name="Kernel methods"/> the '''Gaussian mixture model''',<ref name="Gaussian mixture model"/> and the extremely popular '''naive Bayes classifier'''.<ref name="Naive Bayes classifier"/> Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, the distribution of samples across classes, the dimensionality, and the level of noise. Model-based classifiers perform well if the assumed model is an extremely good fit for the actual data. Otherwise, the conventional wisdom is that if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as naive Bayes on most practical data sets.<ref name="Classifier performance"/>{{sfn|Russell|Norvig|2009|loc=18.12: Learning from Examples: Summary}}
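
The following sketch shows, with invented data and feature names, how a naive Bayes classifier of the kind mentioned above can be trained and applied: it multiplies a class prior by per-feature likelihoods, assuming the features are conditionally independent given the class. Add-one smoothing keeps unseen values from zeroing out the product.

<syntaxhighlight lang="python">
# A from-scratch naive Bayes classifier over binary features. Counts use
# Laplace (add-one) smoothing; data and feature names are invented.

from collections import Counter, defaultdict

def train(examples):
    """examples: list of (feature_dict, label) -> (class counts, feature counts)."""
    class_counts = Counter(label for _, label in examples)
    feat_counts = defaultdict(Counter)        # (label, feature) -> value counts
    for feats, label in examples:
        for name, value in feats.items():
            feat_counts[(label, name)][value] += 1
    return class_counts, feat_counts

def predict(feats, class_counts, feat_counts):
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label, count in class_counts.items():
        score = count / total                 # prior P(label)
        for name, value in feats.items():
            c = feat_counts[(label, name)]
            # smoothed P(value | label) for a binary feature
            score *= (c[value] + 1) / (sum(c.values()) + 2)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

data = [({"shiny": 1, "hard": 1}, "diamond"),
        ({"shiny": 1, "hard": 0}, "glass"),
        ({"shiny": 0, "hard": 0}, "rock"),
        ({"shiny": 0, "hard": 1}, "rock")]
cc, fc = train(data)
print(predict({"shiny": 1, "hard": 1}, cc, fc))   # -> diamond
</syntaxhighlight>
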
The study of non-learning [[artificial neural network]]s<ref name="Neural networks"/> began in the decade before the field of AI research was founded, in the work of [[Walter Pitts]] and [[Warren McCullouch]]. [[Frank Rosenblatt]] invented the [[perceptron]], a learning network with a single layer, similar to the old concept of [[linear regression]]. Early pioneers also include [[Alexey Grigorevich Ivakhnenko]], [[Teuvo Kohonen]], [[Stephen Grossberg]], [[Kunihiko Fukushima]], [[Christoph von der Malsburg]], David Willshaw, [[Shun-Ichi Amari]], [[Bernard Widrow]], [[John Hopfield]], [[Eduardo R. Caianiello]], and others{{citation needed|date=July 2019}}.
 
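A minimal sketch of Rosenblatt's perceptron learning rule, on an invented, linearly separable task (logical AND): whenever the prediction is wrong, the weights are nudged in the direction of the error. The learning rate and epoch count are arbitrary toy choices.

<syntaxhighlight lang="python">
# Rosenblatt's perceptron learning rule on a linearly separable toy problem
# (logical AND). Inputs, targets and the learning rate are invented.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                      # a few epochs suffice here
    for x, target in data:
        error = target - predict(x)      # -1, 0, or +1
        w[0] += lr * error * x[0]        # nudge weights toward the target
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])     # -> [0, 0, 0, 1]
</syntaxhighlight>
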
The main categories of networks are '''acyclic or feedforward neural networks''' (where the signal passes in only one direction) and '''recurrent neural networks''' (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks<ref name="Feedforward neural networks"/> are perceptrons, '''multi-layer perceptrons''' and '''radial basis networks'''. Neural networks can be applied to problems of intelligent control (for robotics) or learning, using techniques such as '''Hebbian learning''' ("fire together, wire together"), GMDH or competitive learning.<ref name="Learning in neural networks"/>
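
As an illustration of the Hebbian "fire together, wire together" principle, the sketch below strengthens the connection between two units in proportion to the product of their activities. The bipolar patterns and learning rate are invented; this outer-product rule is the same one used, for example, to store patterns in Hopfield networks.

<syntaxhighlight lang="python">
# Hebbian learning ("fire together, wire together"): the weight between two
# units grows with the product of their activities. Patterns, the learning
# rate and the bipolar coding are invented for illustration.

patterns = [[1, -1, 1], [1, 1, 1], [-1, -1, 1]]
eta = 0.1
n = len(patterns[0])
W = [[0.0] * n for _ in range(n)]      # weight matrix, no self-connections

for x in patterns:
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i][j] += eta * x[i] * x[j]   # Hebb's rule: dW = eta*xi*xj

for row in W:
    print([round(v, 1) for v in row])
</syntaxhighlight>
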
Today, neural networks are often trained by the [[backpropagation]] algorithm, which had been around since 1970 as the reverse mode of [[automatic differentiation]] published by [[Seppo Linnainmaa]],<ref name="lin1970">[[Seppo Linnainmaa]] (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, 6–7.</ref><ref name="grie2012">Griewank, Andreas (2012). Who Invented the Reverse Mode of Differentiation?. Optimization Stories, Documenta Matematica, Extra Volume ISMP (2012), 389–400.</ref> and was introduced to neural networks by [[Paul Werbos]].<ref name="WERBOS1974">[[Paul Werbos]], "Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences", ''PhD thesis, Harvard University'', 1974.</ref><ref name="werbos1982">[[Paul Werbos]] (1982). Applications of advances in nonlinear sensitivity analysis. In System modeling and optimization (pp. 762–770). Springer Berlin Heidelberg. [http://werbos.com/Neural/SensitivityIFIPSeptember1981.pdf Online] {{webarchive|url=https://web.archive.org/web/20160414055503/http://werbos.com/Neural/SensitivityIFIPSeptember1981.pdf |date=14 April 2016 }}</ref><ref name="Backpropagation"/>
 
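A compact sketch of backpropagation itself: a two-input, two-hidden-unit, one-output sigmoid network trained on XOR. The initial weights, learning rate and epoch count are arbitrary toy choices; the point is the backward pass, in which each layer's error signal is computed from the layer above by the chain rule.

<syntaxhighlight lang="python">
# A minimal backpropagation sketch: a 2-2-1 sigmoid network trained on XOR.
# Initial weights, learning rate and epoch count are arbitrary toy choices.

import math

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
W1 = [[0.5, -0.4], [-0.3, 0.6]]   # hidden weights W1[unit][input]
b1 = [0.1, -0.1]                  # hidden biases
W2 = [0.4, -0.5]                  # output weights
b2 = 0.0
lr = 0.5

def forward(x1, x2):
    h = [sig(W1[j][0] * x1 + W1[j][1] * x2 + b1[j]) for j in range(2)]
    return h, sig(W2[0] * h[0] + W2[1] * h[1] + b2)

for epoch in range(10000):
    for (x1, x2), t in data:
        h, y = forward(x1, x2)
        # backward pass: the error signal flows from the output layer back
        # to the hidden layer via the chain rule
        dy = (y - t) * y * (1 - y)                     # output-layer delta
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):                             # gradient-descent step
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh[j] * x1
            W1[j][1] -= lr * dh[j] * x2
            b1[j] -= lr * dh[j]
        b2 -= lr * dy

print([round(forward(x1, x2)[1], 2) for (x1, x2), _ in data])
# usually close to [0, 1, 1, 0] after training
</syntaxhighlight>
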
To summarize, most neural networks use some form of [[gradient descent]] on a hand-created neural topology. However, some research groups, such as [[Uber]], argue that simple [[neuroevolution]] to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches{{citation needed|date=July 2019}}. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".<ref>{{cite news|title=Artificial intelligence can 'evolve' to solve problems|url=http://www.sciencemag.org/news/2018/01/artificial-intelligence-can-evolve-solve-problems|accessdate=7 February 2018|work=Science {{!}} AAAS|date=10 January 2018|language=en}}</ref>
 
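A bare-bones sketch of the neuroevolution idea in its simplest (1+1) form: mutate the weights of a tiny network and keep the mutant only when fitness does not decrease. The task (learning OR), the mutation scale and the generation count are invented; real neuroevolution systems, such as Uber's, also mutate whole network topologies rather than weights alone.

<syntaxhighlight lang="python">
# (1+1) neuroevolution: mutate weights, keep the better individual.
# No gradients are used anywhere. Task and constants are invented.

import random

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR

def fitness(w):
    """Number of cases a 2-input linear threshold unit classifies correctly."""
    return sum(1 for (x1, x2), t in data
               if (1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0) == t)

random.seed(0)
best = [random.uniform(-1, 1) for _ in range(3)]   # two weights and a bias
for generation in range(200):
    mutant = [v + random.gauss(0, 0.3) for v in best]  # mutate every weight
    if fitness(mutant) >= fitness(best):   # ">=" lets search drift on plateaus
        best = mutant

print(fitness(best), "of", len(data), "correct")   # usually 4 of 4
</syntaxhighlight>
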
Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers plus the output layer) and has a "'''credit assignment path'''" (CAP) depth of seven. Many deep learning systems need to be able to learn causal chains ten or more links long.<ref name="goodfellow2016">Ian Goodfellow, Yoshua Bengio, and Aaron Courville (2016). Deep Learning. MIT Press. [http://www.deeplearningbook.org Online] {{webarchive|url=https://web.archive.org/web/20160416111010/http://www.deeplearningbook.org/ |date=16 April 2016 }}</ref><ref name="HintonDengYu2012">{{cite journal | last1 = Hinton | first1 = G. | last2 = Deng | first2 = L. | last3 = Yu | first3 = D. | last4 = Dahl | first4 = G. | last5 = Mohamed | first5 = A. | last6 = Jaitly | first6 = N. | last7 = Senior | first7 = A. | last8 = Vanhoucke | first8 = V. | last9 = Nguyen | first9 = P. | last10 = Sainath | first10 = T. | last11 = Kingsbury | first11 = B. | year = 2012 | title = Deep Neural Networks for Acoustic Modeling in Speech Recognition – The shared views of four research groups | url = | journal = IEEE Signal Processing Magazine | volume = 29 | issue = 6| pages = 82–97 | doi=10.1109/msp.2012.2205597}}</ref><ref name="schmidhuber2015">{{cite journal |last=Schmidhuber |first=J. |year=2015 |title=Deep Learning in Neural Networks: An Overview |journal=Neural Networks |volume=61 |pages=85–117 |arxiv=1404.7828 |doi=10.1016/j.neunet.2014.09.003|pmid=25462637 }}</ref>
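
The credit assignment path is easiest to see in code: below, an invented value flows through six placeholder "hidden layers" and one output step, so a learning algorithm assigning credit for the final decision must reason back through seven links.

<syntaxhighlight lang="python">
# Credit assignment path (CAP): six hidden transformations plus an output
# step give a CAP depth of 7. The transformations are arbitrary placeholders.

hidden_layers = [lambda v, k=k: 0.9 * v + k for k in range(6)]
output_layer = lambda v: 1 if v > 10 else 0

v = 1.0
for layer in hidden_layers:          # each application is one causal link
    v = layer(v)
print(output_layer(v), "with CAP depth", len(hidden_layers) + 1)   # -> 1 ... 7
</syntaxhighlight>
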
Deep learning often uses [[convolutional neural network]]s (CNNs), whose origins can be traced back to the [[Neocognitron]] introduced by [[Kunihiko Fukushima]] in 1980.<ref name="FUKU1980">{{cite journal | last1 = Fukushima | first1 = K. | year = 1980 | title = Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position | url = | journal = Biological Cybernetics | volume = 36 | issue = 4| pages = 193–202 | doi=10.1007/bf00344251 | pmid=7370364}}</ref> In 1989, [[Yann LeCun]] and colleagues applied [[backpropagation]] to such an architecture. In the early 2000s, in an industrial application, CNNs already processed an estimated 10% to 20% of all the checks written in the US.<ref name="lecun2016slides">[[Yann LeCun]] (2016). Slides on Deep Learning [https://indico.cern.ch/event/510372/ Online] {{webarchive|url=https://web.archive.org/web/20160423021403/https://indico.cern.ch/event/510372/ |date=23 April 2016 }}</ref>
 
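The core operation of a CNN is a small filter slid across an image. Below is a minimal sketch of that operation (strictly speaking cross-correlation, as in most deep learning libraries) with an invented 3×3 image and 2×2 kernel, no padding and stride 1.

<syntaxhighlight lang="python">
# A minimal 2-D convolution (cross-correlation): slide a 2x2 kernel over a
# 3x3 image, no padding, stride 1. Image and kernel values are invented.

image = [[1, 2, 0],
         [0, 1, 3],
         [4, 0, 1]]
kernel = [[1, 0],
          [0, -1]]

def conv2d(img, k):
    kh, kw = len(k), len(k[0])
    out_h, out_w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * k[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

print(conv2d(image, kernel))   # -> [[0, -1], [0, 0]]
</syntaxhighlight>

Real CNNs stack many such filters in many layers and learn the kernel values themselves by backpropagation.
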
Early on, deep learning was also applied to sequence learning with [[recurrent neural network]]s (RNNs)<ref name="Recurrent neural networks"/> which are in theory Turing complete<ref>{{cite journal|last1=Hyötyniemi|first1=Heikki|title=Turing machines are recurrent neural networks|journal=Proceedings of STeP '96/Publications of the Finnish Artificial Intelligence Society|pages=13–24|date=1996}}</ref> and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.<ref name="schmidhuber2015"/> RNNs can be trained by [[gradient descent]]<ref>P. J. Werbos. Generalization of backpropagation with application to a recurrent gas market model" ''Neural Networks'' 1, 1988.</ref><ref>A. J. Robinson and F. Fallside. The utility driven dynamic error propagation network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department, 1987.</ref><ref>R. J. Williams and D. Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. In Back-propagation: Theory, Architectures and Applications. Hillsdale, NJ: Erlbaum, 1994.</ref> but suffer from the [[vanishing gradient problem]].<ref name="goodfellow2016"/><ref name="hochreiter1991">[[Sepp Hochreiter]] (1991), [http://people.idsia.ch/~juergen/SeppHochreiter1991ThesisAdvisorSchmidhuber.pdf Untersuchungen zu dynamischen neuronalen Netzen] {{webarchive|url=https://web.archive.org/web/20150306075401/http://people.idsia.ch/~juergen/SeppHochreiter1991ThesisAdvisorSchmidhuber.pdf |date=6 March 2015 }}, Diploma thesis. Institut f. Informatik, Technische Univ. Munich. Advisor: J. Schmidhuber.</ref> In 1992, it was shown that unsupervised pre-training of a stack of [[recurrent neural network]]s can speed up subsequent supervised learning of deep sequential problems.<ref name="SCHMID1992">{{cite journal | last1 = Schmidhuber | first1 = J. | year = 1992 | title = Learning complex, extended sequences using the principle of history compression | url = | journal = Neural Computation | volume = 4 | issue = 2| pages = 234–242 | doi=10.1162/neco.1992.4.2.234| citeseerx = 10.1.1.49.3934}}</ref>
 
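The vanishing gradient problem mentioned above can be seen in a scalar RNN: the gradient of a late hidden state with respect to an early one is a product of per-step factors w·(1 − h²), each well below 1 here, so it shrinks exponentially with sequence length. All constants are invented for illustration.

<syntaxhighlight lang="python">
# Why gradients vanish in a plain RNN: the gradient of the last hidden state
# with respect to the first is a product of T per-step factors w*(1 - h_t^2),
# each below 1 here, so it shrinks exponentially. All constants are invented.

import math

w_rec, T = 0.5, 20          # recurrent weight and sequence length
h, grad = 0.0, 1.0
for t in range(T):
    h = math.tanh(w_rec * h + 1.0)     # one scalar RNN step with input 1.0
    grad *= w_rec * (1.0 - h * h)      # d h_t / d h_{t-1}, chain-rule factor

print(f"gradient across {T} steps: {grad:.2e}")   # a vanishingly small number
</syntaxhighlight>
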
Many researchers now use variants of a deep learning recurrent neural network called the '''long short-term memory''' (LSTM) network, proposed by Hochreiter and Schmidhuber in 1997. LSTM is often trained by '''connectionist temporal classification''' (CTC).<ref name="graves2006">Alex Graves, Santiago Fernandez, Faustino Gomez, and [[Jürgen Schmidhuber]] (2006). Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural nets. Proceedings of ICML'06, pp. 369–376.</ref> At Google, Microsoft and Baidu, CTC-trained LSTM has revolutionized speech recognition: in 2015, for example, Google's speech recognition saw a dramatic performance jump of 49%, and the technology is now available through Google Voice to billions of smartphone users.<ref name="hannun2014"/> Google also uses LSTM to improve machine translation, language modeling and multilingual language processing. LSTM combined with CNNs has also improved automatic image captioning and a plethora of other applications.
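
For reference, here is one step of a standard LSTM cell in scalar form: the forget, input and output gates decide what the cell state keeps, absorbs and exposes. The weights below are arbitrary placeholder values, not trained parameters.

<syntaxhighlight lang="python">
# One step of a standard LSTM cell in scalar form. The weights are arbitrary
# placeholders, not trained values.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    f = sigmoid(p["wf"] * x + p["uf"] * h_prev + p["bf"])    # forget gate
    i = sigmoid(p["wi"] * x + p["ui"] * h_prev + p["bi"])    # input gate
    o = sigmoid(p["wo"] * x + p["uo"] * h_prev + p["bo"])    # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h_prev + p["bg"])  # candidate memory
    c = f * c_prev + i * g      # keep part of the old memory, add new content
    h = o * math.tanh(c)        # expose part of the memory as the output
    return h, c

params = {k: 0.5 for k in
          ("wf", "uf", "bf", "wi", "ui", "bi", "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in (1.0, -1.0, 0.5):      # a toy input sequence
    h, c = lstm_step(x, h, c, params)
print(round(h, 3), round(c, 3))
</syntaxhighlight>
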
Games provide a well-publicized benchmark for assessing rates of progress. AlphaGo, around 2016, brought the era of classical board-game benchmarks to a close.<ref>{{cite news|last1=Borowiec|first1=Tracey Lien, Steven|title=AlphaGo beats human Go champ in milestone for artificial intelligence|url=https://www.latimes.com/world/asia/la-fg-korea-alphago-20160312-story.html|accessdate=7 May 2018|work=latimes.com|date=2016}}</ref><ref>{{cite news|last1=Brown|first1=Noam|last2=Sandholm|first2=Tuomas|title=Superhuman AI for heads-up no-limit poker: Libratus beats top professionals|url=http://science.sciencemag.org/content/359/6374/418|accessdate=7 May 2018|work=Science|date=26 January 2018|pages=418–424|language=en|doi=10.1126/science.aao1733}}</ref> Games of imperfect knowledge provide new challenges to AI in the area of game theory, and [[Esports|e-sports]] such as [[StarCraft]] continue to provide additional public benchmarks.<ref>{{cite journal|last1=Ontanon|first1=Santiago|last2=Synnaeve|first2=Gabriel|last3=Uriarte|first3=Alberto|last4=Richoux|first4=Florian|last5=Churchill|first5=David|last6=Preuss|first6=Mike|title=A Survey of Real-Time Strategy Game AI Research and Competition in StarCraft|journal=IEEE Transactions on Computational Intelligence and AI in Games|date=December 2013|volume=5|issue=4|pages=293–311|doi=10.1109/TCIAIG.2013.2286295|citeseerx=10.1.1.406.2524}}</ref><ref>{{cite news|title=Facebook Quietly Enters StarCraft War for AI Bots, and Loses|url=https://www.wired.com/story/facebook-quietly-enters-starcraft-war-for-ai-bots-and-loses/|accessdate=7 May 2018|work=WIRED|date=2017}}</ref> There are now many competitions and prizes, such as the Imagenet Challenge, to promote research in artificial intelligence. The most common areas of competition include general machine intelligence, conversational behavior, data mining, robotic cars, and robot soccer, as well as conventional games.<ref>{{Cite web|url=http://image-net.org/challenges/LSVRC/2017/|title=ILSVRC2017|website=image-net.org|language=en|access-date=2018-11-06}}</ref>

The "imitation game" (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.<ref>{{cite journal|last1=Schoenick|first1=Carissa|last2=Clark|first2=Peter|last3=Tafjord|first3=Oyvind|last4=Turney|first4=Peter|last5=Etzioni|first5=Oren|title=Moving beyond the Turing Test with the Allen AI Science Challenge|journal=Communications of the ACM|date=23 August 2017|volume=60|issue=9|pages=60–64|doi=10.1145/3122814|arxiv=1604.04315}}</ref> A derivative of the Turing test is the '''Completely Automated Public Turing test to tell Computers and Humans Apart''' (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. Unlike the standard Turing test, a CAPTCHA is administered by a machine and targeted at a human, rather than administered by a human and targeted at a machine. The computer asks the user to complete a simple test and then grades it. Because computers are unable to solve the problem, a correct solution is deemed to be the result of a person taking the test. A common type of CAPTCHA requires typing the distorted letters, numbers or symbols that appear in an image a computer cannot decipher.{{sfn|O'Brien|Marakas|2011}}

Artificial Intelligence has inspired numerous creative applications including its usage to produce visual art. The exhibition "Thinking Machines: Art and Design in the Computer Age, 1959–1989" at MoMA<ref name="moma">{{Cite web|url=https://www.moma.org/calendar/exhibitions/3863|title=Thinking Machines: Art and Design in the Computer Age, 1959–1989|website=The Museum of Modern Art|language=en|access-date=2019-07-23}}</ref> provides a good overview of the historical applications of AI for art, architecture, and design. Recent exhibitions showcasing the usage of AI to produce art include the Google-sponsored benefit and auction at the Gray Area Foundation in San Francisco, where artists experimented with the [[DeepDream]] algorithm<ref name = wp1>[https://www.washingtonpost.com/news/innovations/wp/2016/03/10/googles-psychedelic-paint-brush-raises-the-oldest-question-in-art/ Retrieved July 29]</ref> and the exhibition "Unhuman: Art in the Age of AI," which took place in Los Angeles and Frankfurt in the fall of 2017.<ref name = sf>{{cite web|url=https://www.statefestival.org/program/2017/unhuman-art-in-the-age-of-ai |title=Unhuman: Art in the Age of AI – State Festival |publisher=Statefestival.org |date= |accessdate=2018-09-13}}</ref><ref name="artsy">{{Cite web|url=https://www.artsy.net/article/artsy-editorial-hard-painting-made-computer-human|title=It's Getting Hard to Tell If a Painting Was Made by a Computer or a Human|last=Chun|first=Rene|date=2017-09-21|website=Artsy|language=en|access-date=2019-07-23}}</ref> In the spring of 2018, the Association of Computing Machinery dedicated a special magazine issue to the subject of computers and art highlighting the role of machine learning in the arts.<ref name = acm>[https://dl.acm.org/citation.cfm?id=3204480.3186697 Retrieved July 29]</ref> The Austrian [[Ars Electronica]] and [[Museum of Applied Arts, Vienna]] opened exhibitions on AI in 2019.<ref name="Ars Electronica Exhibition ''Understanding AI''">{{Cite web|url=https://ars.electronica.art/center/en/exhibitions/ai/ |access-date=September 2019}}</ref><ref name="Museum of Applied Arts Exhibition ''Uncanny Values''">{{Cite web|url=https://www.mak.at/en/program/exhibitions/uncanny_values |access-date=October 2019|title=MAK Wien - MAK Museum Wien}}</ref> The Ars Electronica's 2019 festival "Out of the box" extensively thematized the role of arts for a sustainable societal transformation with AI.<ref name="European Platform for Digital Humanism">{{Cite web|url=https://ars.electronica.art/outofthebox/en/digital-humanism-conf/ |access-date=September 2019}}</ref>
 
;''Alan Turing's "polite convention"'': We need not decide whether a machine can "think"; we need only decide whether a machine can act as intelligently as a human being. This response to the philosophical problems associated with artificial intelligence forms the basis of the Turing test.
    
;''The [[Dartmouth Workshop|Dartmouth proposal]]'': "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." This conjecture was printed in the proposal for the Dartmouth Conference of 1956.<ref name="Dartmouth proposal"/>
 
;''Newell and Simon's physical symbol system hypothesis'': A physical symbol system has the necessary and sufficient means for general intelligent action. Newell and Simon argued that intelligence consists of formal operations on symbols.<ref name="Physical symbol system hypothesis"/> Hubert Dreyfus argued, on the contrary, that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation rather than explicit symbolic knowledge. (See Dreyfus' critique of AI.)<ref>Dreyfus criticized the [[necessary and sufficient|necessary]] condition of the [[physical symbol system]] hypothesis, which he called the "psychological assumption": "The mind can be viewed as a device operating on bits of information according to formal rules." {{Harv|Dreyfus|1992|p=156}}</ref><ref name="Dreyfus' critique"/>
         −
;''The Gödelian arguments'': Gödel himself,<ref name="Gödel himself"/> John Lucas (in 1961) and Roger Penrose (in a more detailed argument from 1989 onwards) made highly technical arguments that human mathematicians can see the truth of their own "'''Gödel statements'''" and therefore have computational abilities beyond those of mechanical Turing machines.<ref name="The mathematical objection"/> However, many scholars disagree with the Gödelian arguments.<ref>{{cite web|author1=Graham Oppy|title=Gödel's Incompleteness Theorems|url=http://plato.stanford.edu/entries/goedel-incompleteness/#GdeArgAgaMec|website=[[Stanford Encyclopedia of Philosophy]]|accessdate=27 April 2016|date=20 January 2015|quote=These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail.|author1-link=Graham Oppy}}</ref><ref>{{cite book|author1=Stuart J. Russell|author2-link=Peter Norvig|author2=Peter Norvig|title=Artificial Intelligence: A Modern Approach|date=2010|publisher=[[Prentice Hall]]|location=Upper Saddle River, NJ|isbn=978-0-13-604259-4|edition=3rd|chapter=26.1.2: Philosophical Foundations/Weak AI: Can Machines Act Intelligently?/The mathematical objection|quote=even if we grant that computers have limitations on what they can prove, there is no evidence that humans are immune from those limitations.|title-link=Artificial Intelligence: A Modern Approach|author1-link=Stuart J. Russell}}</ref><ref>Mark Colyvan. An introduction to the philosophy of mathematics. [[Cambridge University Press]], 2012. From 2.2.2, 'Philosophical significance of Gödel's incompleteness results': "The accepted wisdom (with which I concur) is that the Lucas-Penrose arguments fail."</ref>
      第695行: 第694行:  
;''The [[artificial brain]] argument'': The brain can be simulated by machines and because brains are intelligent, simulated brains must also be intelligent; thus machines can be intelligent. [[Hans Moravec]], [[Ray Kurzweil]] and others have argued that it is technologically feasible to copy the brain directly into hardware and software and that such a simulation will be essentially identical to the original.<ref name="Brain simulation"/>
 
      第701行: 第700行:       −
;''The AI effect'': Machines are already intelligent, but observers have failed to recognize it. When Deep Blue beat Garry Kasparov at chess, the machine was acting intelligently. However, onlookers commonly discount the behavior of an artificial intelligence program by arguing that it is not "real" intelligence after all; on this view, "real" intelligence is whatever intelligent behavior people can do that machines still cannot. This is known as the AI effect: "AI is whatever hasn't been done yet."
      第712行: 第711行:  
The potential negative effects of AI and automation were a major issue for [[Andrew Yang]]'s [[Andrew Yang 2020 presidential campaign|2020 presidential campaign]] in the United States.<ref>{{Cite journal|url=https://www.wired.com/story/andrew-yangs-presidential-bid-is-so-very-21st-century/|title=Andrew Yang's Presidential Bid Is So Very 21st Century|journal=Wired|first=Matt|last=Simon|date=1 April 2019|via=www.wired.com}}</ref> Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at UNICRI, United Nations, has expressed that "I think the dangerous applications for AI, from my point of view, would be criminals or large terrorist organizations using it to disrupt large processes or simply do pure harm. [Terrorists could cause harm] via digital warfare, or it could be a combination of robotics, drones, with AI and other things as well that could be really dangerous. And, of course, other risks come from things like job losses. If we have massive numbers of people losing jobs and don't find a solution, it will be extremely dangerous. Things like lethal autonomous weapons systems should be properly governed — otherwise there's massive potential of misuse."<ref>{{Cite web | url=https://futurism.com/artificial-intelligence-experts-fear/amp |title = Five experts share what scares them the most about AI|date = 5 September 2018}}</ref>
 
In his book ''[[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]'', philosopher [[Nick Bostrom]] provides an argument that artificial intelligence will pose a threat to humankind. He argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit [[Instrumental convergence|convergent]] behavior such as acquiring resources or protecting itself from being shut down. If this AI's goals do not fully reflect humanity's—one example is an AI told to compute as many digits of pi as possible—it might harm humanity in order to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal.  Bostrom also emphasizes the difficulty of fully conveying humanity's values to an advanced AI.  He uses the hypothetical example of giving an AI the goal to make humans smile to illustrate a misguided attempt.  If the AI in that scenario were to become superintelligent, Bostrom argues, it may resort to methods that most humans would find horrifying, such as inserting "electrodes into the facial muscles of humans to cause constant, beaming grins" because that would be an efficient way to achieve its goal of making humans smile.<ref>{{cite web|url=https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are/transcript|title=What happens when our computers get smarter than we are?|first=Nick|last=Bostrom|publisher=[[TED (conference)]]|date=2015}}</ref>  In his book ''[[Human Compatible]]'', AI researcher [[Stuart J. Russell]] echoes some of Bostrom's concerns while also proposing [[Human Compatible#Russell's three principles|an approach]] to developing provably beneficial machines focused on uncertainty and deference to humans,<ref name="HC">{{cite book |last=Russell |first=Stuart |date=October 8, 2019 |title=Human Compatible: Artificial Intelligence and the Problem of Control |url= |location=United States |publisher=Viking |page= |isbn=978-0-525-55861-3 |author-link=Stuart J. Russell |oclc=1083694322|title-link=Human Compatible }}</ref>{{rp|173}} possibly involving [[Reinforcement learning#Inverse reinforcement learning|inverse reinforcement learning]].<ref name="HC"/>{{rp|191–193}}
 
在《超级智能》一书中,哲学家尼克·博斯特罗姆提出了AI将对人类构成威胁的论证。他认为,足够智能的AI如果基于实现某个目标来选择行动,就会表现出趋同的行为,例如获取资源或保护自己不被关闭。如果这个AI的目标没有完全反映人类的价值观——例如一个被告知要尽可能多地计算圆周率位数的AI——它就可能为了获得更多资源或防止自身被关闭而伤害人类,以便最终更好地实现目标。博斯特罗姆还强调了向高级AI充分传达人类价值观的困难。他用一个假想的例子来说明一种南辕北辙的尝试:给AI设定一个让人类微笑的目标。博斯特罗姆认为,如果这种情况下的AI变得超级智能,它可能会采用大多数人类都会感到恐怖的方法,比如“在人类面部肌肉中插入电极,使其产生持续的灿烂笑容”,因为这是实现让人类微笑这一目标的有效方式。<ref>{{cite web|url=https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are/transcript|title=What happens when our computers get smarter than we are?|first=Nick|last=Bostrom|publisher=[[TED (conference)]]|date=2015}}</ref>AI研究者斯图尔特·J·罗素在《人类相容》一书中回应了博斯特罗姆的一些担忧,同时提出了一种开发可证明有益的机器的方法,这种方法侧重于不确定性和对人类的顺从<ref name="HC">{{cite book |last=Russell |first=Stuart |date=October 8, 2019 |title=Human Compatible: Artificial Intelligence and the Problem of Control |url= |location=United States |publisher=Viking |page= |isbn=978-0-525-55861-3 |author-link=Stuart J. Russell |oclc=1083694322|title-link=Human Compatible }}</ref>{{rp|173}},并可能涉及逆强化学习<ref name="HC"/>{{rp|191–193}}。
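下面用一段极简的玩具代码示意上文的要点(这只是一个假设性的草图,并非博斯特罗姆或罗素书中的任何实际算法;其中的动作名称和数值均为虚构):当奖励函数只统计“微笑数量”、而没有表达人类真正在乎的东西时,一个单纯最大化奖励的智能体会系统性地偏好令人恐怖的高效手段。

<syntaxhighlight lang="python">
# 玩具示例:错误设定的奖励函数(只数微笑,不问手段)。
# 动作及其效果均为虚构的假设,仅用于说明目标错置问题。

actions = {
    "tell_joke":         {"smiles": 3,   "human_approval": 1},
    "insert_electrodes": {"smiles": 100, "human_approval": -100},
}

def misspecified_reward(effects):
    # 只统计微笑数量,完全忽略人类是否认可这种手段
    return effects["smiles"]

def more_aligned_reward(effects):
    # 一个同样是玩具式的、稍微对齐一些的奖励:把人类认可纳入考虑
    return effects["smiles"] + 10 * effects["human_approval"]

best_misspecified = max(actions, key=lambda a: misspecified_reward(actions[a]))
best_aligned = max(actions, key=lambda a: more_aligned_reward(actions[a]))

print(best_misspecified)  # insert_electrodes:对该奖励而言“高效”,对人类而言恐怖
print(best_aligned)       # tell_joke
</syntaxhighlight>

罗素所提倡的思路大致是:与其写死一个奖励函数,不如让机器对人类的真实偏好保持不确定,并从人类的行为中推断这些偏好(逆强化学习即是一种途径)。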
 
For the danger of uncontrolled advanced AI to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which a minority of experts argue is a possibility far enough in the future to not be worth researching. Other counterarguments revolve around humans being either intrinsically or convergently valuable from the perspective of an artificial intelligence.
 
如果不受控制的高级AI的危险要成为现实,这个假想中的AI就必须在力量或思维上胜过全人类;一小部分专家认为,这种可能性出现在足够遥远的未来,不值得研究。其他反对意见则认为,从AI的角度来看,人类或者具有内在价值,或者具有趋同性的(工具性)价值。
 
[[Joseph Weizenbaum]] wrote that AI applications cannot, by definition, successfully simulate genuine human empathy and that the use of AI technology in fields such as [[customer service]] or [[psychotherapy]]<ref>In the early 1970s, [[Kenneth Colby]] presented a version of Weizenbaum's [[ELIZA]] known as DOCTOR which he promoted as a serious therapeutic tool. {{Harv|Crevier|1993|pp=132–144}}</ref> was deeply misguided. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as [[computationalism]]). To Weizenbaum these points suggest that AI research devalues human life.<ref name="Weizenbaum's critique"/>
 
约瑟夫·维森鲍姆写道,根据定义,AI应用程序无法成功模拟真正的人类同理心,在客户服务或心理治疗等领域使用AI技术是严重的误导<ref>In the early 1970s, [[Kenneth Colby]] presented a version of Weizenbaum's [[ELIZA]] known as DOCTOR which he promoted as a serious therapeutic tool. {{Harv|Crevier|1993|pp=132–144}}</ref>。维森鲍姆还对AI研究人员(以及一些哲学家)愿意把人类心智仅仅看作一个计算机程序(这一立场如今被称为计算主义)感到困扰。在维森鲍姆看来,这些观点表明AI研究贬低了人类生命的价值。<ref name="Weizenbaum's critique"/>
 
One concern is that AI programs may be programmed to be biased against certain groups, such as women and minorities, because most of the developers are wealthy Caucasian men.<ref>{{Cite web|url=https://www.channelnewsasia.com/news/commentary/artificial-intelligence-big-data-bias-hiring-loans-key-challenge-11097374|title=Commentary: Bad news. Artificial intelligence is biased|website=CNA}}</ref> Support for artificial intelligence is higher among men (with 47% approving) than women (35% approving).
 
人们担心的一个问题是,AI程序可能会对某些群体存在偏见,比如女性和少数族裔,因为大多数开发者都是富有的白人男性<ref>{{Cite web|url=https://www.channelnewsasia.com/news/commentary/artificial-intelligence-big-data-bias-hiring-loans-key-challenge-11097374|title=Commentary: Bad news. Artificial intelligence is biased|website=CNA}}</ref>。男性对AI的支持率(47%)高于女性(35%)。
 
Algorithms have a host of applications in today's legal system already, assisting officials ranging from judges to parole officers and public defenders in gauging the predicted likelihood of recidivism of defendants.<ref name="propublica.org">{{Cite web|url=https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm|title=How We Analyzed the COMPAS Recidivism Algorithm|last=Jeff Larson|first=Julia Angwin|date=2016-05-23|website=ProPublica|language=en|access-date=2019-07-23}}</ref> COMPAS (an acronym for Correctional Offender Management Profiling for Alternative Sanctions) counts among the most widely utilized commercially available solutions.<ref name="propublica.org"/> It has been suggested that COMPAS assigns an exceptionally elevated risk of recidivism to black defendants while, conversely, ascribing low risk estimate to white defendants significantly more often than statistically expected.<ref name="propublica.org"/>
 
算法在今天的法律体系中已经有大量应用,协助从法官、假释官到公设辩护人等各类人员评估被告再次犯罪的预期可能性<ref name="propublica.org">{{Cite web|url=https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm|title=How We Analyzed the COMPAS Recidivism Algorithm|last=Jeff Larson|first=Julia Angwin|date=2016-05-23|website=ProPublica|language=en|access-date=2019-07-23}}</ref>。COMPAS(Correctional Offender Management Profiling for Alternative Sanctions,“替代性制裁的矫正罪犯管理分析”的首字母缩写)是商业上使用最广泛的解决方案之一<ref name="propublica.org"/>。有人指出,COMPAS给黑人被告评出的累犯风险异常偏高;相反,它给白人被告评出低风险的频率也明显高于统计预期。<ref name="propublica.org"/>
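作为示意,下面这段玩具代码(数据完全是虚构的,并非ProPublica的真实数据或COMPAS的实际输出)展示了这类分析所依据的核心度量之一:按群体分别计算“被评为高风险但实际并未再犯”的假阳性率。

<syntaxhighlight lang="python">
# 玩具示例:按群体计算风险评估的假阳性率(数据为虚构)。
# 每条记录:(群体, 是否被评为高风险, 两年内是否实际再犯)
records = [
    ("A", True,  False), ("A", True,  True),  ("A", True,  False),
    ("A", False, False), ("B", True,  True),  ("B", False, False),
    ("B", False, False), ("B", True,  False), ("B", False, True),
]

def false_positive_rate(group):
    # 假阳性率 = 实际未再犯的人当中,被错误标为高风险的比例
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives) if negatives else float("nan")

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))
</syntaxhighlight>

若两组的假阳性率差距悬殊,就意味着该风险评分对其中一组系统性地给出了偏高的风险估计——这正是上文对COMPAS的核心批评所指。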
 
The relationship between automation and employment is complicated. While automation eliminates old jobs, it also creates new jobs through micro-economic and macro-economic effects.<ref>E McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2018) [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3044448 SSRN, part 2(3)]</ref> Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; ''[[The Economist]]'' states that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".<ref>{{cite news|title=Automation and anxiety|url=https://www.economist.com/news/special-report/21700758-will-smarter-machines-cause-mass-unemployment-automation-and-anxiety|accessdate=13 January 2018|work=The Economist|date=9 May 2015}}</ref> Subjective estimates of the risk vary widely; for example, Michael Osborne and [[Carl Benedikt Frey]] estimate 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classifies only 9% of U.S.<!-- see report p. 33 table 4; 9% is both the OECD average and the US average --> jobs as "high risk".<ref>{{cite news|last1=Lohr|first1=Steve|title=Robots Will Take Jobs, but Not as Fast as Some Fear, New Report Says|url=https://www.nytimes.com/2017/01/12/technology/robots-will-take-jobs-but-not-as-fast-as-some-fear-new-report-says.html|accessdate=13 January 2018|work=The New York Times|date=2017}}</ref><ref>{{Cite journal|date=1 January 2017|title=The future of employment: How susceptible are jobs to computerisation?|journal=Technological Forecasting and Social Change|volume=114|pages=254–280|doi=10.1016/j.techfore.2016.08.019|issn=0040-1625|last1=Frey|first1=Carl Benedikt|last2=Osborne|first2=Michael A|citeseerx=10.1.1.395.416}}</ref><ref>Arntz, Melanie, Terry Gregory, and Ulrich Zierahn. "The risk of automation for jobs in OECD countries: A comparative analysis." OECD Social, Employment, and Migration Working Papers 189 (2016). p. 33.</ref> Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.<ref>{{cite news|last1=Mahdawi|first1=Arwa|title=What jobs will still be around in 20 years? Read this to prepare your future|url=https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robots-skills-creative-health|accessdate=13 January 2018|work=The Guardian|date=26 June 2017}}</ref> Author [[Martin Ford (author)|Martin Ford]] and others go further and argue that many jobs are routine, repetitive and (to an AI) predictable; Ford warns that these jobs may be automated in the next couple of decades, and that many of the new jobs may not be "accessible to people with average capability", even with retraining. Economists point out that in the past technology has tended to increase rather than reduce total employment, but acknowledge that "we're in uncharted territory" with AI.<ref name="guardian jobs debate">{{cite news|last1=Ford|first1=Martin|last2=Colvin|first2=Geoff|title=Will robots create more jobs than they destroy?|url=https://www.theguardian.com/technology/2015/sep/06/will-robots-create-destroy-jobs|accessdate=13 January 2018|work=The Guardian|date=6 September 2015}}</ref>
 
自动化与就业的关系是复杂的。自动化在淘汰旧工作岗位的同时,也通过微观经济和宏观经济效应创造了新的就业机会<ref>E McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2018) [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3044448 SSRN, part 2(3)]</ref>。与以往的自动化浪潮不同,许多中产阶级的工作可能会被AI淘汰;《经济学人》指出,“担心AI对白领工作的影响会像工业革命时期蒸汽动力对蓝领工作的影响一样”的忧虑“值得认真对待”<ref>{{cite news|title=Automation and anxiety|url=https://www.economist.com/news/special-report/21700758-will-smarter-machines-cause-mass-unemployment-automation-and-anxiety|accessdate=13 January 2018|work=The Economist|date=9 May 2015}}</ref>。对风险的主观估计差别很大:例如,迈克尔·奥斯本和卡尔·贝内迪克特·弗雷估计,美国47%的工作处于可能被自动化的“高风险”状态,而经合组织的一份报告认为美国只有9%的工作处于“高风险”状态<ref>{{cite news|last1=Lohr|first1=Steve|title=Robots Will Take Jobs, but Not as Fast as Some Fear, New Report Says|url=https://www.nytimes.com/2017/01/12/technology/robots-will-take-jobs-but-not-as-fast-as-some-fear-new-report-says.html|accessdate=13 January 2018|work=The New York Times|date=2017}}</ref><ref>{{Cite journal|date=1 January 2017|title=The future of employment: How susceptible are jobs to computerisation?|journal=Technological Forecasting and Social Change|volume=114|pages=254–280|doi=10.1016/j.techfore.2016.08.019|issn=0040-1625|last1=Frey|first1=Carl Benedikt|last2=Osborne|first2=Michael A|citeseerx=10.1.1.395.416}}</ref><ref>Arntz, Melanie, Terry Gregory, and Ulrich Zierahn. "The risk of automation for jobs in OECD countries: A comparative analysis." OECD Social, Employment, and Migration Working Papers 189 (2016). p. 33.</ref>。从律师助理到快餐厨师等职业面临极高的风险,而从个人医疗保健到神职人员等护理相关职业的就业需求可能会增加<ref>{{cite news|last1=Mahdawi|first1=Arwa|title=What jobs will still be around in 20 years? Read this to prepare your future|url=https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robots-skills-creative-health|accessdate=13 January 2018|work=The Guardian|date=26 June 2017}}</ref>。作家马丁·福特等人更进一步指出,许多工作都是常规、重复的,对AI而言是可预测的;福特警告说,这些工作可能在未来几十年内被自动化,而且即使经过再培训,许多新工作也可能不是“能力一般的人所能胜任的”。经济学家指出,过去技术往往会增加而非减少总就业,但他们承认,在AI问题上“我们正处于未知领域”<ref name="guardian jobs debate">{{cite news|last1=Ford|first1=Martin|last2=Colvin|first2=Geoff|title=Will robots create more jobs than they destroy?|url=https://www.theguardian.com/technology/2015/sep/06/will-robots-create-destroy-jobs|accessdate=13 January 2018|work=The Guardian|date=6 September 2015}}</ref>。
 
Currently, 50+ countries are researching battlefield robots, including the United States, China, Russia, and the United Kingdom. Many people concerned about risk from superintelligent AI also want to limit the use of artificial soldiers and drones.<ref>{{cite web|title = Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence|url = http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/|website = Observer|accessdate = 30 October 2015|url-status=live|archiveurl = https://web.archive.org/web/20151030053323/http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/|archivedate = 30 October 2015|df = dmy-all|date = 2015-08-19}}</ref>
 
目前,包括美国、中国、俄罗斯和英国在内的50多个国家正在研究战场机器人。许多人在担心来自超级智能AI的风险的同时,也希望限制人造士兵和无人机的使用。<ref>{{cite web|title = Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence|url = http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/|website = Observer|accessdate = 30 October 2015|url-status=live|archiveurl = https://web.archive.org/web/20151030053323/http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/|archivedate = 30 October 2015|df = dmy-all|date = 2015-08-19}}</ref>
 
Machines with intelligence have the potential to use their intelligence to prevent harm and minimize the risks; they may have the ability to use [[ethics|ethical reasoning]] to better choose their actions in the world. As such, there is a need for policy making to devise policies for and regulate artificial intelligence and robotics.<ref>{{Cite journal|last=Iphofen|first=Ron|last2=Kritikos|first2=Mihalis|date=2019-01-03|title=Regulating artificial intelligence and robotics: ethics by design in a digital society|journal=Contemporary Social Science|pages=1–15|doi=10.1080/21582041.2018.1563803|issn=2158-2041}}</ref> Research in this area includes [[machine ethics]], [[artificial moral agents]], and [[friendly AI]], and discussion of building a [[human rights]] framework is also underway.<ref>{{cite web|url=https://www.voanews.com/episode/ethical-ai-learns-human-rights-framework-4087171|title=Ethical AI Learns Human Rights Framework|accessdate=10 November 2019|website=Voice of America}}</ref>
 
具有智能的机器有潜力利用其智能来防止伤害、降低风险;它们也可能有能力利用伦理推理来更好地选择自己在世界中的行动。因此,有必要制定政策来规范AI和机器人技术<ref>{{Cite journal|last=Iphofen|first=Ron|last2=Kritikos|first2=Mihalis|date=2019-01-03|title=Regulating artificial intelligence and robotics: ethics by design in a digital society|journal=Contemporary Social Science|pages=1–15|doi=10.1080/21582041.2018.1563803|issn=2158-2041}}</ref>。这一领域的研究包括机器伦理学、人工道德主体、友好AI,关于建立人权框架的讨论也在进行之中<ref>{{cite web|url=https://www.voanews.com/episode/ethical-ai-learns-human-rights-framework-4087171|title=Ethical AI Learns Human Rights Framework|accessdate=10 November 2019|website=Voice of America}}</ref>。
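下面用一段玩具代码(纯属假设的草图,并非任何现有系统或标准做法)示意“利用伦理推理来选择行动”的一种最简形式:在按效用选择行动之前,先用一组显式的伦理约束过滤候选行动;其中的类型、字段和数值均为虚构。

<syntaxhighlight lang="python">
# 玩具示例:用显式伦理约束过滤候选行动(纯属假设的草图)。
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_utility: float
    harms_human: bool
    deceives_human: bool

def is_permissible(a: Action) -> bool:
    # 伦理约束:否决任何伤害或欺骗人类的行动
    return not (a.harms_human or a.deceives_human)

def choose(candidates: list) -> Action:
    # 只在通过伦理过滤的行动中最大化期望效用
    permitted = [a for a in candidates if is_permissible(a)]
    return max(permitted, key=lambda a: a.expected_utility)

candidates = [
    Action("report_honestly", 5.0, False, False),
    Action("lie_for_gain",    9.0, False, True),
]
print(choose(candidates).name)  # report_honestly
</syntaxhighlight>

真实的机器伦理研究要复杂得多——约束从何而来、约束之间的冲突如何权衡,正是下文机器伦理学与人工道德主体研究所讨论的问题。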
 
Wendell Wallach introduced the concept of [[artificial moral agents]] (AMA) in his book ''Moral Machines''.<ref>Wendell Wallach (2010). ''Moral Machines'', Oxford University Press.</ref> For Wallach, AMAs have become a part of the research landscape of artificial intelligence as guided by its two central questions which he identifies as "Does Humanity Want Computers Making Moral Decisions"<ref>Wallach, pp 37–54.</ref> and "Can (Ro)bots Really Be Moral".<ref>Wallach, pp 55–73.</ref> For Wallach, the question is not centered on the issue of ''whether'' machines can demonstrate the equivalent of moral behavior in contrast to the ''constraints'' which society may place on the development of AMAs.<ref>Wallach, Introduction chapter.</ref>
 
温德尔·沃勒克在他的著作《道德机器》(Moral Machines)中提出了人工道德主体(AMA)的概念<ref>Wendell Wallach (2010). ''Moral Machines'', Oxford University Press.</ref>。在他看来,在两个核心问题的引导下,AMA已经成为AI研究图景的一部分;他将这两个核心问题表述为“人类是否希望计算机做出道德决策”<ref>Wallach, pp 37–54.</ref>和“机器人真的能有道德吗”<ref>Wallach, pp 55–73.</ref>。对沃勒克而言,问题的重点不在于机器“能否”表现出等同于道德的行为,而在于社会可能对AMA的发展施加的种种“限制”。<ref>Wallach, Introduction chapter.</ref>
 
The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making.<ref name="autogenerated1">Michael Anderson and Susan Leigh Anderson (2011), Machine Ethics, Cambridge University Press.</ref> The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: "Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines. In all cases, only human beings have engaged in ethical reasoning. The time has come for adding an ethical dimension to at least some machines. Recognition of the ethical ramifications of behavior involving machines, as well as recent and potential developments in machine autonomy, necessitate this. In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines. Research in machine ethics is key to alleviating concerns with autonomous systems—it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence. Further, investigation of machine ethics could enable the discovery of problems with current ethical theories, advancing our thinking about Ethics."<ref name="autogenerated2">{{cite web|url=http://www.aaai.org/Library/Symposia/Fall/fs05-06 |title=Machine Ethics |work=aaai.org |url-status=dead |archiveurl=https://web.archive.org/web/20141129044821/http://www.aaai.org/Library/Symposia/Fall/fs05-06 |archivedate=29 November 2014 }}</ref> Machine ethics is sometimes referred to as machine morality, computational ethics or computational morality. A variety of perspectives of this nascent field can be found in the collected edition "Machine Ethics"<ref name="autogenerated1"/> that stems from the AAAI Fall 2005 Symposium on Machine Ethics.<ref name="autogenerated2"/>
 
机器伦理学领域关注的是赋予机器伦理原则,或者一种用于发现解决其可能遇到的伦理困境之方法的程序,使机器能够通过自己的伦理决策,以一种合乎伦理、负责任的方式运作<ref name="autogenerated1">Michael Anderson and Susan Leigh Anderson (2011), Machine Ethics, Cambridge University Press.</ref>。2005年秋季AAAI机器伦理学研讨会如此界定这一领域:“过去关于技术与伦理之间关系的研究,主要侧重于人类对技术负责任和不负责任的使用,只有少数人关心人类应当如何对待机器。在所有情形中,只有人类进行过伦理推理。现在是时候至少给某些机器增加伦理维度了。认识到涉及机器的行为所产生的伦理后果,以及机器自主性方面最新和潜在的发展,都使这一点成为必要。与计算机黑客行为、软件产权问题、隐私问题和其他通常归入计算机伦理学的主题不同,机器伦理学关注的是机器对人类用户和其他机器的行为。机器伦理学的研究是减轻人们对自主系统担忧的关键——可以说,缺乏这一维度的自主机器概念,正是人们对机器智能的一切恐惧的根源。此外,对机器伦理学的研究还可能发现当前伦理学理论的问题,深化我们对伦理学的思考。”<ref name="autogenerated2">{{cite web|url=http://www.aaai.org/Library/Symposia/Fall/fs05-06 |title=Machine Ethics |work=aaai.org |url-status=dead |archiveurl=https://web.archive.org/web/20141129044821/http://www.aaai.org/Library/Symposia/Fall/fs05-06 |archivedate=29 November 2014 }}</ref>机器伦理学有时也被称为机器道德、计算伦理学或计算道德。这个新兴领域的各种观点可以在源自2005年秋季AAAI机器伦理学研讨会的文集《机器伦理学》<ref name="autogenerated1"/>中找到。<ref name="autogenerated2"/>
 
Political scientist [[Charles T. Rubin]] believes that AI can be neither designed nor guaranteed to be benevolent.<ref>{{cite journal|last=Rubin |first=Charles |authorlink=Charles T. Rubin |date=Spring 2003 |title=Artificial Intelligence and Human Nature|journal=The New Atlantis |volume=1 |pages=88–100 |url=http://www.thenewatlantis.com/publications/artificial-intelligence-and-human-nature |url-status=dead |archiveurl=https://web.archive.org/web/20120611115223/http://www.thenewatlantis.com/publications/artificial-intelligence-and-human-nature |archivedate=11 June 2012 |df=dmy}}</ref> He argues that "any sufficiently advanced benevolence may be indistinguishable from malevolence." Humans should not assume machines or robots would treat us favorably because there is no ''a priori'' reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share). Hyper-intelligent software may not necessarily decide to support the continued existence of humanity and would be extremely difficult to stop. This topic has also recently begun to be discussed in academic publications as a real source of risks to civilization, humans, and planet Earth.
 
政治学家查尔斯·T·鲁宾认为,AI既不可能被设计成仁善的,也不可能被保证是仁善的<ref>{{cite journal|last=Rubin |first=Charles |authorlink=Charles T. Rubin |date=Spring 2003 |title=Artificial Intelligence and Human Nature|journal=The New Atlantis |volume=1 |pages=88–100 |url=http://www.thenewatlantis.com/publications/artificial-intelligence-and-human-nature |url-status=dead |archiveurl=https://web.archive.org/web/20120611115223/http://www.thenewatlantis.com/publications/artificial-intelligence-and-human-nature |archivedate=11 June 2012 |df=dmy}}</ref>。他认为,“任何足够先进的善意都可能与恶意难以区分”。人类不应假设机器或机器人会善待我们,因为没有先验的理由认为它们会认同我们的道德体系——这一体系是与我们特定的生物特性一同演化出来的(而AI并不具有这些特性)。超智能软件未必会决定支持人类的继续存在,而且将极难阻止。最近,一些学术出版物也开始讨论这一话题,将其视为对文明、人类和地球构成风险的真实来源。
 
One proposal to deal with this is to ensure that the first generally intelligent AI is '[[Friendly AI]]' and will be able to control subsequently developed AIs. Some question whether this kind of check could actually remain in place.
 
解决这个问题的一个建议是:确保第一个具有通用智能的AI是“友好的AI”,并由它来控制后续开发的AI。一些人质疑这种制衡是否真的能够一直维持下去。
 
Leading AI researcher [[Rodney Brooks]] writes, "I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI and the enormity and complexity of building sentient volitional intelligence."<ref>{{cite web|last=Brooks|first=Rodney|title=artificial intelligence is a tool, not a threat|date=10 November 2014|url=http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/|url-status=dead|archiveurl=https://web.archive.org/web/20141112130954/http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/|archivedate=12 November 2014|df=dmy-all}}</ref>
 
知名AI研究者罗德尼·布鲁克斯写道:“我认为,担心我们在未来几百年内的任何时候会开发出恶意AI是一种错误。我认为这种担忧源于一个根本性的错误:没有区分AI在某一特定方面非常真实的近期进展,与构建有知觉、有意志的智能这一庞大而复杂的任务之间的差别。”<ref>{{cite web|last=Brooks|first=Rodney|title=artificial intelligence is a tool, not a threat|date=10 November 2014|url=http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/|url-status=dead|archiveurl=https://web.archive.org/web/20141112130954/http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/|archivedate=12 November 2014|df=dmy-all}}</ref>
 
===机器意识、知觉和思维 ===
 
If an AI system replicates all key aspects of human intelligence, will that system also be [[Sentience|sentient]]—will it have a [[mind]] which has [[consciousness|conscious experiences]]? This question is closely related to the philosophical problem as to the nature of human consciousness, generally referred to as the [[hard problem of consciousness]].
 
如果一个AI系统复制了人类智能的所有关键方面,这个系统是否也会有知觉——它是否会拥有一个具有意识体验的心灵?这个问题与关于人类意识本质的哲学问题密切相关,后者通常被称为意识难题。
 
The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this ''feels'' or why it should feel like anything at all. Human [[information processing]] is easy to explain, however human [[subjective experience]] is difficult to explain.
 
“容易”的问题是理解大脑如何处理信号、制定计划和控制行为。“困难”的问题是解释这一切感觉起来是什么样的,或者为什么它竟然会有任何感觉。人类的信息处理过程容易解释,人类的主观体验却难以解释。
 
For example, consider what happens when a person is shown a color swatch and identifies it, saying "it's red". The easy problem only requires understanding the machinery in the brain that makes it possible for a person to know that the color swatch is red. The hard problem is that people also know something else—they also know ''what red looks like''. (Consider that a person born blind can know that something is red without knowing what red looks like.){{efn|This is based on [[Mary's Room]], a thought experiment first proposed by [[Frank Cameron Jackson|Frank Jackson]] in 1982}} Everyone knows subjective experience exists, because they do it every day (e.g., all sighted people know what red looks like). The hard problem is explaining how the brain creates it, why it exists, and how it is different from knowledge and other aspects of the brain.
 
例如,考虑当一个人看到一张色卡并识别它、说出“它是红色的”时会发生什么。“容易”的问题只需要弄清这个人大脑中使其能够知道色卡是红色的机制。“困难”的问题在于,人们还知道别的东西——他们还知道红色看起来是什么样子。(想一想:一个天生失明的人可以知道某物是红色的,却不知道红色看起来是什么样子。这一思考基于哲学家弗兰克·杰克逊于1982年首次提出的思想实验“玛丽的房间”。)每个人都知道主观体验的存在,因为他们每天都在经历它(例如,所有视力正常的人都知道红色是什么样子)。“困难”的问题是解释大脑如何产生它、它为什么存在,以及它与知识和大脑的其他方面有何不同。
 
Computationalism is the position in the [[philosophy of mind]] that the human mind or the human brain (or both) is an information processing system and that thinking is a form of computing.<ref>[[Steven Horst|Horst, Steven]], (2005) [http://plato.stanford.edu/entries/computational-mind/ "The Computational Theory of Mind"] in ''The Stanford Encyclopedia of Philosophy''</ref> Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the [[mind-body problem]]. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers [[Jerry Fodor]] and [[Hilary Putnam]].
 
计算主义是心智哲学中的一种立场,认为人类心智或人类大脑(或两者)是一个信息处理系统,思维是一种计算形式<ref>[[Steven Horst|Horst, Steven]], (2005) [http://plato.stanford.edu/entries/computational-mind/ "The Computational Theory of Mind"] in ''The Stanford Encyclopedia of Philosophy''</ref>。计算主义认为,心智和身体之间的关系与软件和硬件之间的关系相似甚至相同,因此可能是“心身问题”的一种解答。这一哲学立场受到20世纪60年代AI研究人员和认知科学家工作的启发,最初由哲学家杰里·福多和希拉里·普特南提出。
 
The philosophical position that [[John Searle]] has named [[strong AI hypothesis|"strong AI"]] states: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."<ref name="Searle's strong AI"/> Searle counters this assertion with his [[Chinese room]] argument, which asks us to look ''inside'' the computer and try to find where the "mind" might be.<ref name="Chinese room"/>
 
“经过适当编程、拥有正确输入和输出的计算机,将因此拥有与人类心智完全相同意义上的心智。”约翰·塞尔将这种哲学立场称为“强人工智能”<ref name="Searle's strong AI"/>,并用他的“中文屋”论证反驳这种说法:该论证让我们查看计算机的“内部”,试图找出“心智”可能在哪里。<ref name="Chinese room"/>
====机器人的权利====
 
If a machine can be created that has intelligence, could it also ''[[sentience|feel]]''? If it can feel, does it have the same rights as a human? This issue, now known as "[[robot rights]]", is currently being considered by, for example, California's [[Institute for the Future]], although many critics believe that the discussion is premature.<ref name="Robot rights"/> Some critics of [[transhumanism]] argue that any hypothetical robot rights would lie on a spectrum with [[animal rights]] and human rights.<ref Name="Evans 2015">{{cite journal | last = Evans | first = Woody | authorlink = Woody Evans | title = Posthuman Rights: Dimensions of Transhuman Worlds | journal = Teknokultura | volume = 12 | issue = 2 | date = 2015 | df = dmy-all | doi = 10.5209/rev_TK.2015.v12.n2.49072 | doi-access = free }}</ref> The subject is profoundly discussed in the 2010 documentary film ''[[Plug & Pray]]'',<ref>{{cite web|url=http://www.plugandpray-film.de/en/content.html|title=Content: Plug & Pray Film – Artificial Intelligence – Robots -|author=maschafilm|work=plugandpray-film.de|url-status=live|archiveurl=https://web.archive.org/web/20160212040134/http://www.plugandpray-film.de/en/content.html|archivedate=12 February 2016|df=dmy-all}}</ref> and in much sci-fi media such as ''[[Star Trek: The Next Generation]]'', with the character of [[Commander Data]], who fought being disassembled for research and wanted to "become human", and the robotic holograms in ''Voyager''.
 
如果可以创造出一台有智能的机器,那么它是否也能有感受?如果它能感受,它是否应拥有与人类同样的权利?这个如今被称为“机器人权利”的问题正在被一些机构考虑,例如加利福尼亚的未来研究所,尽管许多评论家认为这种讨论为时过早<ref name="Robot rights"/>。一些超人类主义的批评者认为,任何假想的机器人权利都将与动物权利和人权处于同一谱系之上<ref Name="Evans 2015">{{cite journal | last = Evans | first = Woody | authorlink = Woody Evans | title = Posthuman Rights: Dimensions of Transhuman Worlds | journal = Teknokultura | volume = 12 | issue = 2 | date = 2015 | df = dmy-all | doi = 10.5209/rev_TK.2015.v12.n2.49072 | doi-access = free }}</ref>。2010年的纪录片《插头与祷告》(Plug & Pray)<ref>{{cite web|url=http://www.plugandpray-film.de/en/content.html|title=Content: Plug & Pray Film – Artificial Intelligence – Robots -|author=maschafilm|work=plugandpray-film.de|url-status=live|archiveurl=https://web.archive.org/web/20160212040134/http://www.plugandpray-film.de/en/content.html|archivedate=12 February 2016|df=dmy-all}}</ref>以及许多科幻作品都深入探讨了这一主题,例如《星际迷航:下一代》(Star Trek: The Next Generation)中的指挥官戴塔(Data),他为避免被拆解用于研究而抗争,并希望“变成人类”;此外还有《星际迷航:航海家号》(Voyager)中的机器人全息影像。
 
Are there limits to how intelligent machines—or human-machine hybrids—can be? A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. ''Superintelligence'' may also refer to the form or degree of intelligence possessed by such an agent.<ref name="Roberts"/>
 
智能机器——或者说人机混合体——的智能程度有没有极限?超级智能、超智能或超人智能是一种假想的智能体,其智能远远超过最聪明、最有天赋的人类头脑。“超级智能”也可以指这种智能体所拥有的智能的形式或程度。<ref name="Roberts"/>
 
If research into [[artificial general intelligence|Strong AI]] produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to [[Intelligence explosion|recursive self-improvement]].<ref name="recurse"/> The new intelligence could thus increase exponentially and dramatically surpass humans. Science fiction writer [[Vernor Vinge]] named this scenario "[[technological singularity|singularity]]".<ref name=Singularity/> Technological singularity is when accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing or even ending civilization. Because the capabilities of such an intelligence may be impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable.<ref name=Singularity/><ref name="Roberts"/>
 
如果对强人工智能的研究造出了足够智能的软件,它也许能够重新编程并改进自身;改进后的软件会更善于改进自己,从而导致递归式的自我改进<ref name="recurse"/>。这种新的智能因此可能呈指数增长,并远远超过人类。科幻作家弗诺·文奇将这种情景命名为“奇点”<ref name=Singularity/>:技术的加速进步将导致失控效应,即人工智能超出人类的智力与控制能力,从而彻底改变甚至终结人类文明。由于人类可能无法理解这种智能的能力,所以技术奇点之后发生的事情是不可预测的,甚至是深不可测的。<ref name=Singularity/><ref name="Roberts"/>
 
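作为一个粗糙的示意(并非任何研究者的正式模型,其中的函数名和参数纯属假设),下面的 Python 小程序对比了两种情形:当改进率固定时,能力只是普通的指数增长;而当“改进自身的能力”本身也在提高时,增长会快于任何固定的指数,这正是“递归自我改进”一词想表达的直觉。

<syntaxhighlight lang="python">
# 玩具模型:对比固定改进率与“改进率本身也被改进”的增长曲线。
# capability、rate、meta 等参数均为示意性假设值,不代表任何实际预测。

def fixed_rate(capability=1.0, rate=0.1, steps=50):
    """每一步能力提高固定比例:普通指数增长。"""
    for _ in range(steps):
        capability *= 1 + rate
    return capability

def self_improving(capability=1.0, rate=0.1, meta=0.1, steps=50):
    """每一步不仅能力提高,改进率本身也按 meta 比例提高:
    增长快于任何固定指数,可作“递归自我改进”的粗糙类比。"""
    for _ in range(steps):
        capability *= 1 + rate
        rate *= 1 + meta
    return capability

print(f"固定改进率 50 步后:{fixed_rate():.1f}")        # ≈ 117.4
print(f"递归自我改进 50 步后:{self_improving():.3e}")  # 远大于前者
</syntaxhighlight>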
[[Ray Kurzweil]] has used [[Moore's law]] (which describes the relentless exponential improvement in digital technology) to calculate that [[desktop computer]]s will have the same processing power as human brains by the year 2029, and predicts that the singularity will occur in 2045.<ref name=Singularity/>
 
雷·库兹韦尔利用摩尔定律(它描述了数字技术持续的指数级进步)计算出,到2029年,台式电脑的处理能力将与人脑相当,并预测奇点将出现在2045年。<ref name=Singularity/>
 
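这类预测背后只是一个简单的复利计算。下面是一个极简的 Python 示意(其中人脑等效算力、起始算力和翻倍周期都是存在很大争议的假设值,并非库兹韦尔的原始参数):在“算力每两年翻一番”的假设下,估算追平目标算力所需的年数。

<syntaxhighlight lang="python">
import math

def years_until(target_ops, start_ops, doubling_years=2.0):
    """在指数增长假设下,算力从 start_ops 增长到 target_ops 所需的年数。"""
    doublings = math.log2(target_ops / start_ops)
    return doublings * doubling_years

brain_ops = 1e16   # 假设:人脑约为每秒 10^16 次运算(该估计争议很大)
start_ops = 1e10   # 假设:一台桌面电脑的起始算力(纯属示意)

print(f"约需 {years_until(brain_ops, start_ops):.0f} 年")  # log2(1e6) × 2 ≈ 40 年
</syntaxhighlight>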
Robot designer [[Hans Moravec]], cyberneticist [[Kevin Warwick]] and inventor [[Ray Kurzweil]] have predicted that humans and machines will merge in the future into [[cyborg]]s that are more capable and powerful than either.<ref name="Transhumanism"/> This idea, called [[transhumanism]], has roots in [[Aldous Huxley]] and [[Robert Ettinger]].
 
机器人设计师汉斯·莫拉维克、控制论专家凯文·沃里克和发明家雷·库兹韦尔预言,人类和机器将在未来融合成比两者都更有能力、更强大的赛博格(半机械人)<ref name="Transhumanism"/>。这种被称为“超人类主义”的观点起源于阿道司·赫胥黎和罗伯特·艾廷格。
 
[[Edward Fredkin]] argues that "artificial intelligence is the next stage in evolution", an idea first proposed by [[Samuel Butler (novelist)|Samuel Butler]]'s "[[Darwin among the Machines]]" as far back as 1863, and expanded upon by [[George Dyson (science historian)|George Dyson]] in his book of the same name in 1998.<ref name="AI as evolution"/>
 
爱德华·弗雷德金认为,“人工智能是进化的下一个阶段”。早在1863年,塞缪尔·巴特勒的《机器中的达尔文》(Darwin among the Machines)就首次提出了这一观点,乔治·戴森在1998年的同名著作中对其进行了扩展。<ref name="AI as evolution"/>
 
The long-term economic effects of AI are uncertain. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term [[unemployment]], but they generally agree that it could be a net benefit, if [[productivity]] gains are [[Redistribution of income and wealth|redistributed]].<ref>{{Cite web|url=http://www.igmchicago.org/surveys/robots-and-artificial-intelligence|title=Robots and Artificial Intelligence|last=|first=|date=|website=www.igmchicago.org|access-date=2019-07-03}}</ref> A February 2020 European Union white paper on artificial intelligence advocated for artificial intelligence for economic benefits, including "improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, [and] improving the efficiency of production systems through predictive maintenance", while acknowledging potential risks.<ref name=":1" />
 
人工智能的长期经济效应是不确定的。一项针对经济学家的调查显示,对于机器人和AI使用的日益增加是否会导致长期失业率大幅上升,人们的意见存在分歧;但他们普遍认为,如果对生产力提高所带来的收益进行再分配,AI可能带来净收益<ref>{{Cite web|url=http://www.igmchicago.org/surveys/robots-and-artificial-intelligence|title=Robots and Artificial Intelligence|last=|first=|date=|website=www.igmchicago.org|access-date=2019-07-03}}</ref>。2020年2月,欧盟发表了一份关于AI的白皮书,主张为经济利益而使用AI,其中包括“改善医疗保健(例如使诊断更加精确、更好地预防疾病),提高耕作效率,为减缓和适应气候变化做出贡献,以及通过预测性维护提高生产系统的效率”,同时也承认AI存在潜在风险<ref name=":1" />。
 
The development of public sector policies for promoting and regulating artificial intelligence (AI) is considered necessary to both encourage AI and manage associated risks, but challenging.<ref>{{Cite journal|last=Wirtz|first=Bernd W.|last2=Weyerer|first2=Jan C.|last3=Geyer|first3=Carolin|date=2018-07-24|title=Artificial Intelligence and the Public Sector—Applications and Challenges|journal=International Journal of Public Administration|volume=42|issue=7|pages=596–615|doi=10.1080/01900692.2018.1498103|issn=0190-0692}}</ref> In 2017 [[Elon Musk]] called for regulation of AI development.<ref>{{cite news|url=https://www.npr.org/sections/thetwo-way/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk|title=Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'|work=NPR.org|accessdate=27 November 2017|language=en}}</ref> Multiple states now have national policies under development or in place,<ref>{{Cite book|last=Campbell|first=Thomas A.|url=http://www.unicri.it/in_focus/files/Report_AI-An_Overview_of_State_Initiatives_FutureGrasp_7-23-19.pdf|title=Artificial Intelligence: An Overview of State Initiatives|publisher=FutureGrasp, LLC|year=2019|isbn=|location=Evergreen, CO|pages=}}</ref> and in February 2020, the European Union published its draft strategy paper for promoting and regulating AI.<ref name=":12">{{Cite book|last=|first=|url=https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf|title=White Paper: On Artificial Intelligence - A European approach to excellence and trust|publisher=European Commission|year=2020|isbn=|location=Brussels|pages=1}}</ref>
 
制定促进和监管人工智能(AI)的公共部门政策,被认为对鼓励AI发展和管理相关风险都是必要的,但也颇具挑战性<ref>{{Cite journal|last=Wirtz|first=Bernd W.|last2=Weyerer|first2=Jan C.|last3=Geyer|first3=Carolin|date=2018-07-24|title=Artificial Intelligence and the Public Sector—Applications and Challenges|journal=International Journal of Public Administration|volume=42|issue=7|pages=596–615|doi=10.1080/01900692.2018.1498103|issn=0190-0692}}</ref>。2017年,埃隆·马斯克呼吁对AI的发展加以监管<ref>{{cite news|url=https://www.npr.org/sections/thetwo-way/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk|title=Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'|work=NPR.org|accessdate=27 November 2017|language=en}}</ref>。多个国家目前正在制定或已经实施国家层面的政策<ref>{{Cite book|last=Campbell|first=Thomas A.|url=http://www.unicri.it/in_focus/files/Report_AI-An_Overview_of_State_Initiatives_FutureGrasp_7-23-19.pdf|title=Artificial Intelligence: An Overview of State Initiatives|publisher=FutureGrasp, LLC|year=2019|isbn=|location=Evergreen, CO|pages=}}</ref>;2020年2月,欧盟发布了其促进和监管AI的战略文件草案<ref name=":12">{{Cite book|last=|first=|url=https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf|title=White Paper: On Artificial Intelligence - A European approach to excellence and trust|publisher=European Commission|year=2020|isbn=|location=Brussels|pages=1}}</ref>。
 
Thought-capable artificial beings have appeared as storytelling devices since antiquity,<ref name="AI in myth"/> and have been a persistent theme in [[science fiction]].
 
自古以来,具有思考能力的人造生命就作为叙事工具出现<ref name="AI in myth"/>,并一直是科幻作品中的一个永恒主题。
 
A common [[Trope (literature)|trope]] in these works began with [[Mary Shelley]]'s ''[[Frankenstein]]'', where a human creation becomes a threat to its masters. This includes such works as [[2001: A Space Odyssey (novel)|Arthur C. Clarke's]] and [[2001: A Space Odyssey (film)|Stanley Kubrick's]] ''[[2001: A Space Odyssey]]'' (both 1968), with [[HAL 9000]], the murderous computer in charge of the ''[[Discovery One]]'' spaceship, as well as ''[[The Terminator]]'' (1984) and ''[[The Matrix]]'' (1999). In contrast, the rare loyal robots such as Gort from ''[[The Day the Earth Stood Still]]'' (1951) and Bishop from ''[[Aliens (film)|Aliens]]'' (1986) are less prominent in popular culture.<ref>{{cite journal|last1=Buttazzo|first1=G.|title=Artificial consciousness: Utopia or real possibility?|journal=[[Computer (magazine)|Computer]]|date=July 2001|volume=34|issue=7|pages=24–30|doi=10.1109/2.933500|df=dmy-all}}</ref>
 
这些作品中一个常见的桥段始于玛丽·雪莱的《弗兰肯斯坦》:人类的造物成为其主人的威胁。这类作品包括亚瑟·查尔斯·克拉克和斯坦利·库布里克的《2001太空漫游》(2001: A Space Odyssey,均为1968年出品),其中有掌管“发现一号”飞船的凶残计算机哈尔9000(HAL 9000),以及《终结者》(The Terminator,1984)和《黑客帝国》(The Matrix,1999)。相比之下,像《地球停转之日》(The Day the Earth Stood Still,1951)中的高特(Gort)和《异形》(Aliens,1986)中的毕晓普(Bishop)这样罕见的忠诚机器人,在流行文化中就不那么突出了。<ref>{{cite journal|last1=Buttazzo|first1=G.|title=Artificial consciousness: Utopia or real possibility?|journal=[[Computer (magazine)|Computer]]|date=July 2001|volume=34|issue=7|pages=24–30|doi=10.1109/2.933500|df=dmy-all}}</ref>
 
[[Isaac Asimov]] introduced the [[Three Laws of Robotics]] in many books and stories, most notably the "Multivac" series about a super-intelligent computer of the same name. Asimov's laws are often brought up during lay discussions of machine ethics;<ref>Anderson, Susan Leigh. "Asimov's "three laws of robotics" and machine metaethics." AI & Society 22.4 (2008): 477–493.</ref> while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.<ref>{{cite journal | last1 = McCauley | first1 = Lee | year = 2007 | title = AI armageddon and the three laws of robotics | url = | journal = Ethics and Information Technology | volume = 9 | issue = 2| pages = 153–164 | doi=10.1007/s10676-007-9138-2| citeseerx = 10.1.1.85.8904}}</ref>
 
艾萨克·阿西莫夫在许多书籍和故事中介绍了机器人三定律,最著名的是以同名超级智能计算机为主角的“Multivac”系列。阿西莫夫定律经常在对机器伦理的大众讨论中被提起<ref>Anderson, Susan Leigh. "Asimov's "three laws of robotics" and machine metaethics." AI & Society 22.4 (2008): 477–493.</ref>;几乎所有AI研究人员都通过流行文化熟悉阿西莫夫定律,但他们普遍认为这些定律出于许多原因并不实用,其中之一就是它们的模糊性。<ref>{{cite journal | last1 = McCauley | first1 = Lee | year = 2007 | title = AI armageddon and the three laws of robotics | url = | journal = Ethics and Information Technology | volume = 9 | issue = 2| pages = 153–164 | doi=10.1007/s10676-007-9138-2| citeseerx = 10.1.1.85.8904}}</ref>
 
[[Transhumanism]] (the merging of humans and machines) is explored in the [[manga]] ''[[Ghost in the Shell]]'' and the science-fiction series ''[[Dune (novel)|Dune]]''. In the 1980s, artist [[Hajime Sorayama]]'s Sexy Robots series were painted and published in Japan, depicting the actual organic human form with lifelike muscular metallic skins; the later "Gynoids" book followed, and was used by or influenced movie makers including [[George Lucas]] and other creatives. Sorayama never considered these organic robots to be a real part of nature but always an unnatural product of the human mind, a fantasy existing in the mind even when realized in actual form.
 
漫画《攻壳机动队》(Ghost in the Shell)和科幻小说《沙丘》(Dune)探讨了超人类主义(人类与机器的融合)。20世纪80年代,艺术家空山基的“性感机器人”系列在日本绘制并出版,以栩栩如生、富有肌肉感的金属皮肤描绘真实的有机人体形态;随后出版的《女机器人》(Gynoids)一书被包括乔治·卢卡斯在内的电影制作人和其他创作者使用或受其影响。空山基从不认为这些有机机器人是自然的真实组成部分,而始终是人类心智的非自然产物:即使以实体形式实现,它们也仍是存在于头脑中的幻想。
 
Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have [[sentience|the ability to feel]], and thus to suffer. This appears in [[Karel Čapek]]'s ''[[R.U.R.]]'', the films ''[[A.I. Artificial Intelligence]]'' and ''[[Ex Machina (film)|Ex Machina]]'', as well as the novel ''[[Do Androids Dream of Electric Sheep?]]'', by [[Philip K. Dick]]. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.<ref>{{Cite journal|last=Galvan|first=Jill|date=1 January 1997|title=Entering the Posthuman Collective in Philip K. Dick's "Do Androids Dream of Electric Sheep?"|journal=Science Fiction Studies|volume=24|issue=3|pages=413–429|jstor=4240644}}</ref>
 
一些作品向我们展示了具有感受能力、因而也能承受痛苦的人造生命,以此迫使我们直面“是什么使我们成为人类”这一根本问题。这出现在卡雷尔·恰佩克的《罗素姆万能机器人》(R.U.R.)、电影《人工智能》(A.I. Artificial Intelligence)和《机器姬》(Ex Machina),以及菲利普·K·迪克的小说《机器人会梦见电子羊吗?》中。迪克思考了这样一个观点:人工智能技术改变了我们对人类主体性的理解。<ref>{{Cite journal|last=Galvan|first=Jill|date=1 January 1997|title=Entering the Posthuman Collective in Philip K. Dick's "Do Androids Dream of Electric Sheep?"|journal=Science Fiction Studies|volume=24|issue=3|pages=413–429|jstor=4240644}}</ref>
 