Changes

1,898 bytes removed, 22:43, 29 August 2020 (Sat)

No edit summary
== History ==

<!-- THIS IS A SOCIAL HISTORY. TECHNICAL HISTORY IS COVERED IN THE "APPROACHES" AND "TOOLS" SECTIONS. -->

[[File:Didrachm Phaistos obverse CdM.jpg|thumb|Silver [[didrachma]] from [[Crete]] depicting [[Talos]], an ancient mythical [[automaton]] with artificial intelligence]]

== Definitions ==

Computer science defines AI research as the study of "[[intelligent agent]]s": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.<ref name="Definition of AI"/> A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."<ref>{{Cite journal|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|first1=Andreas|last1=Kaplan|first2=Michael|last2=Haenlein|date=1 January 2019|journal=Business Horizons|volume=62|issue=1|pages=15–25|doi=10.1016/j.bushor.2018.08.004}}</ref>

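The "intelligent agent" definition above can be sketched as a perceive–decide–act loop. The thermostat-style agent and the one-room environment below are invented for illustration; they are not from any cited source:

```python
# Minimal sketch of an "intelligent agent": a device that perceives its
# environment and takes actions that move it toward its goal.
# The thermostat agent and one-room environment are invented for illustration.

class Room:
    def __init__(self, temperature):
        self.temperature = temperature

    def apply(self, action):
        # Heating raises the temperature; idling lets the room cool slightly.
        self.temperature += 1.0 if action == "heat" else -0.5

class ThermostatAgent:
    def __init__(self, goal_temperature):
        self.goal = goal_temperature

    def act(self, percept):
        # Choose the action expected to move the room toward the goal.
        return "heat" if percept < self.goal else "idle"

room = Room(temperature=15.0)
agent = ThermostatAgent(goal_temperature=20.0)
for _ in range(20):            # perceive -> act -> environment update
    action = agent.act(room.temperature)
    room.apply(action)

print(round(room.temperature, 1))   # settles near the 20-degree goal
```

The agent never "knows" the room's dynamics; it simply selects whichever action is expected to improve its goal measure, which is all the definition requires.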
== Basics ==

<!-- This section is for explaining, to non-specialists, core concepts that are helpful for understanding AI; feel free to greatly expand or even draw out into its own "Introduction to AI" article, similar to [[Introduction to Quantum Mechanics]] -->

[[File:Overfitted Data.png|thumb|The blue line could be an example of [[overfitting]] a linear function due to random noise.]]
      
Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as [[overfitting]]. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is.{{sfn|Domingos|2015|loc=Chapter 6, Chapter 7}} Besides classic overfitting, learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.{{sfn|Domingos|2015|p=286}} A real-world example is that, unlike humans, current image classifiers don't determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an "adversarial" image that the system misclassifies.{{efn|Adversarial vulnerabilities can also result in nonlinear systems, or from non-pattern perturbations. Some systems are so brittle that changing a single adversarial pixel predictably induces misclassification.}}<ref>{{cite news|title=Single pixel change fools AI programs|url=https://www.bbc.com/news/technology-41845878|accessdate=12 March 2018|work=BBC News|date=3 November 2017}}</ref><ref>{{cite news|title=AI Has a Hallucination Problem That's Proving Tough to Fix|url=https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix/|accessdate=12 March 2018|work=WIRED|date=2018}}</ref><ref>{{cite arxiv|eprint=1412.6572|last1=Goodfellow|first1=Ian J.|last2=Shlens|first2=Jonathon|last3=Szegedy|first3=Christian|title=Explaining and Harnessing Adversarial Examples|class=stat.ML|year=2014}}</ref>

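The "reward fit, penalize complexity" trade-off described above can be sketched with ridge (L2-penalized) regression: the penalty shrinks the coefficients of an overly flexible polynomial model at a small cost in training error. This is a generic illustration of regularization, not a method from the cited sources:

```python
import numpy as np

# Sketch of penalizing model complexity to curb overfitting: fit a degree-9
# polynomial to noisy linear data, with and without an L2 (ridge) penalty.
# The data-generating process and penalty strength are invented for illustration.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + rng.normal(scale=0.2, size=x.size)   # noisy linear ground truth

X = np.vander(x, 10)                                # degree-9 polynomial features
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]        # no complexity penalty

lam = 1.0                                           # complexity-penalty strength
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# The penalty trades a little training error for much smaller coefficients,
# i.e. a simpler theory that is less gerrymandered to the noise.
def train_err(w):
    return np.mean((X @ w - y) ** 2)

print(np.linalg.norm(w_ols), np.linalg.norm(w_ridge))
print(train_err(w_ols), train_err(w_ridge))
```

The unpenalized fit always achieves the lower training error, but its coefficient vector is far larger in norm; the ridge term is exactly the "penalize the theory in accordance with how complex it is" idea in miniature.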
[[File:Détection de personne - exemple 3.jpg|thumb|A self-driving car system may use a neural network to determine which parts of the picture seem to match previous training images of pedestrians, and then model those areas as slow-moving but somewhat unpredictable rectangular prisms that must be avoided.<ref>{{cite book|last1=Matti|first1=D.|last2=Ekenel|first2=H. K.|last3=Thiran|first3=J. P.|title=Combining LiDAR space clustering and convolutional neural networks for pedestrian detection|journal=2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)|date=2017|pages=1–6|doi=10.1109/AVSS.2017.8078512|isbn=978-1-5386-2939-0|arxiv=1710.06160}}</ref><ref>{{cite book|last1=Ferguson|first1=Sarah|last2=Luders|first2=Brandon|last3=Grande|first3=Robert C.|last4=How|first4=Jonathan P.|title=Real-Time Predictive Modeling and Robust Avoidance of Pedestrians with Uncertain, Changing Intentions|journal=Algorithmic Foundations of Robotics XI|volume=107|date=2015|pages=161–177|doi=10.1007/978-3-319-16595-0_10|publisher=Springer, Cham|language=en|series=Springer Tracts in Advanced Robotics|isbn=978-3-319-16594-3|arxiv=1405.5581}}</ref>]]

    
Compared with humans, existing AI lacks several features of human "[[commonsense reasoning]]"; most notably, humans have powerful mechanisms for reasoning about "[[naïve physics]]" such as space, time, and physical interactions. This enables even young children to easily make inferences like "If I roll this pen off a table, it will fall on the floor". Humans also have a powerful mechanism of "[[folk psychology]]" that helps them to interpret natural-language sentences such as "The city councilmen refused the demonstrators a permit because they advocated violence". (A generic AI has difficulty discerning whether the ones alleged to be advocating violence are the councilmen or the demonstrators.)<ref>{{cite news|title=Cultivating Common Sense {{!}} DiscoverMagazine.com|url=http://discovermagazine.com/2017/april-2017/cultivating-common-sense|accessdate=24 March 2018|work=Discover Magazine|date=2017}}</ref><ref>{{cite journal|last1=Davis|first1=Ernest|last2=Marcus|first2=Gary|title=Commonsense reasoning and commonsense knowledge in artificial intelligence|journal=Communications of the ACM|date=24 August 2015|volume=58|issue=9|pages=92–103|doi=10.1145/2701413|url=https://cacm.acm.org/magazines/2015/9/191169-commonsense-reasoning-and-commonsense-knowledge-in-artificial-intelligence/}}</ref><ref>{{cite journal|last1=Winograd|first1=Terry|title=Understanding natural language|journal=Cognitive Psychology|date=January 1972|volume=3|issue=1|pages=1–191|doi=10.1016/0010-0285(72)90002-3}}</ref> This lack of "common knowledge" means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location or the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.<ref>{{cite news|title=Don't worry: Autonomous cars aren't coming tomorrow (or next year)|url=http://autoweek.com/article/technology/fully-autonomous-vehicles-are-more-decade-down-road|accessdate=24 March 2018|work=Autoweek|date=2016}}</ref><ref>{{cite news|last1=Knight|first1=Will|title=Boston may be famous for bad drivers, but it's the testing ground for a smarter self-driving car|url=https://www.technologyreview.com/s/608871/finally-a-driverless-car-with-some-common-sense/|accessdate=27 March 2018|work=MIT Technology Review|date=2017|language=en}}</ref><ref>{{cite journal|last1=Prakken|first1=Henry|title=On the problem of making autonomous vehicles conform to traffic law|journal=Artificial Intelligence and Law|date=31 August 2017|volume=25|issue=3|pages=341–363|doi=10.1007/s10506-017-9210-0|doi-access=free}}</ref>

== Challenges ==

<!--- This is linked to in the introduction to the article and to the "AI research" section -->

=== Reasoning, problem solving ===

<!-- This is linked to in the introduction --><!-- SOLVED PROBLEMS -->

=== Knowledge representation ===

<!-- This is linked to in the introduction -->

[[File:GFO taxonomy tree.png|right|thumb|An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.]]

    
{{Main|Knowledge representation|Commonsense knowledge}}
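The caption's idea of concepts linked by relations can be sketched as a small "is-a" hierarchy with a transitive subsumption query. The toy animal domain below is invented for illustration:

```python
# Toy ontology: concepts in a domain plus "is-a" relationships between them.
# The animal domain is invented for illustration.
IS_A = {
    "dog": "mammal",
    "cat": "mammal",
    "mammal": "animal",
    "sparrow": "bird",
    "bird": "animal",
}

def is_a(concept, ancestor):
    """Follow is-a links upward to test whether `concept` falls under `ancestor`."""
    while concept in IS_A:
        concept = IS_A[concept]
        if concept == ancestor:
            return True
    return False

print(is_a("dog", "animal"))   # dog -> mammal -> animal
```

Real knowledge-representation systems add many more relation types (part-of, causes, properties with defaults), but subsumption queries of this kind are the basic operation an ontology supports.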
=== Planning ===

<!-- This is linked to in the introduction -->

[[File:Hierarchical-control-system.svg|thumb| A [[hierarchical control system]] is a form of [[control system]] in which a set of devices and governing software is arranged in a hierarchy.]]

=== Learning ===

<!-- This is linked to in the introduction -->

=== Natural language processing ===

<!-- This is linked to in the introduction -->

[[File:ParseTree.svg|thumb| A [[parse tree]] represents the [[syntax|syntactic]] structure of a sentence according to some [[formal grammar]].]]

    
{{Main|Natural language processing}}
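The parse tree in the figure can be sketched as nested tuples, with a small traversal that recovers the sentence from the leaves. The tiny grammar labels below are illustrative, not a real treebank convention:

```python
# A parse tree as nested (label, children...) tuples: the syntactic structure
# of "the cat sat" under a tiny, invented grammar.
tree = ("S",
        ("NP", ("Det", "the"), ("N", "cat")),
        ("VP", ("V", "sat")))

def leaves(node):
    """Collect the words at the leaves, left to right."""
    label, *children = node
    if len(children) == 1 and isinstance(children[0], str):
        return [children[0]]          # preterminal: (part-of-speech, word)
    return [word for child in children for word in leaves(child)]

print(" ".join(leaves(tree)))
```

A parser does the reverse: given the word sequence and a formal grammar, it recovers a tree like this one; the tree then makes relations such as "subject of" and "object of" explicit for later processing.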
=== Perception ===

<!-- This is linked to in the introduction -->

[[File:Ääretuvastuse näide.png|thumb|[[Feature detection (computer vision)|Feature detection]] (pictured: [[edge detection]]) helps AI compose informative abstract structures out of raw data.]]

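Edge detection of the kind pictured can be sketched as cross-correlating an image with a Sobel kernel; the response is largest where pixel intensity changes sharply. The synthetic two-tone image below is invented for illustration:

```python
import numpy as np

# Sketch of edge detection: slide a horizontal Sobel kernel over an image
# and read off large responses as vertical edges. The image is synthetic.
def cross_correlate(img, kernel):
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

image = np.zeros((5, 8))
image[:, 4:] = 1.0                     # dark left half, bright right half

response = cross_correlate(image, sobel_x)
print(response[0])                     # peaks at the dark/bright boundary
```

Raw pixels carry no structure by themselves; the filter response is exactly the kind of "informative abstract structure" the caption refers to, and learned convolutional features generalize this hand-designed kernel.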
=== Motion and manipulation ===

<!-- This is linked to in the introduction -->

=== Social intelligence ===

<!-- This is linked to in the introduction -->

[[File:Kismet robot at MIT Museum.jpg|thumb|[[Kismet (robot)|Kismet]], a robot with rudimentary social skills{{sfn|''Kismet''}}]]

=== General intelligence ===

<!-- This is linked to in the introduction -->

== Approaches ==

There is no established unifying theory or [[paradigm]] that guides AI research. Researchers disagree about many issues.<ref>[[Nils Nilsson (researcher)|Nils Nilsson]] writes: "Simply put, there is wide disagreement in the field about what AI is all about" {{Harv|Nilsson|1983|p=10}}.</ref> A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying [[psychology]] or [[Neuroscience|neurobiology]]? Or is [[human biology]] as irrelevant to AI research as bird biology is to [[aeronautical engineering]]?<ref name="Biological intelligence vs. intelligence in general"/>

=== Cybernetics and brain simulation ===

{{Main|Cybernetics|Computational neuroscience}}

=== Symbolic ===

{{Main|Symbolic AI}}

==== Cognitive simulation ====

Economist [[Herbert A. Simon|Herbert Simon]] and [[Allen Newell]] studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as [[cognitive science]], [[operations research]] and [[management science]]. Their research team used the results of [[psychology|psychological]] experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at [[Carnegie Mellon University]], would eventually culminate in the development of the [[Soar (cognitive architecture)|Soar]] architecture in the mid-1980s.<ref name="AI at CMU in the 60s"/><ref name="Soar"/>

==== Logic-based ====

Unlike Simon and Newell, [[John McCarthy (computer scientist)|John McCarthy]] felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.<ref name="Biological intelligence vs. intelligence in general"/> His laboratory at [[Stanford University|Stanford]] ([[Stanford Artificial Intelligence Laboratory|SAIL]]) focused on using formal [[logic]] to solve a wide variety of problems, including [[knowledge representation]], [[automated planning and scheduling|planning]] and [[machine learning|learning]].<ref name="AI at Stanford in the 60s"/> Logic was also the focus of the work at the [[University of Edinburgh]] and elsewhere in Europe which led to the development of the programming language [[Prolog]] and the science of [[logic programming]].<ref name="AI at Edinburgh and France in the 60s"/>

==== Anti-logic or scruffy ====

Researchers at [[MIT]] (such as [[Marvin Minsky]] and [[Seymour Papert]])<ref name="AI at MIT in the 60s"/> found that solving difficult problems in [[computer vision|vision]] and [[natural language processing]] required ad-hoc solutions—they argued that there was no simple and general principle (like [[logic]]) that would capture all the aspects of intelligent behavior. [[Roger Schank]] described their "anti-logic" approaches as "[[Neats vs. scruffies|scruffy]]" (as opposed to the "[[neats vs. scruffies|neat]]" paradigms at [[Carnegie Mellon University|CMU]] and Stanford).<ref name="Neats vs. scruffies"/> [[Commonsense knowledge bases]] (such as [[Doug Lenat]]'s [[Cyc]]) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.<ref name="Cyc"/>

====Knowledge-based====

When computers with large memories became available around 1970, researchers from all three traditions began to build [[knowledge representation|knowledge]] into AI applications.<ref name="Knowledge revolution"/> This "knowledge revolution" led to the development and deployment of [[expert system]]s (introduced by [[Edward Feigenbaum]]), the first truly successful form of AI software.<ref name="Expert systems"/> A key component of the system architecture for all expert systems is the knowledge base, which stores facts and rules that illustrate AI.<ref>{{Cite journal |last=Frederick |first=Hayes-Roth |last2=William |first2=Murray |last3=Leonard |first3=Adelman |title=Expert systems|journal=AccessScience |language=en |doi=10.1036/1097-8542.248550}}</ref> The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

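The knowledge base of facts and rules described above can be sketched as forward chaining: a rule fires whenever all its premises are known, adding its conclusion as a new fact, until nothing changes. The toy rules and facts below are invented for illustration, not drawn from any real expert system:

```python
# Sketch of an expert-system knowledge base: facts plus if-then rules,
# evaluated by forward chaining until no rule adds anything new.
# The toy rules and facts are invented for illustration.
rules = [
    ({"has_fur", "gives_milk"}, "is_mammal"),
    ({"is_mammal", "barks"}, "is_dog"),
]
facts = {"has_fur", "gives_milk", "barks"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)       # rule fires: derive a new fact
            changed = True

print(sorted(facts))
```

Note that "is_dog" only becomes derivable after "is_mammal" has been added, which is why the loop runs to a fixpoint; production systems of the 1970s–80s scaled this basic cycle to thousands of hand-written rules.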
=== Sub-symbolic ===

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially [[machine perception|perception]], [[robotics]], [[machine learning|learning]] and [[pattern recognition]]. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.<ref name="Symbolic vs. sub-symbolic"/> Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

第987行: 第981行:       −
====具身智慧 Embodied intelligence ====
      
This includes [[embodied agent|embodied]], [[situated]], [[behavior-based AI|behavior-based]], and [[nouvelle AI]]. Researchers from the related field of [[robotics]], such as [[Rodney Brooks]], rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.<ref name="Embodied AI"/> Their work revived the non-symbolic point of view of the early [[cybernetic]]s researchers of the 1950s and reintroduced the use of [[control theory]] in AI. This coincided with the development of the [[embodied mind thesis]] in the related field of [[cognitive science]]: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.
 
====计算智能与软计算 Computational intelligence and soft computing====
      
Interest in [[Artificial neural network|neural networks]] and "[[connectionism]]" was revived by [[David Rumelhart]] and others in the middle of the 1980s.<ref name="Revival of connectionism"/> [[Artificial neural network]]s are an example of [[soft computing]]—they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other [[soft computing]] approaches to AI include [[fuzzy system]]s, [[Grey system theory]], [[evolutionary computation]] and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of [[computational intelligence]].<ref name="Computational intelligence"/>
 
===统计学习 Statistical learning ===
      
Much of traditional [[Symbolic artificial intelligence|GOFAI]] got bogged down on ''ad hoc'' patches to [[symbolic computation]] that worked on their own toy models but failed to generalize to real-world results. However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as [[hidden Markov model]]s (HMM), [[information theory]], and normative Bayesian [[decision theory]] to compare or to unify competing architectures. The shared mathematical language permitted a high level of collaboration with more established fields (like [[mathematics]], economics or [[operations research]]).{{efn|While such a "victory of the neats" may be a consequence of the field becoming more mature, [[Artificial Intelligence: A Modern Approach|AIMA]] states that in practice both [[neats and scruffies|neat and scruffy]] approaches continue to be necessary in AI research.}} Compared with GOFAI, new "statistical learning" techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as [[data mining]], without necessarily acquiring a semantic understanding of the datasets. The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models; AI research was becoming more [[scientific method|scientific]]. 
Nowadays results of experiments are often rigorously measurable, and are sometimes (with difficulty) reproducible.<ref name="Formal methods in AI"/><ref>{{cite news|last1=Hutson|first1=Matthew|title=Artificial intelligence faces reproducibility crisis|url=http://science.sciencemag.org/content/359/6377/725|accessdate=28 April 2018|work=[[Science Magazine|Science]]|date=16 February 2018|pages=725–726|language=en|doi=10.1126/science.359.6377.725|bibcode=2018Sci...359..725H}}</ref> Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language.{{sfn|Norvig|2012}} Critics note that the shift from GOFAI to statistical learning is often also a shift away from [[explainable AI]]. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.{{sfn|Langley|2011}}{{sfn|Katz|2012}}
 
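As an illustration of the statistical techniques mentioned above, the forward algorithm computes the total probability that a hidden Markov model produced a given observation sequence. This is a minimal sketch only: the two-state weather model and all of its probabilities are invented for illustration and are not from the source.

```python
def forward(observations, states, start_p, trans_p, emit_p):
    """Forward algorithm: probability that an HMM generated the sequence."""
    # Initialization with the first observation.
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    # Recursion over the remaining observations.
    for obs in observations[1:]:
        alpha = {s: emit_p[s][obs] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

# Hypothetical two-state weather model; all numbers are invented.
states = ("rainy", "sunny")
start_p = {"rainy": 0.6, "sunny": 0.4}
trans_p = {"rainy": {"rainy": 0.7, "sunny": 0.3},
           "sunny": {"rainy": 0.4, "sunny": 0.6}}
emit_p = {"rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
p = forward(("walk", "shop", "clean"), states, start_p, trans_p, emit_p)
```

Summing this quantity over every possible observation sequence of a fixed length yields 1, which is a useful sanity check that the model defines a proper probability distribution.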
=== 方法整合 Integrating the approaches ===
      
;Intelligent agent paradigm: An [[intelligent agent]] is a system that perceives its environment and takes actions that maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. More complicated agents include human beings and organizations of human beings (such as [[firm]]s). The paradigm allows researchers to directly compare or even combine different approaches to isolated problems, by asking which agent is best at maximizing a given "goal function". An agent that solves a specific problem can use any approach that works—some agents are symbolic and logical, some are sub-symbolic [[artificial neural network]]s and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields—such as [[decision theory]] and economics—that also use concepts of abstract agents. Building a complete agent requires researchers to address realistic problems of integration; for example, because sensory systems give uncertain information about the environment, planning systems must be able to function in the presence of uncertainty. The intelligent agent paradigm became widely accepted during the 1990s.<ref name="Intelligent agents"/>
 
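The perceive–act loop of the intelligent agent paradigm can be sketched in a few lines; the one-dimensional world, the three-action set, and the distance-based goal function below are illustrative assumptions, not from the source.

```python
def run_agent(state, target, steps=10):
    """A minimal intelligent agent on a one-dimensional line: at each step it
    perceives its position and takes the action that maximizes a goal
    function (closeness to a target position)."""
    actions = (-1, 0, 1)                 # move left, stay put, move right
    goal = lambda s: -abs(s - target)    # higher value = closer to the goal
    for _ in range(steps):
        # Predict the outcome of each action and pick the best one.
        state += max(actions, key=lambda a: goal(state + a))
    return state

final_position = run_agent(state=0, target=5)   # the agent walks to 5 and stays
```

The point of the paradigm is that the same "maximize a goal function" framing applies whether the agent's internals are symbolic rules, a neural network, or anything else.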
==工具 Tools ==
      
AI has developed many tools to solve the most difficult problems in [[computer science]]. A few of the most general of these methods are discussed below.
 
===搜索和优化 Search and optimization ===
[[File:ParticleSwarmArrowsAnimation.gif|thumb|A [[particle swarm optimization|particle swarm]] seeking the [[global minimum]][粒子群搜索全局最小]]]
    
[[Evolutionary computation]] uses a form of optimization search. For example, an evolutionary algorithm may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, [[artificial selection|selecting]] only the fittest to survive each generation (refining the guesses). Classic [[evolutionary algorithms]] include [[genetic algorithms]], [[gene expression programming]], and [[genetic programming]].<ref name="Genetic programming"/> Alternatively, distributed search processes can coordinate via [[swarm intelligence]] algorithms. Two popular swarm algorithms used in search are [[particle swarm optimization]] (inspired by bird [[flocking (behavior)|flocking]]) and [[ant colony optimization]] (inspired by [[ant trail]]s).<ref name="Society based learning"/><ref>{{cite book|author1=Daniel Merkle|author2=Martin Middendorf|editor1-last=Burke|editor1-first=Edmund K.|editor2-last=Kendall|editor2-first=Graham|title=Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques|date=2013|publisher=Springer Science & Business Media|isbn=978-1-4614-6940-7|language=en|chapter=Swarm Intelligence}}</ref>
 
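The mutate–recombine–select loop described above can be sketched as a minimal genetic algorithm. The bit-string encoding, the parameter values, and the "one-max" fitness function (count the 1 bits) are illustrative choices for this sketch, not from the source.

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60, mutation_rate=0.05):
    """Minimal genetic algorithm over fixed-length bit strings."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fittest half survives into the next generation.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Recombination: single-point crossover between two random parents.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            # Mutation: flip each bit with small probability.
            children.append([1 - g if random.random() < mutation_rate else g
                             for g in child])
        pop = parents + children
    return max(pop, key=fitness)

# "One-max" toy problem: fitness is simply the number of 1 bits.
best = evolve(fitness=sum)
```

Within a few dozen generations the population converges on strings that are nearly all 1s, illustrating how selection pressure refines the initial random guesses.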
===逻辑 Logic ===
        第1,174行: 第1,164行:       −
===不确定推理的概率方法 Probabilistic methods for uncertain reasoning ===
[[File:EM Clustering of Old Faithful data.gif|right|frame|[[Expectation-maximization]] clustering of [[Old Faithful]] eruption data starts from a random guess but then successfully converges on an accurate clustering of the two physically distinct modes of eruption.[[期望-最大化老实泉喷发数据的聚类从一个随机的猜测开始,然后成功地收敛到两个物理上截然不同的喷发模式的精确聚类]]]]
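The expectation–maximization procedure shown in the figure can be sketched for a one-dimensional mixture of two Gaussians. This is a minimal sketch under simplifying assumptions: initialization is taken from the extremes of the data rather than a fully random guess, and the two-cluster synthetic data are invented for illustration.

```python
import math, random

def em_two_gaussians(data, iters=50):
    """EM for a one-dimensional mixture of two Gaussians: alternate the
    expectation (E) and maximization (M) steps until the parameters settle."""
    # Simplified initialization: put the two means at the extremes of the data.
    mu = [min(data), max(data)]
    sigma, weight = [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: each component's responsibility for each data point.
        resp = []
        for x in data:
            p = [weight[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
                 for k in (0, 1)]
            resp.append([p[0] / (p[0] + p[1]), p[1] / (p[0] + p[1])])
        # M-step: re-estimate means, spreads, and mixing weights.
        for k in (0, 1):
            n_k = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / n_k
            sigma[k] = max(math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                         for r, x in zip(resp, data)) / n_k), 1e-6)
            weight[k] = n_k / len(data)
    return mu

# Synthetic data drawn from two well-separated clusters (invented example).
random.seed(0)
data = ([random.gauss(0.0, 0.5) for _ in range(100)]
        + [random.gauss(5.0, 0.5) for _ in range(100)])
means = em_two_gaussians(data)   # settles near the two true means, 0 and 5
```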
===分类器与统计学习方法 Classifiers and statistical learning methods ===
===人工神经网络 Artificial neural networks ===
[[File:Artificial neural network.svg|thumb|A neural network is an interconnected group of nodes, akin to the vast network of [[neuron]]s in the [[human brain]].神经网络是一组相互连接的节点,类似于人脑中庞大的神经元网络。]]
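The caption's "interconnected group of nodes" can be made concrete with a minimal feedforward network: each node sums its weighted inputs and passes the result through an activation function. The 2-2-1 architecture, the sigmoid activation, and the hand-picked weights (which happen to compute XOR) are illustrative assumptions for this sketch, not from the source.

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer of nodes with a sigmoid activation."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# A 2-2-1 network with hand-picked weights that computes XOR (illustrative).
hidden = lambda x: dense(x, weights=[[20, 20], [-20, -20]], biases=[-10, 30])
output = lambda h: dense(h, weights=[[20, 20]], biases=[-30])

xor = {(a, b): round(output(hidden([a, b]))[0]) for a in (0, 1) for b in (0, 1)}
```

In practice the weights are not hand-picked but learned from examples, for instance by the backpropagation algorithm discussed elsewhere in the article.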
====深层前馈神经网络 Deep feedforward neural networks ====
====深层递归神经网络 Deep recurrent neural networks ====
{{Main|Recurrent neural networks}}
===评估进度 Evaluating progress ===
      
{{Further|Progress in artificial intelligence|Competitions and prizes in artificial intelligence}}
 
== 应用 Applications{{anchor|Goals}} ==


[[File:Automated online assistant.png|thumb|An [[automated online assistant]] providing customer service on a web page – one of many very primitive applications of artificial intelligence。AI的初级应用之一:提供客户服务的网页自动化助理]]
      
{{Main|Applications of artificial intelligence}}
 
===医疗 Healthcare ===
      
{{Main|Artificial intelligence in healthcare}}
 
[[File:Laproscopic Surgery Robot.jpg|thumb| A patient-side surgical arm of [[Da Vinci Surgical System]]]]AI in healthcare is often used for classification, whether to automate initial evaluation of a CT scan or EKG or to identify high-risk patients for population health. The breadth of applications is rapidly increasing.在医疗保健中, AI通常被用于分类,它既可以自动对 CT 扫描或心电图EKG进行初步评估,又可以在人口健康调查中识别高风险患者。AI的应用范围正在迅速扩大。
              −
As an example, AI is being applied to the high-cost problem of dosage issues—where findings suggested that AI could save $16 billion. In 2016, a groundbreaking study in California found that a mathematical formula developed with the help of AI correctly determined the accurate dose of immunosuppressant drugs to give to organ patients.<ref>{{Cite news|url=https://hbr.org/2018/05/10-promising-ai-applications-in-health-care|title=10 Promising AI Applications in Health Care|date=2018-05-10|work=Harvard Business Review|access-date=2018-08-28|archive-url=https://web.archive.org/web/20181215015645/https://hbr.org/2018/05/10-promising-ai-applications-in-health-care|archive-date=15 December 2018|url-status=dead}}</ref> [[File:X-ray of a hand with automatic bone age calculation.jpg|thumb|[[Projectional radiography|X-ray]] of a hand, with automatic calculation of [[bone age]] by computer software,一只手的X光射线图,自动计算了骨龄]]
===汽车 Automotive ===
      
{{Main|driverless cars}}
 
===金融和经济 Finance and economics ===
      
[[Financial institution]]s have long used [[artificial neural network]] systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in [[banking]] can be traced back to 1987 when [[Security Pacific National Bank]] in US set-up a Fraud Prevention Task force to counter the unauthorized use of debit cards.<ref>{{Cite web|url=https://www.latimes.com/archives/la-xpm-1990-01-17-fi-233-story.html|title=Impact of Artificial Intelligence on Banking|last=Christy|first=Charles A.|website=latimes.com|access-date=2019-09-10|date=17 January 1990}}</ref> Programs like Kasisto and Moneystream are using AI in financial services.
 
===网络安全 Cybersecurity ===
      
{{More citations needed section|date=January 2020}}
 
===政府 Government ===
      
{{Main|Artificial intelligence in government}}
 
===与法律有关的专业 Law-related professions ===
      
{{Main|Legal informatics#Artificial intelligence}}
 
===电子游戏 Video games ===
      
{{Main|Artificial intelligence (video games)}}
 
  --[[用户:Thingamabob|Thingamabob]]([[用户讨论:Thingamabob|讨论]]) neuroevolutionary training of platoons 未找到标准翻译
 
   −
===军事 Military ===
      
{{Further|Artificial intelligence arms race|Lethal autonomous weapon|Unmanned combat aerial vehicle}}
 
===服务 Hospitality ===
      
In the hospitality industry, Artificial Intelligence based solutions are used to reduce staff load and increase efficiency<ref>{{cite web|title=Role of AI in travel and Hospitality Industry|url=https://www.infosys.com/industries/travel-hospitality/documents/ai-travel-hospitality.pdf|accessdate=14 January 2020|work=Infosys|date=2018}}</ref> by reducing the frequency of repetitive tasks and by supporting trend analysis, guest interaction, and customer-needs prediction.<ref>{{cite web|title=Advanced analytics in hospitality|url=https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/advanced-analytics-in-hospitality|accessdate=14 January 2020|work=McKinsey & Company|date=2017}}</ref> Hotel services backed by Artificial Intelligence take the form of chatbots,<ref>{{cite web|title=Current applications of Artificial Intelligence in tourism and hospitality|url=https://www.researchgate.net/publication/333242550|accessdate=14 January 2020|work=Sinteza|date=2019}}</ref> applications, virtual voice assistants and service robots.
 
===审计 Audit ===
      
For financial statements audit, AI makes continuous audit possible. AI tools can analyze many sets of different information immediately. The potential benefits are that overall audit risk is reduced, the level of assurance is increased, and the duration of the audit is shortened.<ref>{{cite journal|last1=Chang|first1=Hsihui|last2=Kao|first2=Yi-Ching|last3=Mashruwala|first3=Raj|last4=Sorensen|first4=Susan M.|title=Technical Inefficiency, Allocative Inefficiency, and Audit Pricing|journal=Journal of Accounting, Auditing & Finance|volume=33|issue=4|date=10 April 2017|pages=580–600|doi=10.1177/0148558X17696760}}</ref>
 
===广告 Advertising ===
      
It is possible to use AI to predict or generalize the behavior of customers from their [[digital footprints]] in order to target them with personalized promotions or build customer personas automatically.<ref name="Matz et al 2017">Matz, S. C., et al. "Psychological targeting as an effective approach to digital mass persuasion." Proceedings of the National Academy of Sciences (2017): 201710966.</ref> A documented case reports that online gambling companies were using AI to improve customer targeting.<ref>{{cite web |last1=Busby |first1=Mattha |title=Revealed: how bookies use AI to keep gamblers hooked |url=https://www.theguardian.com/technology/2018/apr/30/bookies-using-ai-to-keep-gamblers-hooked-insiders-say |website=the Guardian |language=en |date=30 April 2018}}</ref>
 
===艺术 Art ===
      
{{Further|Computer art}}
 
== 哲学和伦理学 Philosophy and ethics ==
      
{{Main|Philosophy of artificial intelligence|Ethics of artificial intelligence}}
 
===人工智能的局限性 The limits of artificial general intelligence ===
      
{{Main|Philosophy of AI|Turing test|Physical symbol systems hypothesis|Dreyfus' critique of AI|The Emperor's New Mind|AI effect}}
 
=== 潜在危害 Potential harm{{anchor|Potential_risks_and_moral_reasoning}} ===

Widespread use of artificial intelligence could have [[unintended consequences]] that are dangerous or undesirable. Scientists from the [[Future of Life Institute]], among others, described some short-term research goals to see how AI influences the economy, the laws and ethics that are involved with AI, and how to minimize AI security risks. In the long term, the scientists have proposed to continue optimizing function while minimizing possible security risks that come along with new technologies.<ref>Russell, Stuart, Daniel Dewey, and Max Tegmark. "Research Priorities for Robust and Beneficial Artificial Intelligence." AI Magazine 36:4 (2015). 8 December 2016.</ref>
 
==== 存在风险 Existential risk ====

{{Main|Existential risk from artificial general intelligence}}
 
==== 人性贬值 Devaluation of humanity ====

{{Main|Computer Power and Human Reason}}
 
==== 社会正义 Social justice ====

{{further|Algorithmic bias}}
 
==== 劳动力需求降低 Decrease in demand for human labor ====

{{Further|Technological unemployment#21st century}}
 
==== 自动化武器 Autonomous weapons ====

{{See also|Lethal autonomous weapon}}
 
=== 道德机器 Ethical machines ===

Machines with intelligence have the potential to use their intelligence to prevent harm and minimize risks; they may have the ability to use [[ethics|ethical reasoning]] to better choose their actions in the world. As such, there is a need for policy making to devise policies for and regulate artificial intelligence and robotics.<ref>{{Cite journal|last=Iphofen|first=Ron|last2=Kritikos|first2=Mihalis|date=2019-01-03|title=Regulating artificial intelligence and robotics: ethics by design in a digital society|journal=Contemporary Social Science|pages=1–15|doi=10.1080/21582041.2018.1563803|issn=2158-2041}}</ref> Research in this area includes [[machine ethics]], [[artificial moral agents]], and [[friendly AI]], and discussion of building a [[human rights]] framework is also underway.<ref>{{cite web|url=https://www.voanews.com/episode/ethical-ai-learns-human-rights-framework-4087171|title=Ethical AI Learns Human Rights Framework|accessdate=10 November 2019|website=Voice of America}}</ref>
 
==== 人工道德智能体 Artificial moral agents ====

Wendell Wallach introduced the concept of [[artificial moral agents]] (AMA) in his book ''Moral Machines''.<ref>Wendell Wallach (2010). ''Moral Machines'', Oxford University Press.</ref> For Wallach, AMAs have become a part of the research landscape of artificial intelligence as guided by its two central questions, which he identifies as "Does Humanity Want Computers Making Moral Decisions"<ref>Wallach, pp 37–54.</ref> and "Can (Ro)bots Really Be Moral".<ref>Wallach, pp 55–73.</ref> For Wallach, the question centers not on ''whether'' machines can demonstrate the equivalent of moral behavior, but on the ''constraints'' which society may place on the development of AMAs.<ref>Wallach, Introduction chapter.</ref>
 
==== 机器伦理学 Machine ethics ====

{{Main|Machine ethics}}
 
==== 善恶AI Malevolent and friendly AI ====

{{Main|Friendly AI}}
 
   --[[用户:Thingamabob|Thingamabob]]([[用户讨论:Thingamabob|讨论]])“我认为担心我们在未来几百年的研发出邪恶AI是无稽之谈。我认为,这种担忧源于一个根本性的错误,即没有认识到AI在某些领域进展可以很快但构建有意识有感情的智能是件庞杂且艰巨的任务。”该句为意译
 
   −
=== 机器意识、知觉和思维 Machine consciousness, sentience and mind ===

{{Main|Artificial consciousness}}
 
==== 意识 Consciousness ====

{{Main|Hard problem of consciousness|Theory of mind}}
 
==== 计算主义和功能主义 Computationalism and functionalism ====

{{Main|Computationalism|Functionalism (philosophy of mind)}}
 
==== 强人工智能假说 Strong AI hypothesis ====

{{Main|Chinese room}}
 
==== 机器人权利 Robot rights ====

{{Main|Robot rights}}
 
=== 超级智能 Superintelligence ===

{{Main|Superintelligence}}
 
==== 技术奇异点 Technological singularity ====

{{Main|Technological singularity|Moore's law}}
 
==== 超人类主义 Transhumanism ====

{{Main|Transhumanism}}
 
== 经济学 Economics ==

The long-term economic effects of AI are uncertain. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term [[unemployment]], but they generally agree that it could be a net benefit, if [[productivity]] gains are [[Redistribution of income and wealth|redistributed]].<ref>{{Cite web|url=http://www.igmchicago.org/surveys/robots-and-artificial-intelligence|title=Robots and Artificial Intelligence|last=|first=|date=|website=www.igmchicago.org|access-date=2019-07-03}}</ref> A February 2020 European Union white paper on artificial intelligence advocated for artificial intelligence for economic benefits, including "improving healthcare (e.g. making diagnosis more  precise,  enabling  better  prevention  of  diseases), increasing  the  efficiency  of  farming, contributing  to climate  change mitigation  and  adaptation, [and] improving  the  efficiency  of production systems through predictive maintenance", while acknowledging potential risks.<ref name=":1" />
 
== 规范 Regulation ==

{{Main|Regulation of artificial intelligence|Regulation of algorithms}}
 
== 小说 In fiction ==

{{Main|Artificial intelligence in fiction}}
 
[[File:Capek play.jpg|thumb|The word "robot" itself was coined by [[Karel Čapek]] in his 1921 play ''[[R.U.R.]]'', the title standing for "[[Rossum's Universal Robots]]"。“机器人”这个词本身是由卡雷尔·恰佩克在他1921年的戏剧《R.U.R.》中创造的,剧名代表“Rossum的万能机器人”(Rossum's Universal Robots)]]

== 参见 See also ==

{{portal|Computer programming}}
 