第1,652行: |
第1,652行: |
| | | |
| # Is [[artificial general intelligence]] possible? Can a machine solve any problem that a human being can solve using intelligence? Or are there hard limits to what a machine can accomplish? | | # Is [[artificial general intelligence]] possible? Can a machine solve any problem that a human being can solve using intelligence? Or are there hard limits to what a machine can accomplish? |
− | Is artificial general intelligence possible? Can a machine solve any problem that a human being can solve using intelligence? Or are there hard limits to what a machine can accomplish?
− | 通用人工智能可能实现吗?机器能解决任何人类智能能解决的问题吗?或者一台机器所能完成的事情是否有严格的界限?
| # Are intelligent machines dangerous? How can we ensure that machines behave ethically and that they are used ethically? | | # Are intelligent machines dangerous? How can we ensure that machines behave ethically and that they are used ethically? |
− | Are intelligent machines dangerous? How can we ensure that machines behave ethically and that they are used ethically?
− | 智能机器危险吗?我们怎样才能确保机器的行为和使用机器的过程符合道德规范?
| # Can a machine have a [[mind]], [[consciousness]] and [[philosophy of mind|mental states]] in exactly the same sense that human beings do? Can a machine be [[Sentience|sentient]], and thus deserve certain rights? Can a machine [[intention]]ally cause harm? | | # Can a machine have a [[mind]], [[consciousness]] and [[philosophy of mind|mental states]] in exactly the same sense that human beings do? Can a machine be [[Sentience|sentient]], and thus deserve certain rights? Can a machine [[intention]]ally cause harm? |
− | Can a machine have a mind, consciousness and mental states in exactly the same sense that human beings do? Can a machine be sentient, and thus deserve certain rights? Can a machine intentionally cause harm?
− | 机器能否拥有与人类完全相同的思维、意识和精神状态?一台机器是否能拥有直觉,因此得到某些权利?机器会做出刻意伤害吗?
− | ===人工智能的局限性 The limits of artificial general intelligence ===
| | | |
| | | |
| + | # 通用人工智能可能实现吗?机器能够解决人类运用智能所能解决的任何问题吗?或者,机器所能完成的事情是否有严格的界限? |
| + | # 智能机器危险吗?我们怎样才能确保机器的行为和使用机器的过程符合道德规范? |
| + | # 机器能否拥有与人类完全相同的思维、意识和精神状态?一台机器能否具有感知能力,从而应当享有某些权利?机器会故意造成伤害吗? |
| | | |
| | | |
| + | ===通用人工智能的局限性=== |
| | | |
| {{Main|Philosophy of AI|Turing test|Physical symbol systems hypothesis|Dreyfus' critique of AI|The Emperor's New Mind|AI effect}} | | {{Main|Philosophy of AI|Turing test|Physical symbol systems hypothesis|Dreyfus' critique of AI|The Emperor's New Mind|AI effect}} |
| | | |
| Can a machine be intelligent? Can it "think"? | | Can a machine be intelligent? Can it "think"? |
第1,690行: |
第1,670行: |
| | | |
| 机器能够具有智能吗?它能“思考”吗? | | 机器能够具有智能吗?它能“思考”吗? |
| | | |
| | | |
第1,699行: |
第1,676行: |
| Alan Turing's "polite convention": We need not decide if a machine can "think"; we need only decide if a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the Turing test. | | Alan Turing's "polite convention": We need not decide if a machine can "think"; we need only decide if a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the Turing test. |
| | | |
| − | 阿兰 · 图灵的'''<font color=#32cd32>“礼貌惯例”</font>''' : 我们不需要决定一台机器是否可以“思考” ; 我们只需要决定一台机器是否可以像人一样聪明地行动。这个AI相关的哲学问题的答案成为了图灵测试的基础。 | + | ;''阿兰 · 图灵的'''<font color=#32cd32>“礼貌惯例”</font>''''': 我们不需要决定一台机器是否可以“思考”;我们只需要决定一台机器是否可以像人一样聪明地行动。这种处理AI相关哲学问题的方式成为了图灵测试的基础。 |
| | | |
| --[[用户:Thingamabob|Thingamabob]]([[用户讨论:Thingamabob|讨论]])polite convention未找到标准翻译 | | --[[用户:Thingamabob|Thingamabob]]([[用户讨论:Thingamabob|讨论]])polite convention未找到标准翻译 |
第1,707行: |
第1,684行: |
| The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." This conjecture was printed in the proposal for the Dartmouth Conference of 1956. | | The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." This conjecture was printed in the proposal for the Dartmouth Conference of 1956. |
| | | |
− | 达特茅斯学院提出: “可以通过准确地描述学习的每个方面或智能的任何特征,使得一台机器可以模拟学习和智能。”这个猜想被写在了1956年达特茅斯学院会议的提案中。
| + | ;''达特茅斯提案'': “学习的每个方面或智能的任何其他特征,都可以被精确地描述,从而使机器能够对其进行模拟。”这一猜想被写进了1956年达特茅斯会议的提案中。 |
| | | |
| ;''[[Physical symbol system|Newell and Simon's physical symbol system hypothesis]]'': "A physical symbol system has the necessary and sufficient means of general intelligent action." Newell and Simon argue that intelligence consists of formal operations on symbols.<ref name="Physical symbol system hypothesis"/> [[Hubert Dreyfus]] argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation and on having a "feel" for the situation rather than explicit symbolic knowledge. (See [[Dreyfus' critique of AI]].)<ref> | | ;''[[Physical symbol system|Newell and Simon's physical symbol system hypothesis]]'': "A physical symbol system has the necessary and sufficient means of general intelligent action." Newell and Simon argue that intelligence consists of formal operations on symbols.<ref name="Physical symbol system hypothesis"/> [[Hubert Dreyfus]] argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation and on having a "feel" for the situation rather than explicit symbolic knowledge. (See [[Dreyfus' critique of AI]].)<ref> |
| | | |
− | Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action." Newell and Simon argue that intelligence consists of formal operations on symbols. Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation and on having a "feel" for the situation rather than explicit symbolic knowledge. (See Dreyfus' critique of AI.)<ref> | + | Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action." Newell and Simon argue that intelligence consists of formal operations on symbols. Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation and on having a "feel" for the situation rather than explicit symbolic knowledge. (See Dreyfus' critique of AI.) |
− | 纽威尔和西蒙的物理符号系统假说: 物理符号系统具有通用智能行为的充要途径。纽威尔和西蒙认为智能由符号形式的运算组成。休伯特·德雷福斯)则相反地认为,人类的知识依赖于无意识的本能,而不是有意识的符号运算;依赖于对情境的“感觉”,而不是明确的符号知识。(参见德雷福斯对人工智能的批评。)
− | Dreyfus criticized the [[necessary and sufficient|necessary]] condition of the [[physical symbol system]] hypothesis, which he called the "psychological assumption": "The mind can be viewed as a device operating on bits of information according to formal rules." {{Harv|Dreyfus|1992|p=156}}</ref><ref name="Dreyfus' critique"/>
− | Dreyfus criticized the necessary condition of the physical symbol system hypothesis, which he called the "psychological assumption": "The mind can be viewed as a device operating on bits of information according to formal rules." </ref>
− | 德莱弗斯批评了他称之为“心理假设”物理符号系统假说的必要条件: “头脑可以被看作是一种按照形式化规则,用信息位运算的机器。”
| | | |
| + | ;''纽厄尔和西蒙的物理符号系统假说'': “物理符号系统具有产生通用智能行为的充分必要手段。”纽厄尔和西蒙认为,智能由对符号的形式化运算组成。<ref name="Physical symbol system hypothesis"/> 休伯特·德雷福斯则相反地认为,人类的专长依赖于无意识的本能,而不是有意识的符号运算;依赖于对情境的“感觉”,而不是明确的符号知识。(参见德雷福斯对人工智能的批评。)<ref>Dreyfus criticized the [[necessary and sufficient|necessary]] condition of the [[physical symbol system]] hypothesis, which he called the "psychological assumption": "The mind can be viewed as a device operating on bits of information according to formal rules." {{Harv|Dreyfus|1992|p=156}}</ref><ref name="Dreyfus' critique"/> |
| | | |
| | | |
| ;''Gödelian arguments'': [[Gödel]] himself,<ref name="Gödel himself"/> [[John Lucas (philosopher)|John Lucas]] (in 1961) and [[Roger Penrose]] (in a more detailed argument from 1989 onwards) made highly technical arguments that human mathematicians can consistently see the truth of their own "Gödel statements" and therefore have computational abilities beyond that of mechanical Turing machines.<ref name="The mathematical objection"/> However, some people do not agree with the "Gödelian arguments".<ref>{{cite web|author1=Graham Oppy|title=Gödel's Incompleteness Theorems|url=http://plato.stanford.edu/entries/goedel-incompleteness/#GdeArgAgaMec|website=[[Stanford Encyclopedia of Philosophy]]|accessdate=27 April 2016|date=20 January 2015|quote=These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail.|author1-link=Graham Oppy}}</ref><ref>{{cite book|author1=Stuart J. Russell|author2-link=Peter Norvig|author2=Peter Norvig|title=Artificial Intelligence: A Modern Approach|date=2010|publisher=[[Prentice Hall]]|location=Upper Saddle River, NJ|isbn=978-0-13-604259-4|edition=3rd|chapter=26.1.2: Philosophical Foundations/Weak AI: Can Machines Act Intelligently?/The mathematical objection|quote=even if we grant that computers have limitations on what they can prove, there is no evidence that humans are immune from those limitations.|title-link=Artificial Intelligence: A Modern Approach|author1-link=Stuart J. Russell}}</ref><ref>Mark Colyvan. An introduction to the philosophy of mathematics. [[Cambridge University Press]], 2012. From 2.2.2, 'Philosophical significance of Gödel's incompleteness results': "The accepted wisdom (with which I concur) is that the Lucas-Penrose arguments fail."</ref> | | ;''Gödelian arguments'': [[Gödel]] himself,<ref name="Gödel himself"/> [[John Lucas (philosopher)|John Lucas]] (in 1961) and [[Roger Penrose]] (in a more detailed argument from 1989 onwards) made highly technical arguments that human mathematicians can consistently see the truth of their own "Gödel statements" and therefore have computational abilities beyond that of mechanical Turing machines.<ref name="The mathematical objection"/> However, some people do not agree with the "Gödelian arguments".<ref>{{cite web|author1=Graham Oppy|title=Gödel's Incompleteness Theorems|url=http://plato.stanford.edu/entries/goedel-incompleteness/#GdeArgAgaMec|website=[[Stanford Encyclopedia of Philosophy]]|accessdate=27 April 2016|date=20 January 2015|quote=These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail.|author1-link=Graham Oppy}}</ref><ref>{{cite book|author1=Stuart J. Russell|author2-link=Peter Norvig|author2=Peter Norvig|title=Artificial Intelligence: A Modern Approach|date=2010|publisher=[[Prentice Hall]]|location=Upper Saddle River, NJ|isbn=978-0-13-604259-4|edition=3rd|chapter=26.1.2: Philosophical Foundations/Weak AI: Can Machines Act Intelligently?/The mathematical objection|quote=even if we grant that computers have limitations on what they can prove, there is no evidence that humans are immune from those limitations.|title-link=Artificial Intelligence: A Modern Approach|author1-link=Stuart J. Russell}}</ref><ref>Mark Colyvan. An introduction to the philosophy of mathematics. [[Cambridge University Press]], 2012. From 2.2.2, 'Philosophical significance of Gödel's incompleteness results': "The accepted wisdom (with which I concur) is that the Lucas-Penrose arguments fail."</ref> |
| | | |
− | Gödelian arguments: Gödel himself,
− | 哥德尔的观点
| | | |
− | 哥德尔本人、约翰·卢卡斯(在1961年)和罗杰·彭罗斯(在1989年以后的一个更详细的争论中)提出了高度技术性的论点,认为人类数学家始终可以看到他们自己的“'''<font color=#ff8000>哥德尔不完备定理 Gödel Satements</font>'''”的真实性,因此计算能力超过机械图灵机。然而,也有一些人不同意“哥德尔不完备定理”。
| + | ;''哥德尔式论证'':哥德尔本人<ref name="Gödel himself"/>、约翰·卢卡斯(在1961年)和罗杰·彭罗斯(在1989年起的更详细论证中)提出了高度技术性的论证,认为人类数学家能够一贯地看出他们自己的“'''<font color=#ff8000>哥德尔语句 Gödel statements</font>'''”为真,因此拥有超出机械图灵机的计算能力<ref name="The mathematical objection"/>。然而,也有一些人不同意这类“哥德尔式论证”。<ref>{{cite web|author1=Graham Oppy|title=Gödel's Incompleteness Theorems|url=http://plato.stanford.edu/entries/goedel-incompleteness/#GdeArgAgaMec|website=[[Stanford Encyclopedia of Philosophy]]|accessdate=27 April 2016|date=20 January 2015|quote=These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail.|author1-link=Graham Oppy}}</ref><ref>{{cite book|author1=Stuart J. Russell|author2-link=Peter Norvig|author2=Peter Norvig|title=Artificial Intelligence: A Modern Approach|date=2010|publisher=[[Prentice Hall]]|location=Upper Saddle River, NJ|isbn=978-0-13-604259-4|edition=3rd|chapter=26.1.2: Philosophical Foundations/Weak AI: Can Machines Act Intelligently?/The mathematical objection|quote=even if we grant that computers have limitations on what they can prove, there is no evidence that humans are immune from those limitations.|title-link=Artificial Intelligence: A Modern Approach|author1-link=Stuart J. Russell}}</ref><ref>Mark Colyvan. An introduction to the philosophy of mathematics. [[Cambridge University Press]], 2012. From 2.2.2, 'Philosophical significance of Gödel's incompleteness results': "The accepted wisdom (with which I concur) is that the Lucas-Penrose arguments fail."</ref> |
| | | |
| | | |
第1,741行: |
第1,705行: |
| The artificial brain argument: The brain can be simulated by machines and because brains are intelligent, simulated brains must also be intelligent; thus machines can be intelligent. Hans Moravec, Ray Kurzweil and others have argued that it is technologically feasible to copy the brain directly into hardware and software and that such a simulation will be essentially identical to the original. | | The artificial brain argument: The brain can be simulated by machines and because brains are intelligent, simulated brains must also be intelligent; thus machines can be intelligent. Hans Moravec, Ray Kurzweil and others have argued that it is technologically feasible to copy the brain directly into hardware and software and that such a simulation will be essentially identical to the original. |
| | | |
− | 人工大脑的观点: 大脑可以被机器模拟,因为大脑是智能的,模拟的大脑也必须是智能的; 因此机器可以是智能的。汉斯·莫拉维克、雷·库兹韦尔和其他人认为,技术层面直接将大脑复制到硬件和软件上是可行的,而且这些拷贝在本质上和原来的大脑是没有区别的。 | + | ;''人工大脑的观点'': 大脑可以被机器模拟;又因为大脑是智能的,被模拟的大脑也必然是智能的;因此机器可以是智能的。汉斯·莫拉维克、雷·库兹韦尔等人认为,在技术上将大脑直接复制到硬件和软件中是可行的,而且这样的模拟在本质上与原来的大脑没有区别。 |
| | | |
| | | |
第1,748行: |
第1,712行: |
| The AI effect: Machines are already intelligent, but observers have failed to recognize it. When Deep Blue beat Garry Kasparov in chess, the machine was acting intelligently. However, onlookers commonly discount the behavior of an artificial intelligence program by arguing that it is not "real" intelligence after all; thus "real" intelligence is whatever intelligent behavior people can do that machines still cannot. This is known as the AI Effect: "AI is whatever hasn't been done yet."<!----> | | The AI effect: Machines are already intelligent, but observers have failed to recognize it. When Deep Blue beat Garry Kasparov in chess, the machine was acting intelligently. However, onlookers commonly discount the behavior of an artificial intelligence program by arguing that it is not "real" intelligence after all; thus "real" intelligence is whatever intelligent behavior people can do that machines still cannot. This is known as the AI Effect: "AI is whatever hasn't been done yet."<!----> |
| | | |
− | AI效应: 机器本来就是智能的,但是观察者却没有意识到这一点。当深蓝在国际象棋比赛中击败加里 · 卡斯帕罗夫时,机器就在做出智能行为。然而,旁观者通常对AI程序的行为不屑一顾,认为它根本不是“真正的”智能; 因此,“真正的”智能就是人任何类能够做到但机器仍然做不到的智能行为。这就是众所周知的AI效应: “AI就是一切尚未完成的事情"。 | + | ;''AI效应'': 机器已经是智能的,但是观察者却没有意识到这一点。当深蓝在国际象棋比赛中击败加里 · 卡斯帕罗夫时,机器就是在做出智能行为。然而,旁观者通常对AI程序的行为不屑一顾,认为它根本不是“真正的”智能;因此,“真正的”智能就是任何人类能够做到但机器仍然做不到的智能行为。这就是众所周知的AI效应:“AI就是一切尚未完成的事情”。 |
| | | |
| | | |
第2,163行: |
第2,127行: |
| | | |
| 爱德华•弗雷德金认为,“人工智能是进化的下一个阶段”。早在1863年,塞缪尔•巴特勒的《机器中的达尔文》(Darwin among the Machines)就首次提出了这一观点,乔治•戴森在1998年的同名著作中对其进行了延伸。 | | 爱德华•弗雷德金认为,“人工智能是进化的下一个阶段”。早在1863年,塞缪尔•巴特勒的《机器中的达尔文》(Darwin among the Machines)就首次提出了这一观点,乔治•戴森在1998年的同名著作中对其进行了延伸。 |
| | | |
| == 经济学 Economics == | | == 经济学 Economics == |