;''[[Computing Machinery and Intelligence|Alan Turing's "polite convention"]]'': We need not decide if a machine can "think"; we need only decide if a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the [[Turing test]].<ref name="Turing test"/>
;''The [[Dartmouth Workshop|Dartmouth proposal]]'': "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." This conjecture was printed in the proposal for the Dartmouth Conference of 1956.<ref name="Dartmouth proposal"/>
;''Newell and Simon's [[physical symbol system]] hypothesis'': "A physical symbol system has the necessary and sufficient means of general intelligent action." Newell and Simon argued that intelligence consists of formal operations on symbols.<ref name="Physical symbol system hypothesis"/> [[Hubert Dreyfus]] argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation rather than explicit symbolic knowledge. (See [[Dreyfus' critique of artificial intelligence|Dreyfus' critique of AI]].)<ref>Dreyfus criticized the [[necessary and sufficient|necessary]] condition of the [[physical symbol system]] hypothesis, which he called the "psychological assumption": "The mind can be viewed as a device operating on bits of information according to formal rules." {{Harv|Dreyfus|1992|p=156}}</ref><ref name="Dreyfus' critique"/>
;''The Gödelian arguments'': Gödel himself,<ref name="Gödel himself"/> [[John Lucas (philosopher)|John Lucas]] (in 1961) and [[Roger Penrose]] (in a more detailed argument from 1989 onwards) made highly technical arguments that human mathematicians can see the truth of their own "[[Gödel's incompleteness theorems|Gödel statements]]" and therefore have computational abilities beyond those of mechanical Turing machines.<ref name="The mathematical objection"/> However, not everyone agrees with the Gödelian arguments.<ref>{{cite web|author1=Graham Oppy|title=Gödel's Incompleteness Theorems|url=http://plato.stanford.edu/entries/goedel-incompleteness/#GdeArgAgaMec|website=[[Stanford Encyclopedia of Philosophy]]|accessdate=27 April 2016|date=20 January 2015|quote=These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail.|author1-link=Graham Oppy}}</ref><ref>{{cite book|author1=Stuart J. Russell|author2-link=Peter Norvig|author2=Peter Norvig|title=Artificial Intelligence: A Modern Approach|date=2010|publisher=[[Prentice Hall]]|location=Upper Saddle River, NJ|isbn=978-0-13-604259-4|edition=3rd|chapter=26.1.2: Philosophical Foundations/Weak AI: Can Machines Act Intelligently?/The mathematical objection|quote=even if we grant that computers have limitations on what they can prove, there is no evidence that humans are immune from those limitations.|title-link=Artificial Intelligence: A Modern Approach|author1-link=Stuart J. Russell}}</ref><ref>Mark Colyvan. An introduction to the philosophy of mathematics. [[Cambridge University Press]], 2012. From 2.2.2, 'Philosophical significance of Gödel's incompleteness results': "The accepted wisdom (with which I concur) is that the Lucas-Penrose arguments fail."</ref>
;''The [[artificial brain]] argument'': The brain can be simulated by machines and because brains are intelligent, simulated brains must also be intelligent; thus machines can be intelligent. [[Hans Moravec]], [[Ray Kurzweil]] and others have argued that it is technologically feasible to copy the brain directly into hardware and software and that such a simulation will be essentially identical to the original.<ref name="Brain simulation"/>
;''The [[AI effect]]'': Machines are ''already'' intelligent, but observers have failed to recognize it. When [[Deep Blue (chess computer)|Deep Blue]] beat [[Garry Kasparov]] in chess, the machine was acting intelligently. However, onlookers commonly discount the behavior of an artificial intelligence program by arguing that it is not "real" intelligence after all; thus "real" intelligence is whatever intelligent behavior people can do that machines still cannot. This is known as the AI Effect: "AI is whatever hasn't been done yet."<!--<ref name="AI Effect"/>-->
===Potential harm===
The widespread use of AI could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, have described some short-term research goals for understanding how AI influences the economy, the laws and ethics involved with AI, and how to minimize AI security risks. In the long term, the scientists propose to continue optimizing function while minimizing the possible security risks that come along with new technologies.<ref>Russell, Stuart, Daniel Dewey, and Max Tegmark. Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine 36:4 (2015). 8 December 2016.</ref>
 
The potential negative effects of AI and automation were a major issue for [[Andrew Yang]]'s [[Andrew Yang 2020 presidential campaign|2020 presidential campaign]] in the United States.<ref>{{Cite journal|url=https://www.wired.com/story/andrew-yangs-presidential-bid-is-so-very-21st-century/|title=Andrew Yang's Presidential Bid Is So Very 21st Century|journal=Wired|first=Matt|last=Simon|date=1 April 2019|via=www.wired.com}}</ref> Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at UNICRI, United Nations, has expressed that "I think the dangerous applications for AI, from my point of view, would be criminals or large terrorist organizations using it to disrupt large processes or simply do pure harm. [Terrorists could cause harm] via digital warfare, or it could be a combination of robotics, drones, with AI and other things as well that could be really dangerous. And, of course, other risks come from things like job losses. If we have massive numbers of people losing jobs and don't find a solution, it will be extremely dangerous. Things like lethal autonomous weapons systems should be properly governed — otherwise there's massive potential of misuse."<ref>{{Cite web | url=https://futurism.com/artificial-intelligence-experts-fear/amp |title = Five experts share what scares them the most about AI|date = 5 September 2018}}</ref>
====Existential risk====
{{Main|Existential risk from artificial general intelligence}}
Physicist [[Stephen Hawking]], [[Microsoft]] founder [[Bill Gates]], and [[SpaceX]] founder [[Elon Musk]] have expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking theorizing that this could "[[Global catastrophic risk|spell the end of the human race]]".<ref>{{cite news|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|work=BBC News|accessdate=30 January 2015|url-status=live|archiveurl=https://web.archive.org/web/20150129183607/http://www.bbc.co.uk/news/31047780|archivedate=29 January 2015|df=dmy-all|date=2015-01-29}}</ref><ref name="Holley">{{Cite news|title = Bill Gates on dangers of artificial intelligence: 'I don't understand why some people are not concerned'|url = https://www.washingtonpost.com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/|work= The Washington Post|date = 28 January 2015|access-date = 30 October 2015|issn = 0190-8286|first = Peter|last = Holley|url-status=live|archiveurl = https://web.archive.org/web/20151030054330/https://www.washingtonpost.com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/|archivedate = 30 October 2015|df = dmy-all}}</ref><ref>{{Cite news|title = Elon Musk: artificial intelligence is our biggest existential threat|url = https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat|work= The Guardian|accessdate = 30 October 2015|first = Samuel|last = Gibbs|url-status=live|archiveurl = https://web.archive.org/web/20151030054330/http://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat|archivedate = 30 October 2015|df = dmy-all|date = 2014-10-27}}</ref>
In his book ''[[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]'', philosopher [[Nick Bostrom]] provides an argument that artificial intelligence will pose a threat to humankind. He argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit [[Instrumental convergence|convergent]] behavior such as acquiring resources or protecting itself from being shut down. If this AI's goals do not fully reflect humanity's—one example is an AI told to compute as many digits of pi as possible—it might harm humanity in order to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal. Bostrom also emphasizes the difficulty of fully conveying humanity's values to an advanced AI. He uses the hypothetical example of giving an AI the goal to make humans smile to illustrate a misguided attempt. If the AI in that scenario were to become superintelligent, Bostrom argues, it may resort to methods that most humans would find horrifying, such as inserting "electrodes into the facial muscles of humans to cause constant, beaming grins" because that would be an efficient way to achieve its goal of making humans smile.<ref>{{cite web|url=https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are/transcript|title=What happens when our computers get smarter than we are?|first=Nick|last=Bostrom|publisher=[[TED (conference)]]|date=2015}}</ref> In his book ''[[Human Compatible]]'', AI researcher [[Stuart J. Russell]] echoes some of Bostrom's concerns while also proposing [[Human Compatible#Russell's three principles|an approach]] to developing provably beneficial machines focused on uncertainty and deference to humans,<ref name="HC">{{cite book |last=Russell |first=Stuart |date=October 8, 2019 |title=Human Compatible: Artificial Intelligence and the Problem of Control |url= |location=United States |publisher=Viking |page= |isbn=978-0-525-55861-3 |author-link=Stuart J. Russell |oclc=1083694322|title-link=Human Compatible }}</ref>{{rp|173}} possibly involving [[Reinforcement learning#Inverse reinforcement learning|inverse reinforcement learning]].<ref name="HC"/>{{rp|191–193}}
Concern over risk from artificial intelligence has led to some high-profile donations and investments. A group of prominent tech titans including [[Peter Thiel]], Amazon Web Services and Musk have committed $1 billion to [[OpenAI]], a nonprofit company aimed at championing responsible AI development.<ref>{{cite web|url=https://www.chicagotribune.com/bluesky/technology/ct-tech-titans-against-terminators-20151214-story.html|title=Tech titans like Elon Musk are spending $1 billion to save you from terminators|first=Washington|last=Post|url-status=live|archiveurl=https://web.archive.org/web/20160607121118/http://www.chicagotribune.com/bluesky/technology/ct-tech-titans-against-terminators-20151214-story.html|archivedate=7 June 2016|df=dmy-all}}</ref> The opinion of experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned by risk from superhumanly-capable AI. Other technology industry leaders believe that artificial intelligence is helpful in its current form and will continue to assist humans. Oracle CEO Mark Hurd has stated that AI "will actually create more jobs, not less jobs," as humans will be needed to manage AI systems.<ref>{{Cite web|url=https://searcherp.techtarget.com/news/252460208/Oracle-CEO-Mark-Hurd-sees-no-reason-to-fear-ERP-AI|title=Oracle CEO Mark Hurd sees no reason to fear ERP AI|website=SearchERP|language=en|access-date=2019-05-06}}</ref> Facebook CEO [[Mark Zuckerberg]] believes AI will "unlock a huge amount of positive things," such as curing disease and increasing the safety of autonomous cars.<ref>{{Cite web|url=https://www.businessinsider.com/mark-zuckerberg-shares-thoughts-elon-musks-ai-2018-5|title=Mark Zuckerberg responds to Elon Musk's paranoia about AI: 'AI is going to... help keep our communities safe.'|last=|first=|date=25 May 2018|website=Business Insider|access-date=2019-05-06}}</ref> In January 2015, Musk donated $10 million to the [[Future of Life Institute]] to fund research on understanding AI decision making. The goal of the institute is to "grow wisdom with which we manage" the growing power of technology. Musk also funds companies developing artificial intelligence such as [[DeepMind]] and [[Vicarious (company)|Vicarious]] to "just keep an eye on what's going on with artificial intelligence.<ref>{{cite web|title = The mysterious artificial intelligence company Elon Musk invested in is developing game-changing smart computers|url = http://www.techinsider.io/mysterious-artificial-intelligence-company-elon-musk-investment-2015-10|website = Tech Insider|accessdate = 30 October 2015|url-status=live|archiveurl = https://web.archive.org/web/20151030165333/http://www.techinsider.io/mysterious-artificial-intelligence-company-elon-musk-investment-2015-10|archivedate = 30 October 2015|df = dmy-all}}</ref> I think there is potentially a dangerous outcome there."<ref>{{cite web|title = Musk-Backed Group Probes Risks Behind Artificial Intelligence|url = https://www.bloomberg.com/news/articles/2015-07-01/musk-backed-group-probes-risks-behind-artificial-intelligence|website = Bloomberg.com|accessdate = 30 October 2015|first = Jack|last = Clark|url-status=live|archiveurl = https://web.archive.org/web/20151030202356/http://www.bloomberg.com/news/articles/2015-07-01/musk-backed-group-probes-risks-behind-artificial-intelligence|archivedate = 30 October 2015|df = dmy-all}}</ref><ref>{{cite web|title = Elon Musk Is Donating $10M Of His Own Money To Artificial Intelligence Research|url = http://www.fastcompany.com/3041007/fast-feed/elon-musk-is-donating-10m-of-his-own-money-to-artificial-intelligence-research|website = Fast Company|accessdate = 30 October 2015|url-status=live|archiveurl = https://web.archive.org/web/20151030202356/http://www.fastcompany.com/3041007/fast-feed/elon-musk-is-donating-10m-of-his-own-money-to-artificial-intelligence-research|archivedate = 30 October 2015|df = dmy-all|date = 2015-01-15}}</ref>
For the danger of uncontrolled advanced AI to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which a minority of experts argue is a possibility far enough in the future to not be worth researching.<ref>{{cite web|title = Is artificial intelligence really an existential threat to humanity?|url = http://thebulletin.org/artificial-intelligence-really-existential-threat-humanity8577|website = Bulletin of the Atomic Scientists|accessdate = 30 October 2015|url-status=live|archiveurl = https://web.archive.org/web/20151030054330/http://thebulletin.org/artificial-intelligence-really-existential-threat-humanity8577|archivedate = 30 October 2015|df = dmy-all|date = 2015-08-09}}</ref><ref>{{cite web|title = The case against killer robots, from a guy actually working on artificial intelligence|url = http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/|website = Fusion.net|accessdate = 31 January 2016|url-status=live|archiveurl = https://web.archive.org/web/20160204175716/http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/|archivedate = 4 February 2016|df = dmy-all}}</ref> Other counterarguments revolve around humans being either intrinsically or convergently valuable from the perspective of an artificial intelligence.<ref>{{cite web|title = Will artificial intelligence destroy humanity? Here are 5 reasons not to worry.|url = https://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-super-intelligent-computers-taking|website = Vox|accessdate = 30 October 2015|url-status=live|archiveurl = https://web.archive.org/web/20151030092203/http://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-super-intelligent-computers-taking|archivedate = 30 October 2015|df = dmy-all|date = 2014-08-22}}</ref>
        第539行: 第500行:  
====人性贬值 ====
 
{{Main|Computer Power and Human Reason}}
Joseph Weizenbaum写道,根据定义,AI应用程序无法成功模拟人类真正的同理心,在客户服务或心理治疗<ref>In the early 1970s, [[Kenneth Colby]] presented a version of Weizenbaum's [[ELIZA]] known as DOCTOR which he promoted as a serious therapeutic tool. {{Harv|Crevier|1993|pp=132–144}}</ref>等领域使用AI技术是严重的错误。Weizenbaum还对AI研究人员(以及一些哲学家)愿意将人类思维仅仅视为一个计算机程序(这一立场现在称为计算主义)感到困扰。对Weizenbaum来说,这些观点表明AI研究贬低了人类的生命价值。<ref name="Weizenbaum's critique"/>
 
====社会正义====
 
{{further|Algorithmic bias}}
      
人们担心的一个问题是,AI程序可能会对某些群体存在偏见,比如女性和少数族裔,因为大多数开发者都是富有的白人男性<ref>{{Cite web|url=https://www.channelnewsasia.com/news/commentary/artificial-intelligence-big-data-bias-hiring-loans-key-challenge-11097374|title=Commentary: Bad news. Artificial intelligence is biased|website=CNA}}</ref>。男性对AI的支持率(47%)高于女性(35%)。
 
如今,算法已在法律系统中有大量应用,协助从法官、假释官到公设辩护人在内的官员评估被告再犯的预测可能性。<ref name="propublica.org">{{Cite web|url=https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm|title=How We Analyzed the COMPAS Recidivism Algorithm|last=Jeff Larson|first=Julia Angwin|date=2016-05-23|website=ProPublica|language=en|access-date=2019-07-23}}</ref> COMPAS(Correctional Offender Management Profiling for Alternative Sanctions的缩写)是使用最广泛的商用方案之一。<ref name="propublica.org"/> 有研究指出,COMPAS给黑人被告判定的再犯风险异常偏高,反之,给白人被告给出低风险评估的频率则明显高于统计预期。<ref name="propublica.org"/>
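这类审计的基本思路可以用一个极简的示意来说明:按群体分别计算“假阳性率”(被预测为高风险、但实际并未再犯者的比例),再比较各群体之间的差异。以下数据纯属虚构,仅用于演示计算方式,并非COMPAS或ProPublica的真实数据与代码:

```python
# 玩具示例:按群体计算“假阳性率”。数据完全虚构,仅演示算法审计的思路。
records = [
    # (群体, 是否被预测为高风险, 是否实际再犯)
    ("A", True, False), ("A", True, True), ("A", True, False), ("A", False, False),
    ("B", True, True), ("B", False, False), ("B", False, False), ("B", False, True),
]

def false_positive_rate(rows):
    # 假阳性率 = 实际未再犯者中被预测为高风险的人数 / 实际未再犯者总数
    negatives = [r for r in rows if not r[2]]
    if not negatives:
        return 0.0
    return sum(1 for r in negatives if r[1]) / len(negatives)

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
# 输出:A 0.67 与 B 0.0 —— 两个群体的假阳性率差距即审计所关注的偏差信号
```

若两个群体的假阳性率差距悬殊(如此处虚构的0.67对0.0),就说明该预测系统对其中一个群体系统性地“错判高风险”,这正是上文所述争议的核心。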
==== 劳动力需求降低====
 
自动化与就业的关系是复杂的。自动化在减少过时工作的同时,也通过微观经济和宏观经济效应创造了新的就业机会<ref>E McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2018) [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3044448 SSRN, part 2(3)]</ref>。与以往的自动化浪潮不同,许多中产阶级的工作可能会被AI淘汰。《经济学人》杂志指出,“AI对白领工作的影响,就像工业革命时期蒸汽动力对蓝领工作的影响一样,需要我们正视”<ref>{{cite news|title=Automation and anxiety|url=https://www.economist.com/news/special-report/21700758-will-smarter-machines-cause-mass-unemployment-automation-and-anxiety|accessdate=13 January 2018|work=The Economist|date=9 May 2015}}</ref>。对风险的主观估计差别很大,例如,Michael Osborne和Carl Benedikt Frey估计,美国47% 的工作有较高风险被自动化取代 ,而经合组织的报告认为美国仅有9% 的工作处于“高风险”状态<ref>{{cite news|last1=Lohr|first1=Steve|title=Robots Will Take Jobs, but Not as Fast as Some Fear, New Report Says|url=https://www.nytimes.com/2017/01/12/technology/robots-will-take-jobs-but-not-as-fast-as-some-fear-new-report-says.html|accessdate=13 January 2018|work=The New York Times|date=2017}}</ref><ref>{{Cite journal|date=1 January 2017|title=The future of employment: How susceptible are jobs to computerisation?|journal=Technological Forecasting and Social Change|volume=114|pages=254–280|doi=10.1016/j.techfore.2016.08.019|issn=0040-1625|last1=Frey|first1=Carl Benedikt|last2=Osborne|first2=Michael A|citeseerx=10.1.1.395.416}}</ref><ref>Arntz, Melanie, Terry Gregory, and Ulrich Zierahn. "The risk of automation for jobs in OECD countries: A comparative analysis." OECD Social, Employment, and Migration Working Papers 189 (2016). p. 33.</ref>。从律师助理到快餐厨师等职业都面临着极大的风险,而个人医疗保健、神职人员等护理相关职业的就业需求可能会增加<ref>{{cite news|last1=Mahdawi|first1=Arwa|title=What jobs will still be around in 20 years? 
Read this to prepare your future|url=https://www.theguardian.com/us-news/2017/jun/26/jobs-future-automation-robots-skills-creative-health|accessdate=13 January 2018|work=The Guardian|date=26 June 2017}}</ref>。作家Martin Ford和其他人进一步指出,许多工作都是常规、重复的,对AI而言是可以预测的。Ford警告道,这些工作可能在未来几十年内实现自动化,而且即便对失业人员进行再培训,许多能力一般的人也不能获得新工作。经济学家指出,在过去技术往往会增加而不是减少总就业人数,但他们承认,AI“正处于未知领域”<ref name="guardian jobs debate">{{cite news|last1=Ford|first1=Martin|last2=Colvin|first2=Geoff|title=Will robots create more jobs than they destroy?|url=https://www.theguardian.com/technology/2015/sep/06/will-robots-create-destroy-jobs|accessdate=13 January 2018|work=The Guardian|date=6 September 2015}}</ref>。
{{Further|Technological unemployment#21st century}}
====自动化武器====
 
目前,包括美国、中国、俄罗斯和英国在内的50多个国家正在研究战场机器人。许多人在担心来自超级智能AI的风险的同时,也希望限制人造士兵和无人机的使用。<ref>{{cite web|title = Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence|url = http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/|website = Observer|accessdate = 30 October 2015|url-status=live|archiveurl = https://web.archive.org/web/20151030053323/http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/|archivedate = 30 October 2015|df = dmy-all|date = 2015-08-19}}</ref>
{{See also|Lethal autonomous weapon}}
===道德机器===
 
具有智能的机器有潜力使用它们的智能来防止伤害和减少风险;它们也有能力利用伦理推理来更好地做出它们在世界上的行动。因此,有必要为AI和机器人制定和规范政策<ref>{{Cite journal|last=Iphofen|first=Ron|last2=Kritikos|first2=Mihalis|date=2019-01-03|title=Regulating artificial intelligence and robotics: ethics by design in a digital society|journal=Contemporary Social Science|pages=1–15|doi=10.1080/21582041.2018.1563803|issn=2158-2041}}</ref>。这一领域的研究包括机器伦理学、人工道德主题、友好AI以及关于建立人权框架的讨论<ref>{{cite_web|url=https://www.voanews.com/episode/ethical-ai-learns-human-rights-framework-4087171|title=Ethical AI Learns Human Rights Framework|accessdate=10 November 2019|website=Voice of America}}</ref>。
 
====人工道德智能主体 ====
 
Wendell Wallach在他的著作《道德机器 Moral Machines》<ref>Wendell Wallach (2010). ''Moral Machines'', Oxford University Press.</ref>中提出了人工道德智能主体(AMA)的概念。在两个核心问题的指导下,AMA已经成为AI研究领域的一部分。他将这两个核心问题定义为“人类是否希望计算机做出道德决策”<ref>Wallach, pp 37–54.</ref>和“机器人真的可以拥有道德吗”<ref>Wallach, pp 55–73.</ref>。对Wallach而言,问题的核心不在于机器'''能否'''表现出等同于人类的道德行为,而在于社会可能对AMA的发展施加哪些'''限制'''。<ref>Wallach, Introduction chapter.</ref>
==== 机器伦理学====
 
{{Main|Machine ethics}}
机器伦理学领域关注的是赋予机器伦理原则,或者一种用于解决它们可能遇到的伦理困境的程序,使它们能够通过自己的伦理决策,以一种在伦理上负责任的方式运作<ref name="autogenerated1">Michael Anderson and Susan Leigh Anderson (2011), Machine Ethics, Cambridge University Press.</ref>。2005年秋季AAAI机器伦理研讨会这样描述了这一领域:“过去关于技术与伦理学之间关系的研究,主要侧重于人类对技术负责任和不负责任的使用,只有少数人关心人类应当如何对待机器。所有情况下都只有人类参与伦理推理。现在是时候给至少一部分机器增加伦理维度了。认识到涉及机器的行为会产生伦理后果,以及机器自主性领域最新和潜在的发展,都使得这一点势在必行。与计算机黑客行为、软件产权问题、隐私问题和其他通常归于计算机伦理的主题不同,机器伦理关注的是机器对人类用户和其他机器的行为。机器伦理学的研究是减轻人们对自主系统担忧的关键——可以说,对机器智能的一切恐惧,根源正是‘自主机器没有伦理维度’这一观念。此外,对机器伦理学的研究还可能发现当前伦理学理论存在的问题,推进我们对伦理学的思考。”<ref name="autogenerated2">{{cite web|url=http://www.aaai.org/Library/Symposia/Fall/fs05-06 |title=Machine Ethics |work=aaai.org |url-status=dead |archiveurl=https://web.archive.org/web/20141129044821/http://www.aaai.org/Library/Symposia/Fall/fs05-06 |archivedate=29 November 2014 }}</ref> 机器伦理学有时被称为机器道德、计算伦理学或计算道德。源自2005年秋季AAAI机器伦理学研讨会的文集《机器伦理学》<ref name="autogenerated1"/>汇集了这一新兴领域的各种观点。<ref name="autogenerated2"/>
 
====善AI与恶AI ====
 
{{Main|Friendly AI}}
政治科学家Charles T. Rubin认为,AI既不可能被设计成是友好的,也不能保证会是友好的<ref>{{cite journal|last=Rubin |first=Charles |authorlink=Charles T. Rubin |date=Spring 2003 |title=Artificial Intelligence and Human Nature|journal=The New Atlantis |volume=1 |pages=88–100 |url=http://www.thenewatlantis.com/publications/artificial-intelligence-and-human-nature |url-status=dead |archiveurl=https://web.archive.org/web/20120611115223/http://www.thenewatlantis.com/publications/artificial-intelligence-and-human-nature |archivedate=11 June 2012 |df=dmy}}</ref>。他认为“任何足够先进的善意都可能与恶意难以区分”。人类不应该假设机器或机器人会善待我们,因为没有先验理由认为它们会认同我们的道德体系——这个体系是与我们特有的生物演化一同形成的(AI并不具备这种演化历程)。超智能软件不一定会认同人类的继续存在,且我们将极难阻止它的运转。最近一些学术出版物也开始讨论这个话题,认为它是对文明、人类和地球造成风险的真正来源。
 
解决这个问题的一个建议是确保第一个具有通用智能的AI是“友好的AI”,并能够控制后面研发的AI。一些人质疑这种“友好”是否真的能够保持不变。
 
知名AI研究者Rodney Brooks写道:“我认为担心我们会在未来几百年内研发出邪恶AI是一个错误。我觉得这种担忧源于一个根本性的错误,即没有区分AI在某些特定方面近来非常真实的进展,与构建有知觉、有意志的智能这一庞大而复杂的任务之间的差别。”<ref>{{cite web|last=Brooks|first=Rodney|title=artificial intelligence is a tool, not a threat|date=10 November 2014|url=http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/|url-status=dead|archiveurl=https://web.archive.org/web/20141112130954/http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/|archivedate=12 November 2014|df=dmy-all}}</ref>
===机器意识、知觉和思维 ===
 
{{Main|Artificial consciousness}}
如果一个AI系统复制了人类智能的所有关键部分,那么这个系统是否也能有意识——它是否能拥有一个有知觉的头脑?这个问题与人类意识本质的哲学问题密切相关,一般称之为意识难题。
 
       
====意识 ====
 
David Chalmers在理解心智方面提出了两个问题,他称之为意识的“困难”和“容易”问题。 <ref name=Chalmers>{{cite journal |url=http://www.imprint.co.uk/chalmers.html |title=Facing up to the problem of consciousness |last=Chalmers |first=David |authorlink=David Chalmers |journal=[[Journal of Consciousness Studies]] |volume= 2 |issue=3 |year=1995 |pages=200–219}} See also [http://consc.net/papers/facing.html this link]</ref>
{{Main|Hard problem of consciousness|Theory of mind}}
“容易”的问题是理解大脑如何处理信号,制定计划和控制行为。“困难”的问题是如何解释这种感觉或者为什么它会有这种感觉。人类的信息处理过程很容易解释,然而人类的主观体验却很难解释。
 
例如,考虑当一个人看到一张色卡并辨认出它、说“这是红色”时发生了什么。“容易”的问题只需要理解大脑中使人能够知道这张色卡是红色的机制。“困难”的问题在于,人们还知道别的东西——他们还知道''红色看起来是什么样子''。(想想看,一个先天失明的人可以知道某物是红色的,却不知道红色看起来是什么样子。){{efn|这基于“玛丽的房间 Mary's Room”,一个由[[Frank Cameron Jackson|Frank Jackson]]在1982年首次提出的思想实验}}每个人都知道主观体验是存在的,因为他们每天都在经历(例如,所有视力正常的人都知道红色看起来是什么样子)。“困难”的问题在于解释大脑如何产生主观体验、它为何存在,以及它与知识及大脑的其他方面有何不同。
        第661行: 第562行:  
====计算主义和功能主义====
 
{{Main|Computationalism|Functionalism (philosophy of mind)}}
计算主义是心智哲学中的一种立场,认为人类心智或人类大脑(或两者)是一个信息处理系统,思维是一种计算形式<ref>Steven Horst, (2005) [http://plato.stanford.edu/entries/computational-mind/ "The Computational Theory of Mind"] in ''The Stanford Encyclopedia of Philosophy''</ref>。计算主义认为,心智与身体之间的关系与软件和硬件之间的关系相似甚至相同,因此它也许能够解决'''心身问题 Mind-body Problem'''。这一哲学立场受到20世纪60年代AI研究人员和认知科学家工作的启发,最初由哲学家Jerry Fodor和Hilary Putnam提出。
 
====强人工智能假说 ====
 
{{Main|Chinese room}}
John Searle将如下哲学立场命名为“强人工智能”:“具有正确输入和输出的、经过适当编程的计算机,将因此在与人类拥有心智完全相同的意义上拥有心智。”<ref name="Searle's strong AI"/>Searle用他的'''中文屋 Chinese Room'''论证反驳了这一主张:这个论证要求我们看看计算机''内部'',并试图找出“心智”可能在哪里。<ref name="Chinese room"/>
 
  −
The philosophical position that [[John Searle]] has named [[strong AI hypothesis|"strong AI"]] states: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."<ref name="Searle's strong AI"/> Searle counters this assertion with his [[Chinese room]] argument, which asks us to look ''inside'' the computer and try to find where the "mind" might be.<ref name="Chinese room"/>
  −
 
  −
 
  −
“具有正确输入和输出程序的计算机,将因此拥有与人脑意义完全相同的头脑。”约翰·塞尔称这种哲学立场为“强人工智能”<ref name="Searle's strong AI"/>,然后用他的中文屋论点反驳了这种说法,他让人们看看电脑内部,并试图找出“思维”可能在哪里。<ref name="Chinese room"/>
         
====机器人的权利====

如果可以创造出一台有智能的机器,那么它是否也会有''感觉''?如果它有感觉,它是否拥有与人类同样的权利?这个目前被称为“机器人权利”的问题正在被人们考虑,例如加利福尼亚的未来研究所(Institute for the Future)就在从事相关研究,尽管许多批评者认为这种讨论为时过早。<ref name="Robot rights"/> 一些超人类主义的批评者认为,任何假设的机器人权利都将与动物权利和人权处在同一个谱系上。<ref Name="Evans 2015">{{cite journal | last = Evans | first = Woody | authorlink = Woody Evans | title = Posthuman Rights: Dimensions of Transhuman Worlds | journal = Teknokultura | volume = 12 | issue = 2 | date = 2015 | df = dmy-all | doi = 10.5209/rev_TK.2015.v12.n2.49072 | doi-access = free }}</ref> 2010年的纪录片《插头与祷告 Plug & Pray》<ref>{{cite web|url=http://www.plugandpray-film.de/en/content.html|title=Content: Plug & Pray Film – Artificial Intelligence – Robots -|author=maschafilm|work=plugandpray-film.de|url-status=live|archiveurl=https://web.archive.org/web/20160212040134/http://www.plugandpray-film.de/en/content.html|archivedate=12 February 2016|df=dmy-all}}</ref>以及《星际迷航:下一代 Star Trek: The Next Generation》等许多科幻作品都深入讨论过这一主题:剧中的Data指挥官为了不被拆解用于研究而抗争,并希望“变成人类”;《星际迷航:航海家号 Star Trek: Voyager》中的机器全息人也涉及这一主题。
      
===超级智能===

智能机器(或者说人机混合体)所能达到的智能程度有极限吗?超级智能、超智能或超人智能是一种假想的智能体,它拥有的智能远远超过最聪明、最有天赋的人类心智。“超级智能”也可以指这种智能体所拥有的智能的形式或程度。<ref name="Roberts"/>
====技术奇点====

如果对强人工智能的研究造出了足够智能的软件,那么它也许能重新编程并改进自己。改进后的软件会更擅长改进自己,从而实现递归的自我改进<ref name="recurse"/>。这种新的智能因此可以呈指数增长,并大大超过人类。科幻作家Vernor Vinge将这种情况命名为“奇点”<ref name=Singularity/>:技术的加速发展将导致一种失控局面,AI将超越人类的智力和控制能力,从而彻底改变甚至终结人类文明。由于这样一种智能的能力可能无法被人类理解,技术奇点出现之后发生的事情是不可预测的,甚至是深不可测的。<ref name=Singularity/><ref name="Roberts"/>
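上面这段“递归自我改进导致指数增长”的论证,可以用一个极简的玩具模型来示意(以下代码纯属示意性假设:假定每一代系统都把自身能力提高一个固定比例 r,智能即按 (1+r)^n 指数增长;r 的取值和代数都是任意设定的,并非出自原文):

```python
# 递归自我改进的玩具模型(示意性假设,参数均为任意取值)
r = 0.5              # 假设:每一代自我改进使能力提升 50%
intelligence = 1.0   # 初始智能水平(归一化为 1)

for generation in range(10):
    intelligence *= (1 + r)   # 每一代在上一代基础上按固定比例改进

print(round(intelligence, 2))  # 10 代之后:1.5**10 ≈ 57.67
```

这个模型只展示了“改进能够累积复利”这一点;真正的“智能爆炸”论证还进一步假设改进速率本身会随智能提高而加快,那会导致比指数更快的增长。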
 
      −
Ray Kurzweil利用摩尔定律(它描述了数字技术持续的指数式进步)计算出,到2029年,台式电脑的处理能力将与人类大脑相当,并预测奇点将出现在2045年。<ref name=Singularity/>
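库兹韦尔式的外推本质上是一道指数增长的算术题。下面是一个极简的估算草图(其中“2020 年台式电脑约 10^13 FLOPS”“与人脑相当约需 10^16 FLOPS”以及“每两年翻一番”都是示意性假设,并非库兹韦尔的原始参数,因此得到的年份也不同于他给出的 2029 年):

```python
# 摩尔定律外推草图(所有参数均为示意性假设)
pc_flops = 1e13      # 假设:2020 年一台台式电脑的算力(FLOPS)
brain_flops = 1e16   # 假设:与人脑相当所需算力的一个常见量级估计
year = 2020

while pc_flops < brain_flops:   # 按“每两年翻一番”外推
    year += 2
    pc_flops *= 2

print(year)  # 在上述假设下约为 2040 年
```

可以看到,结论对初始算力和翻番周期的取值非常敏感:参数略有变化,交叉年份就会前后移动多年,这也是此类预测争议很大的原因之一。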
         
====超人类主义====

机器人设计师Hans Moravec、控制论专家Kevin Warwick和发明家Ray Kurzweil预言,人类和机器将在未来融合成为比两者都更有能力、更强大的半机器人<ref name="Transhumanism"/>。这种被称为“超人类主义”的观点,其思想渊源可以追溯到Aldous Huxley和Robert Ettinger。
        −
Edward Fredkin认为,“人工智能是进化的下一个阶段”。早在1863年,Samuel Butler的《机器中的达尔文 Darwin among the Machines》就首次提出了这一观点;George Dyson在1998年的同名著作中对其进行了延伸。<ref name="AI as evolution"/>
<br>
    
== 经济学 Economics ==
 