At present, more than 50 countries, including the United States, China, Russia, and the United Kingdom, are researching battlefield robots. Many of those worried about risk from superintelligent AI also want to limit the use of artificial soldiers and drones.<ref>{{cite web|title = Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence|url = http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/|website = Observer|accessdate = 30 October 2015|url-status=live|archiveurl = https://web.archive.org/web/20151030053323/http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/|archivedate = 30 October 2015|df = dmy-all|date = 2015-08-19}}</ref>
=== Ethical machines ===
Machines with intelligence have the potential to use that intelligence to prevent harm and minimize risk; they may be able to use [[ethics|ethical reasoning]] to better choose their actions in the world. As such, there is a need for policies that govern and regulate artificial intelligence and robotics.<ref>{{Cite journal|last=Iphofen|first=Ron|last2=Kritikos|first2=Mihalis|date=2019-01-03|title=Regulating artificial intelligence and robotics: ethics by design in a digital society|journal=Contemporary Social Science|pages=1–15|doi=10.1080/21582041.2018.1563803|issn=2158-2041}}</ref> Research in this area includes [[machine ethics]], [[artificial moral agents]], and [[friendly AI]], and discussion is under way on building a [[human rights]] framework.<ref>{{cite web|url=https://www.voanews.com/episode/ethical-ai-learns-human-rights-framework-4087171|title=Ethical AI Learns Human Rights Framework|accessdate=10 November 2019|website=Voice of America}}</ref>
 
==== Artificial moral agents ====
Wendell Wallach introduced the concept of artificial moral agents (AMAs) in his book ''Moral Machines''. For Wallach, AMAs have become part of the research landscape of artificial intelligence, guided by two central questions, which he identifies as "Does Humanity Want Computers Making Moral Decisions?" and "Can (Ro)bots Really Be Moral?". For Wallach, the question is centered not on whether machines can demonstrate the equivalent of moral behavior, but on the constraints which society may place on the development of AMAs.
 
==== Machine ethics ====
    
{{Main|Machine ethics}}
 
The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making.<ref name="autogenerated1">Michael Anderson and Susan Leigh Anderson (2011), Machine Ethics, Cambridge University Press.</ref> The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: "Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines. In all cases, only human beings have engaged in ethical reasoning. The time has come for adding an ethical dimension to at least some machines. Recognition of the ethical ramifications of behavior involving machines, as well as recent and potential developments in machine autonomy, necessitate this. In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines. Research in machine ethics is key to alleviating concerns with autonomous systems—it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence. Further, investigation of machine ethics could enable the discovery of problems with current ethical theories, advancing our thinking about Ethics."<ref name="autogenerated2">{{cite web|url=http://www.aaai.org/Library/Symposia/Fall/fs05-06 |title=Machine Ethics |work=aaai.org |url-status=dead |archiveurl=https://web.archive.org/web/20141129044821/http://www.aaai.org/Library/Symposia/Fall/fs05-06 |archivedate=29 November 2014 }}</ref> Machine ethics is sometimes referred to as machine morality, computational ethics or computational morality.<ref name="autogenerated1"/> A variety of perspectives on this nascent field can be found in the collected edition ''Machine Ethics'' that stems from the AAAI Fall 2005 Symposium on Machine Ethics.<ref name="autogenerated2"/>
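The "procedure" this description alludes to can be made concrete with a toy example. The sketch below is a minimal, purely hypothetical constraint-filtering decision loop, one common shape such proposals take: candidate actions are screened against hard ethical rules before the survivors are ranked by expected utility. The <code>Action</code> fields, the two rules, and every name here are illustrative assumptions, not the design of any published machine-ethics system.

<syntaxhighlight lang="python">
# Purely illustrative sketch of a constraint-filtering "ethical governor".
# All names, fields, and rules are hypothetical assumptions for illustration.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    name: str
    expected_utility: float
    harms_human: bool = False      # flags assumed to be set by a perception system
    violates_privacy: bool = False

# Each predicate returns True when the action is ethically permissible.
ETHICAL_CONSTRAINTS: List[Callable[[Action], bool]] = [
    lambda a: not a.harms_human,       # hard rule: never harm a person
    lambda a: not a.violates_privacy,  # hard rule: never leak private data
]

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Discard impermissible actions, then maximize utility over the rest."""
    permissible = [a for a in candidates
                   if all(ok(a) for ok in ETHICAL_CONSTRAINTS)]
    if not permissible:
        return None  # refuse to act rather than violate a constraint
    return max(permissible, key=lambda a: a.expected_utility)

options = [
    Action("shortcut_through_crowd", expected_utility=0.9, harms_human=True),
    Action("wait_for_clear_path", expected_utility=0.6),
]
best = choose_action(options)
print(best.name if best else "no permissible action")  # -> wait_for_clear_path
</syntaxhighlight>

Returning <code>None</code> when every candidate fails encodes the conservative choice of inaction over constraint violation; real proposals differ on whether such deadlocks should instead defer to a human operator.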
 
==== Malevolent and friendly AI ====
{{Main|Friendly AI}}
 
Political scientist [[Charles T. Rubin]] believes that AI can be neither designed nor guaranteed to be benevolent.<ref>{{cite journal|last=Rubin |first=Charles |authorlink=Charles T. Rubin |date=Spring 2003 |title=Artificial Intelligence and Human Nature|journal=The New Atlantis |volume=1 |pages=88–100 |url=http://www.thenewatlantis.com/publications/artificial-intelligence-and-human-nature |url-status=dead |archiveurl=https://web.archive.org/web/20120611115223/http://www.thenewatlantis.com/publications/artificial-intelligence-and-human-nature |archivedate=11 June 2012 |df=dmy}}</ref> He argues that "any sufficiently advanced benevolence may be indistinguishable from malevolence." Humans should not assume machines or robots would treat us favorably because there is no ''a priori'' reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share). Hyper-intelligent software may not necessarily decide to support the continued existence of humanity and would be extremely difficult to stop. This topic has also recently begun to be discussed in academic publications as a real source of risks to civilization, humans, and planet Earth.
 
Leading AI researcher Rodney Brooks writes, "I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI and the enormity and complexity of building sentient volitional intelligence."<ref>{{cite web|last=Brooks|first=Rodney|title=artificial intelligence is a tool, not a threat|date=10 November 2014|url=http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/|url-status=dead|archiveurl=https://web.archive.org/web/20141112130954/http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/|archivedate=12 November 2014|df=dmy-all}}</ref>
 
=== Machine consciousness, sentience and mind ===