| archive-url= https://web.archive.org/web/20080518185630/http://www.wired.com/wired/archive/11.12/love.html| archive-date= 18 May 2008 | url-status= live}}
 
 
</ref> '''Affective computing''' (also known as artificial emotional intelligence or emotion AI) is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science.<ref name="TaoTan" /> While some core ideas in the field may be traced back to early philosophical inquiries into emotion,<ref name=":0" /> the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing【3】 and her book ''Affective Computing''【4】, published by MIT Press【5】【6】. One of the motivations for the research is the ability to give machines emotional intelligence, including to simulate [[empathy]]. The machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions.
    
== Areas ==
 
    
===Detecting and recognizing emotional information===
 
 
Detecting emotional information usually begins with passive [[sensors]] that capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. For example, a video camera might capture facial expressions, body posture, and gestures, while a microphone might capture speech. Other sensors detect emotional cues by directly measuring [[physiological]] data, such as skin temperature and [[galvanic skin response|galvanic resistance]].<ref>{{cite journal
 
 
  | last = Garay
 
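As a rough illustration of this data-gathering step, the sketch below captures raw video frames and a short audio clip in Python using the OpenCV and sounddevice libraries. It is a minimal sketch under the assumption that a webcam and microphone are available; the function names, device index, and durations are illustrative choices rather than part of any standard affective-computing toolkit, and no interpretation of the captured data is performed.

<syntaxhighlight lang="python">
# Minimal sketch of passive data capture: video frames for facial expression
# and posture analysis, plus an audio clip for speech analysis.
# Device indices and durations are illustrative assumptions.
import cv2                  # pip install opencv-python
import numpy as np
import sounddevice as sd    # pip install sounddevice


def capture_video_frames(n_frames=30, device_index=0):
    """Grab a short burst of webcam frames (no interpretation yet)."""
    cap = cv2.VideoCapture(device_index)
    frames = []
    try:
        for _ in range(n_frames):
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame)  # raw BGR images, analogous to visual cues
    finally:
        cap.release()
    return frames


def capture_audio_clip(seconds=3, sample_rate=16000):
    """Record a short mono audio clip, analogous to the vocal cues a listener hears."""
    clip = sd.rec(int(seconds * sample_rate), samplerate=sample_rate, channels=1)
    sd.wait()  # block until the recording is finished
    return np.squeeze(clip)


if __name__ == "__main__":
    video = capture_video_frames()
    audio = capture_audio_clip()
    print(f"captured {len(video)} frames and {audio.shape[0]} audio samples")
</syntaxhighlight>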
    
Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. This is done using machine learning techniques that process different [[Modality (human–computer interaction)|modalities]], such as [[speech recognition]], [[natural language processing]], or [[face recognition|facial expression detection]].  The goal of most of these techniques is to produce labels that would match the labels a human perceiver would give in the same situation:  For example, if a person makes a facial expression furrowing their brow, then the computer vision system might be taught to label their face as appearing "confused" or as "concentrating" or "slightly negative" (as opposed to positive, which it might say if they were smiling in a happy-appearing way).  These labels may or may not correspond to what the person is actually feeling.
 
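As a deliberately simplified sketch of this labelling step, the snippet below trains an ordinary scikit-learn classifier to map pre-extracted facial feature vectors to perceiver-style labels such as "confused" or "concentrating". The feature vectors and training labels are synthetic placeholders standing in for the output of an upstream facial-analysis stage; real systems use far richer features and models.

<syntaxhighlight lang="python">
# Minimal sketch of the recognition step: map pre-extracted facial features
# to perceiver-style labels with a standard classifier.
# The features and labels below are synthetic placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

LABELS = ["confused", "concentrating", "slightly negative", "positive"]

# Placeholder training data: each row stands in for a feature vector that an
# upstream system (e.g. facial landmark or action-unit extraction) would produce.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))
y_train = rng.choice(LABELS, size=200)

classifier = make_pipeline(StandardScaler(), SVC(probability=True))
classifier.fit(X_train, y_train)


def label_expression(features):
    """Return the label a human perceiver might give for this feature vector."""
    features = np.asarray(features).reshape(1, -1)
    return classifier.predict(features)[0]


print(label_expression(rng.normal(size=16)))  # e.g. "concentrating"
</syntaxhighlight>

As noted above, the predicted label reflects how an observer might read the expression, which may or may not match what the person is actually feeling.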
    
===Emotion in machines===
 
 
Another area within affective computing is the design of computational devices proposed to exhibit either innate emotional capabilities or that are capable of convincingly simulating emotions. A more practical approach, based on current technological capabilities, is the simulation of emotions in conversational agents in order to enrich and facilitate interactivity between human and machine.<ref>{{Cite book|last=Heise|first=David|contribution=Enculturating agents with expressive role behavior|year=2004|title=Agent Culture: Human-Agent Interaction in a Mutlicultural World|editor1=Sabine Payr|pages=127–142|publisher=Lawrence Erlbaum Associates|editor2-first=Robert |editor2-last=Trappl}}</ref>
 
    
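The following toy sketch, again in Python, shows one way a conversational agent might condition its reply on a detected emotion label in order to enrich the interaction. The label set and response templates are invented for the example and are not drawn from any particular system.

<syntaxhighlight lang="python">
# Toy sketch of emotion simulation in a conversational agent: the reply is
# conditioned on the user's detected emotional state. The label set and
# templates are illustrative assumptions, not a standard.
RESPONSE_TEMPLATES = {
    "frustrated": "Sorry this is taking so long. Let's try a simpler route: {answer}",
    "confused":   "Let me rephrase that more slowly: {answer}",
    "positive":   "Great! Here is the next step: {answer}",
    "neutral":    "{answer}",
}


def empathic_reply(detected_emotion: str, answer: str) -> str:
    """Wrap the factual answer in wording adapted to the user's detected emotion."""
    template = RESPONSE_TEMPLATES.get(detected_emotion, RESPONSE_TEMPLATES["neutral"])
    return template.format(answer=answer)


print(empathic_reply("frustrated", "restart the router, then re-run setup."))
</syntaxhighlight>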
    
[[Marvin Minsky]], one of the pioneering computer scientists in [[artificial intelligence]], relates emotions to the broader issues of machine intelligence stating in ''[[The Emotion Machine]]'' that emotion is "not especially different from the processes that we call 'thinking.'"<ref>{{cite news|url=https://www.washingtonpost.com/wp-dyn/content/article/2006/12/14/AR2006121401554.html|title=Mind Over Matter|last=Restak|first=Richard|date=2006-12-17|work=The Washington Post|access-date=2008-05-13}}</ref>
 