== Areas ==
 
===Detecting and recognizing emotional information===
 
Detecting emotional information usually begins with passive sensors that capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. For example, a video camera might capture facial expressions, body posture, and gestures, while a microphone might capture speech. Other sensors detect emotional cues by directly measuring physiological data, such as skin temperature and galvanic resistance.
 
Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. This is done using machine learning techniques that process different [[Modality (human–computer interaction)|modalities]], such as [[speech recognition]], [[natural language processing]], or [[face recognition|facial expression detection]].  The goal of most of these techniques is to produce labels that would match the labels a human perceiver would give in the same situation:  For example, if a person makes a facial expression furrowing their brow, then the computer vision system might be taught to label their face as appearing "confused" or as "concentrating" or "slightly negative" (as opposed to positive, which it might say if they were smiling in a happy-appearing way).  These labels may or may not correspond to what the person is actually feeling.
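
A minimal sketch of this labeling idea, with invented feature vectors and label names standing in for real annotated data:

<syntaxhighlight lang="python">
# Illustrative sketch: a classifier learns to reproduce the labels human
# annotators assigned to feature vectors extracted from some modality.
# The features and label names below are invented placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: rows could be facial-geometry measurements
# (e.g. brow height, mouth curvature); labels are human perceiver judgments.
X_train = rng.normal(size=(200, 4))
y_train = rng.choice(["confused", "concentrating", "slightly negative"], size=200)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# The output is a perceived label, which may or may not match what the
# person is actually feeling.
print(clf.predict(rng.normal(size=(1, 4))))
</syntaxhighlight>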
 
Another area within affective computing is the design of computational devices proposed to exhibit either innate emotional capabilities or that are capable of convincingly simulating emotions. A more practical approach, based on current technological capabilities, is the simulation of emotions in conversational agents in order to enrich and facilitate interactivity between human and machine.
 
[[Marvin Minsky]], one of the pioneering computer scientists in [[artificial intelligence]], relates emotions to the broader issues of machine intelligence stating in ''[[The Emotion Machine]]'' that emotion is "not especially different from the processes that we call 'thinking.'"<ref>{{cite news|url=https://www.washingtonpost.com/wp-dyn/content/article/2006/12/14/AR2006121401554.html|title=Mind Over Matter|last=Restak|first=Richard|date=2006-12-17|work=The Washington Post|access-date=2008-05-13}}</ref>
 
In psychology, cognitive science, and neuroscience, there have been two main approaches for describing how humans perceive and classify emotion: continuous or categorical. The continuous approach tends to use dimensions such as negative vs. positive, calm vs. aroused.
 
The categorical approach tends to use discrete classes such as happy, sad, angry, fearful, surprise, disgust.  Different kinds of machine learning regression and classification models can be used for having machines produce continuous or discrete labels.  Sometimes models are also built that allow combinations across the categories, e.g. a happy-surprised face or a fearful-surprised face.<ref>{{Cite journal|title = A model of the perception of facial expressions of emotion by humans: Research overview and perspectives|last1 = Martinez|first1 = Aleix|last2 = Du|first2 = Shichuan|date = 2012|journal = The Journal of Machine Learning Research |volume=13 |issue=1 |pages=1589–1608|url=https://www.jmlr.org/papers/volume13/martinez12a/martinez12a.pdf}}</ref>
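
For illustration, a small sketch (placeholder features and labels, not a real corpus) contrasting a regression model that outputs a continuous valence score with a classifier that outputs a discrete category:

<syntaxhighlight lang="python">
# Illustrative sketch: the same placeholder features support either a
# continuous label (regression) or a discrete label (classification).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))                               # placeholder features
valence = rng.uniform(-1, 1, size=300)                      # continuous label
category = rng.choice(["happy", "sad", "angry"], size=300)  # discrete label

continuous_model = LinearRegression().fit(X, valence)
categorical_model = LogisticRegression(max_iter=1000).fit(X, category)

x_new = rng.normal(size=(1, 6))
print(continuous_model.predict(x_new))    # e.g. a valence score near [-1, 1]
print(categorical_model.predict(x_new))   # e.g. "sad"
</syntaxhighlight>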
 
Various changes in the autonomic nervous system can indirectly alter a person's speech, and affective technologies can leverage this information to recognize emotion. For example, speech produced in a state of fear, anger, or joy becomes fast, loud, and precisely enunciated, with a higher and wider range in pitch, whereas emotions such as tiredness, boredom, or sadness tend to generate slow, low-pitched, and slurred speech.<ref>Breazeal, C. and Aryananda, L. Recognition of affective communicative intent in robot-directed speech. Autonomous Robots 12 (1), 2002. pp. 83–104.</ref> Some emotions have been found to be more easily computationally identified, such as anger or approval.
 
Emotional speech processing technologies recognize the user's emotional state using computational analysis of speech features. Vocal parameters and [[prosody (linguistics)|prosodic]] features such as pitch variables and speech rate can be analyzed through pattern recognition techniques.<ref name="Dellaert">Dellaert, F., Polzin, T., and Waibel, A., "Recognizing Emotion in Speech", in Proc. of ICSLP 1996, Philadelphia, PA, pp. 1970–1973, 1996</ref><ref name="Lee">Lee, C.M.; Narayanan, S.; Pieraccini, R., Recognition of Negative Emotion in the Human Speech Signals, Workshop on Auto. Speech Recognition and Understanding, Dec 2001</ref>
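
As an illustration, vocal parameters such as those above might be extracted as follows, assuming the librosa audio library and a hypothetical recording utterance.wav:

<syntaxhighlight lang="python">
# Illustrative sketch: pitch and energy statistics extracted from speech,
# assuming the librosa library and a hypothetical file "utterance.wav".
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)  # hypothetical recording

f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)    # frame-wise pitch track (Hz)
rms = librosa.feature.rms(y=y)[0]                # frame-wise energy envelope

features = np.array([
    f0.mean(), f0.std(),                 # pitch level and range
    rms.mean(), rms.std(),               # loudness statistics
    librosa.get_duration(y=y, sr=sr),    # duration, a crude speech-rate proxy
])
print(features)  # a small vector a pattern-recognition model could consume
</syntaxhighlight>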
 
====Algorithms====
 
The process of speech/text affect detection requires the creation of a reliable [[database]], [[knowledge base]], or [[vector space model]].<ref name="Osgood75"/>
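
As a toy illustration of the vector-space-model option, the sketch below embeds a tiny invented corpus as TF-IDF vectors and labels new text by its nearest neighbor:

<syntaxhighlight lang="python">
# Illustrative sketch: a tiny invented corpus is embedded as TF-IDF vectors
# and new text is labeled by its nearest neighbor in that vector space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

corpus = ["i am so happy today", "this is wonderful news",
          "i feel terrible", "everything went wrong"]
labels = ["joy", "joy", "sadness", "sadness"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)

knn = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(knn.predict(vectorizer.transform(["what wonderful weather"])))  # "joy"
</syntaxhighlight>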
 
====Databases====
 
The vast majority of present systems are data-dependent. This creates one of the biggest challenges in detecting emotions based on speech, as it implicates choosing an appropriate database used to train the classifier. Most of the currently possessed data was obtained from actors and is thus a representation of archetypal emotions. Those so-called acted databases are usually based on the Basic Emotions theory (by [[Paul Ekman]]), which assumes the existence of six basic emotions (anger, fear, disgust, surprise, joy, sadness), the others simply being a mix of the former ones.<ref name="Ekman, P. 1969">Ekman, P. & Friesen, W. V (1969). [http://www.communicationcache.com/uploads/1/0/8/8/10887248/the-repertoire-of-nonverbal-behavior-categories-origins-usage-and-coding.pdf The repertoire of nonverbal behavior: Categories, origins, usage, and coding]. Semiotica, 1, 49–98.</ref> Nevertheless, these still offer high audio quality and balanced classes (although often too few), which contribute to high success rates in recognizing emotions.
 
====Speech descriptors====
 
The complexity of the affect recognition process increases with the number of classes (affects) and speech descriptors used within the classifier. It is, therefore, crucial to select only the most relevant features in order to assure the ability of the model to successfully identify emotions, as well as increasing the performance, which is particularly significant to real-time detection. The range of possible choices is vast, with some studies mentioning the use of over 200 distinct features.<ref name="Scherer-2010-p241"/> It is crucial to identify those that are redundant and undesirable in order to optimize the system and increase the success rate of correct emotion detection. The most common speech characteristics are categorized into the following groups.<ref name="Steidl-2011"/><ref name="Scherer-2010-p243"/>
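
The selection step might be sketched as follows, where a univariate test keeps the 20 descriptors (here random stand-ins) most informative about the emotion label:

<syntaxhighlight lang="python">
# Illustrative sketch: a univariate statistical test keeps only the k
# descriptors most informative about the emotion label. The 200 candidate
# descriptors here are random stand-ins for real speech features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 200))                  # 200 candidate descriptors
y = rng.choice(["anger", "joy", "sadness"], size=500)

selector = SelectKBest(score_func=f_classif, k=20).fit(X, y)
X_reduced = selector.transform(X)                # the 20 highest-scoring features
print(X_reduced.shape)                           # (500, 20)
</syntaxhighlight>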
 
The detection and processing of facial expression are achieved through various methods such as optical flow, hidden Markov models, neural network processing or active appearance models. More than one modality can be combined or fused (multimodal recognition, e.g. facial expressions and speech prosody, facial expressions and hand gestures, or facial expressions with speech and text for multimodal data and metadata analysis) to provide a more robust estimation of the subject's emotional state. Affectiva is a company (co-founded by Rosalind Picard and Rana El Kaliouby) directly related to affective computing that aims at investigating solutions and software for facial affect detection.
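
One common fusion strategy, decision-level (late) fusion, can be sketched with made-up class probabilities standing in for real per-modality classifier outputs:

<syntaxhighlight lang="python">
# Illustrative sketch of decision-level (late) fusion: per-modality
# classifiers output class probabilities, which are averaged into a fused
# estimate. The probability values below are invented placeholders.
import numpy as np

classes = ["happy", "neutral", "angry"]

p_face = np.array([0.6, 0.3, 0.1])      # hypothetical facial-expression model
p_prosody = np.array([0.4, 0.2, 0.4])   # hypothetical speech-prosody model

p_fused = (p_face + p_prosody) / 2
print(classes[int(np.argmax(p_fused))])  # fused decision: "happy"
</syntaxhighlight>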
 
==== Facial expression databases ====
 
Creation of an emotion database is a difficult and time-consuming task. However, database creation is an essential step in the creation of a system that will recognize human emotions. Most of the publicly available emotion databases include posed facial expressions only. In posed expression databases, the participants are asked to display different basic emotional expressions, while in spontaneous expression databases, the expressions are natural. Spontaneous emotion elicitation requires significant effort in the selection of proper stimuli, which can lead to a rich display of intended emotions. Second, the process involves manual tagging of emotions by trained individuals, which makes the databases highly reliable. Since perception of expressions and their intensity is subjective in nature, the annotation by experts is essential for the purpose of validation.
 
Researchers work with three types of databases: a database of peak expression images only, a database of image sequences portraying an emotion from neutral to its peak, and video clips with emotional annotations. Many facial expression databases have been created and made public for expression recognition purposes. Two of the widely used databases are CK+ and JAFFE.
 
By doing cross-cultural research on the Fore tribesmen of Papua New Guinea in the late 1960s, Paul Ekman proposed the idea that facial expressions of emotion are not culturally determined, but universal. Thus, he suggested that they are biological in origin and can therefore be safely and correctly categorized. He therefore officially put forth six basic emotions, in 1972:
    
* [[Anger]]
 
====Challenges in facial detection====

As with every computational practice, in affect detection by facial processing some obstacles need to be surpassed in order to fully unlock the hidden potential of the overall algorithm or method employed. In the early days of almost every kind of AI-based detection (speech recognition, face recognition, affect recognition), the accuracy of modeling and tracking has been an issue. As hardware evolves, as more data are collected and as new discoveries are made and new practices introduced, this lack of accuracy fades, leaving behind noise issues. However, methods for noise removal exist, including neighborhood averaging, linear Gaussian smoothing, median filtering, or newer methods such as the Bacterial Foraging Optimization Algorithm.<ref>"Bacterial Foraging Optimization Algorithm – Swarm Algorithms – Clever Algorithms". Clever Algorithms. Retrieved 21 March 2011.</ref><ref>"Soft Computing". Soft Computing. Retrieved 18 March 2011.</ref>
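
Two of the classical noise-removal methods named above can be sketched with SciPy on a synthetic frame; a real pipeline would apply them to camera images before feature tracking:

<syntaxhighlight lang="python">
# Illustrative sketch of two of the noise-removal methods named above,
# applied to a synthetic frame; a real pipeline would filter camera
# images before tracking facial features.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

rng = np.random.default_rng(3)
frame = rng.random((64, 64))                    # stand-in for a video frame
noisy = frame + rng.normal(scale=0.2, size=frame.shape)

smoothed = gaussian_filter(noisy, sigma=1.5)    # linear Gaussian smoothing
despeckled = median_filter(noisy, size=3)       # median filtering
print(smoothed.shape, despeckled.shape)
</syntaxhighlight>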
 
Other challenges include
 
Gestures could be efficiently used as a means of detecting a particular emotional state of the user, especially when used in conjunction with speech and face recognition. Depending on the specific action, gestures could be simple reflexive responses, like lifting your shoulders when you don't know the answer to a question, or they could be complex and meaningful as when communicating with sign language. Without making use of any object or surrounding environment, we can wave our hands, clap or beckon. On the other hand, when using objects, we can point at them, move, touch or handle these. A computer should be able to recognize these, analyze the context and respond in a meaningful way, in order to be efficiently used for Human–Computer Interaction.
 
There are many proposed methods<ref name="JK">J. K. Aggarwal, Q. Cai, Human Motion Analysis: A Review, Computer Vision and Image Understanding, Vol. 73, No. 3, 1999</ref> to detect the body gesture. Some literature differentiates two different approaches in gesture recognition: a 3D model-based one and an appearance-based one.<ref name="Vladimir">{{cite journal | first1 = Vladimir I. | last1 = Pavlovic | first2 = Rajeev | last2 = Sharma | first3 = Thomas S. | last3 = Huang | url = http://www.cs.rutgers.edu/~vladimir/pub/pavlovic97pami.pdf | title = Visual Interpretation of Hand Gestures for Human–Computer Interaction: A Review | journal = [[IEEE Transactions on Pattern Analysis and Machine Intelligence]] | volume = 19 | issue = 7 | pages = 677–695 | year = 1997 | doi = 10.1109/34.598226 }}</ref> The former method makes use of 3D information of key elements of the body parts in order to obtain several important parameters, like palm position or joint angles. On the other hand, appearance-based systems use images or videos for direct interpretation. Hand gestures have been a common focus of body gesture detection methods.<ref name="Vladimir"/>
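
As a small illustration of the 3D-model-based route, the sketch below computes one such parameter, a joint angle, from hypothetical 3D keypoints:

<syntaxhighlight lang="python">
# Illustrative sketch of the 3D-model-based route: compute one of the
# parameters mentioned above, a joint angle, from hypothetical 3D keypoints.
import numpy as np

shoulder = np.array([0.0, 0.0, 0.0])     # invented 3D coordinates (meters)
elbow = np.array([0.0, -0.3, 0.05])
wrist = np.array([0.25, -0.45, 0.1])

u = shoulder - elbow
v = wrist - elbow
cos_angle = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
print(f"elbow angle: {angle_deg:.1f} degrees")
</syntaxhighlight>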
 
===Physiological monitoring===

This could be used to detect a user's affective state by monitoring and analyzing their physiological signs. These signs range from changes in heart rate and skin conductance to minute contractions of the facial muscles and changes in facial blood flow. This area is gaining momentum and we are now seeing real products that implement the techniques. The four main physiological signs that are usually analyzed are blood volume pulse, galvanic skin response, facial electromyography, and facial color patterns.
 
====Blood volume pulse====
 
=====Overview=====
 
A subject's blood volume pulse (BVP) can be measured by a process called photoplethysmography, which produces a graph indicating blood flow through the extremities.<ref name="Picard, Rosalind 1998">Picard, Rosalind (1998). Affective Computing. MIT.</ref> The peaks of the waves indicate a cardiac cycle where the heart has pumped blood to the extremities. If the subject experiences fear or is startled, their heart usually 'jumps' and beats quickly for some time, causing the amplitude of the cardiac cycle to increase. This can clearly be seen on a photoplethysmograph when the distance between the trough and the peak of the wave has decreased. As the subject calms down, and as the body's inner core expands, allowing more blood to flow back to the extremities, the cycle will return to normal.
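
How such a trace can be read computationally may be sketched with a synthetic pulse wave standing in for real photoplethysmograph output:

<syntaxhighlight lang="python">
# Illustrative sketch: locate the peaks of a synthetic BVP-like signal and
# derive beat rate and cycle amplitude, the quantities discussed above.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                   # sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
bvp = np.sin(2 * np.pi * 1.2 * t)          # synthetic pulse, ~72 beats/min

peaks, _ = find_peaks(bvp, distance=fs // 2)
rate_bpm = 60 * (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])
amplitude = bvp[peaks].mean() - bvp.min()  # trough-to-peak distance
print(f"{rate_bpm:.0f} bpm, amplitude {amplitude:.2f}")
</syntaxhighlight>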
 
=====Methodology=====
 
Infra-red light is shone on the skin by special sensor hardware, and the amount of light reflected is measured. The amount of reflected and transmitted light correlates to the BVP as light is absorbed by hemoglobin which is found richly in the bloodstream.
 
=====Disadvantages=====
 
It can be cumbersome to ensure that the sensor shining an infra-red light and monitoring the reflected light is always pointing at the same extremity, especially seeing as subjects often stretch and readjust their position while using a computer.
 
====Facial color====
 
=====Overview=====
 
The surface of the human face is innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Whether or not facial emotions activate facial muscles, variations in blood flow, blood pressure, glucose levels, and other changes occur. Also, the facial color signal is independent from that provided by facial muscle movements.<ref name="face">Carlos F. Benitez-Quiroz, Ramprakash Srinivasan, Aleix M. Martinez, [https://www.pnas.org/content/115/14/3581 Facial color is an efficient mechanism to visually transmit emotion], PNAS. April 3, 2018 115 (14) 3581–3586; first published March 19, 2018 https://doi.org/10.1073/pnas.1716084115.</ref>
 
=====Methodology=====
 
Approaches are based on facial color changes. Delaunay triangulation is used to create the triangular local areas. Some of these triangles, which define the interior of the mouth and eyes (sclera and iris), are removed. The pixels of the remaining triangular areas are used to create feature vectors.<ref name="face"/> It has been shown that converting the pixel colors from the standard RGB color space to a color space such as the oRGB color space<ref name="orgb">M. Bratkova, S. Boulos, and P. Shirley, [https://ieeexplore.ieee.org/document/4736456 oRGB: a practical opponent color space for computer graphics], IEEE Computer Graphics and Applications, 29(1):42–55, 2009.</ref> or LMS channels performs better when dealing with faces.<ref name="mec">Hadas Shahar, [[Hagit Hel-Or]], [http://openaccess.thecvf.com/content_ICCVW_2019/papers/CVPM/Shahar_Micro_Expression_Classification_using_Facial_Color_and_Deep_Learning_Methods_ICCVW_2019_paper.pdf Micro Expression Classification using Facial Color and Deep Learning Methods], The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 0–0.</ref> The feature vectors are mapped onto the better color space and decomposed into red–green and yellow–blue channels, and deep learning methods are then used to find the equivalent emotions.
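
A rough sketch of the triangulation step, with random stand-ins for the landmarks and image (a real system would also remove the mouth and eye triangles and convert color spaces):

<syntaxhighlight lang="python">
# Illustrative sketch of the triangulation step, with random stand-ins for
# the facial landmarks and image; a real system would also drop the mouth
# and eye triangles and convert the colors to a space such as oRGB.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(4)
landmarks = rng.random((20, 2)) * 64        # stand-in facial landmark points
frame = rng.random((64, 64, 3))             # stand-in RGB face image

tri = Delaunay(landmarks)
features = []
for simplex in tri.simplices:
    center = landmarks[simplex].mean(axis=0)      # centroid of the triangle
    x, y = np.clip(center.astype(int), 0, 63)
    features.append(frame[y, x])                  # crude per-triangle color sample
print(np.array(features).shape)                   # one color triple per triangle
</syntaxhighlight>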
 
===Visual aesthetics===

Aesthetics, in the world of art and photography, refers to the principles of the nature and appreciation of beauty. Judging beauty and other aesthetic qualities is a highly subjective task. Computer scientists at Penn State treat the challenge of automatically inferring the aesthetic quality of pictures using their visual content as a machine learning problem, with a peer-rated on-line photo sharing website as a data source.<ref>Ritendra Datta, Dhiraj Joshi, Jia Li and James Z. Wang, Studying Aesthetics in Photographic Images Using a Computational Approach, Lecture Notes in Computer Science, vol. 3953, Proceedings of the European Conference on Computer Vision, Part III, pp. 288–301, Graz, Austria, May 2006.</ref> They extract certain visual features based on the intuition that they can discriminate between aesthetically pleasing and displeasing images.
 
==Potential applications==
 
=== Education ===

Emotion affects learners' learning states. Using affective computing technology, computers can judge learners' emotions and learning states by recognizing their facial expressions. In teaching, teachers can use the analysis results to understand students' learning and receptive abilities and then formulate reasonable teaching plans. At the same time, paying attention to students' inner feelings is helpful to students' mental health. Especially in distance education, owing to the separation in time and space, there is a lack of emotional incentive in the two-way communication between teachers and students. Without the atmosphere of traditional classroom learning, students easily become bored, which affects the learning effect. Applying affective computing to distance education systems can effectively improve this situation.<ref>http://www.learntechlib.org/p/173785/</ref>
    
=== Healthcare ===
 
Social robots, as well as a growing number of robots used in health care, benefit from emotional awareness because they can better judge users' and patients' emotional states and alter their actions/programming appropriately. This is especially important in those countries with growing aging populations and/or a lack of younger workers to address their needs.
 
Affective computing is also being applied to the development of communicative technologies for use by people with autism.<ref>[http://affect.media.mit.edu/projects.php Projects in Affective Computing]</ref> The affective component of a text is also increasingly gaining attention, particularly its role in the so-called emotional or [[emotive Internet]].<ref>Shanahan, James; Qu, Yan; Wiebe, Janyce (2006). ''Computing Attitude and Affect in Text: Theory and Applications''. Dordrecht: Springer Science & Business Media. p. 94. {{ISBN|1402040261}}</ref>
 
=== Video games ===
 
Affective video games can access their players' emotional states through [[biofeedback]] devices.<ref>{{cite conference |title=Affective Videogames and Modes of Affective Gaming: Assist Me, Challenge Me, Emote Me |first1=Kiel Mark |last1=Gilleade |first2=Alan |last2=Dix |first3=Jen |last3=Allanson |year=2005 |conference=Proc. [[Digital Games Research Association|DiGRA]] Conf. |url=http://comp.eprints.lancs.ac.uk/1057/1/Gilleade_Affective_Gaming_DIGRA_2005.pdf |access-date=2016-12-10 |archive-url=https://web.archive.org/web/20150406200454/http://comp.eprints.lancs.ac.uk/1057/1/Gilleade_Affective_Gaming_DIGRA_2005.pdf |archive-date=2015-04-06 |url-status=dead }}</ref> A particularly simple form of biofeedback is available through [[gamepad]]s that measure the pressure with which a button is pressed: this has been shown to correlate strongly with the players' level of [[arousal]];<ref>{{Cite conference| doi = 10.1145/765891.765957| title = Affective gaming: Measuring emotion through the gamepad| conference = CHI '03 Extended Abstracts on Human Factors in Computing Systems| year = 2003| last1 = Sykes | first1 = Jonathan| last2 = Brown | first2 = Simon| isbn = 1581136374| citeseerx = 10.1.1.92.2123}}</ref> at the other end of the scale are [[brain–computer interface]]s.<ref>{{Cite journal | doi = 10.1016/j.entcom.2009.09.007| title = Turning shortcomings into challenges: Brain–computer interfaces for games| journal = Entertainment Computing| volume = 1| issue = 2| pages = 85–94| year = 2009| last1 = Nijholt | first1 = Anton| last2 = Plass-Oude Bos | first2 = Danny| last3 = Reuderink | first3 = Boris| bibcode = 2009itie.conf..153N| url = http://wwwhome.cs.utwente.nl/~anijholt/artikelen/intetain_bci_2009.pdf}}</ref><ref>{{Cite conference| doi = 10.1007/978-3-642-02315-6_23| title = Affective Pacman: A Frustrating Game for Brain–Computer Interface Experiments| conference = Intelligent Technologies for Interactive Entertainment (INTETAIN)| pages = 221–227| year = 2009| last1 = Reuderink | first1 = Boris| last2 = Nijholt | first2 = Anton| last3 = Poel | first3 = Mannes| isbn = 978-3-642-02314-9}}</ref> Affective games have been used in medical research to support the emotional development of [[autism|autistic]] children.<ref>{{Cite journal
 
=== Other applications ===
 
Other potential applications are centered around social monitoring.  For example, a car can monitor the emotion of all occupants and engage in additional safety measures, such as alerting other vehicles if it detects the driver to be angry.<ref>{{cite web|url=https://gizmodo.com/in-car-facial-recognition-detects-angry-drivers-to-prev-1543709793|title=In-Car Facial Recognition Detects Angry Drivers To Prevent Road Rage|date=30 August 2018|website=Gizmodo}}</ref>  Affective computing has potential applications in [[human computer interaction|human–computer interaction]], such as affective mirrors allowing the user to see how he or she performs; emotion monitoring agents sending a warning before one sends an angry email; or even music players selecting tracks based on mood.<ref>{{cite journal|last1=Janssen|first1=Joris H.|last2=van den Broek|first2=Egon L.|date=July 2012|title=Tune in to Your Emotions: A Robust Personalized Affective Music Player|journal=User Modeling and User-Adapted Interaction|volume=22|issue=3|pages=255–279|doi=10.1007/s11257-011-9107-7|doi-access=free}}</ref>
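
The mood-based track selection idea might be sketched as a nearest-neighbor lookup in valence–arousal space, with invented track annotations:

<syntaxhighlight lang="python">
# Illustrative sketch of mood-based track selection: pick the track whose
# annotated valence/arousal point is closest to the listener's estimated
# mood. Track names and annotations are invented for illustration.
import math

tracks = {
    "calm_piano": (0.3, 0.2),      # (valence, arousal) annotations
    "upbeat_pop": (0.8, 0.7),
    "dark_ambient": (0.2, 0.5),
}

def pick_track(valence: float, arousal: float) -> str:
    return min(tracks, key=lambda t: math.dist(tracks[t], (valence, arousal)))

print(pick_track(0.7, 0.6))        # -> "upbeat_pop"
</syntaxhighlight>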
 
==Cognitivist vs. interactional approaches==
 
Within the field of [[human–computer interaction]], Rosalind Picard's [[cognitivism (psychology)|cognitivist]] or "information model" concept of emotion has been criticized by and contrasted with the "post-cognitivist" or "interactional" [[pragmatism|pragmatist]] approach taken by Kirsten Boehner and others which views emotion as inherently social.<ref>{{cite journal|last1=Battarbee|first1=Katja|last2=Koskinen|first2=Ilpo|title=Co-experience: user experience as interaction|journal=CoDesign|date=2005|volume=1|issue=1|pages=5–18|url=http://www2.uiah.fi/~ikoskine/recentpapers/mobile_multimedia/coexperience_reprint_lr_5-18.pdf|doi=10.1080/15710880412331289917|citeseerx=10.1.1.294.9178|s2cid=15296236}}</ref>
 
==External links==

* MIT Media Lab Affective Computing Research Group
* USC Computational Emotion Group
* Emotion Processing Unit – EPU
* University of Memphis Affective Computing Group
* 2011 International Conference on Affective Computing and Intelligent Interaction
* Brain, Body and Bytes: Psychophysiological User Interaction CHI 2010 workshop (10–15 April 2010)
* ''IEEE Transactions on Affective Computing'' (TAC)
* openSMILE: popular state-of-the-art open-source toolkit for large-scale feature extraction for affect recognition and computational paralinguistics
    
{{Navboxes
 