"情感计算" (Affective computing): difference between revisions

From Jizhi Encyclopedia (集智百科) – complex systems | artificial intelligence | complexity science | complex networks | self-organization
 
In psychology, cognitive science, and neuroscience, there have been two main approaches to describing how humans perceive and classify emotion: continuous and categorical. The continuous approach tends to use dimensions such as negative versus positive, or calm versus aroused.

The categorical approach tends to use discrete classes such as happy, sad, angry, fearful, surprise, disgust. Different kinds of machine learning regression and classification models can be used for having machines produce continuous or discrete labels. Sometimes models are also built that allow combinations across the categories, e.g. a happy-surprised face or a fearful-surprised face.<ref name=":7">{{Cite journal|title = A model of the perception of facial expressions of emotion by humans: Research overview and perspectives.|last1=Martinez|first1=Aleix|last2=Du|first2=Shichuan|date = 2012|journal = The Journal of Machine Learning Research |volume=13 |issue=1 |pages=1589–1608|url=https://www.jmlr.org/papers/volume13/martinez12a/martinez12a.pdf}}</ref>
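The difference between the two labeling schemes can be illustrated with a toy mapping from a continuous valence–arousal reading to a discrete quadrant label. This is a minimal sketch: the quadrant names and zero thresholds are illustrative choices, not a standard taxonomy from the cited literature.

```python
def quadrant_label(valence: float, arousal: float) -> str:
    """Map a continuous (valence, arousal) reading, each in [-1, 1],
    to a coarse categorical label for its quadrant."""
    if valence >= 0:
        return "excited/happy" if arousal >= 0 else "calm/content"
    return "angry/fearful" if arousal >= 0 else "sad/bored"

# A continuous (dimensional) model would regress the two coordinates
# directly; a categorical model would predict the discrete label.
print(quadrant_label(0.7, 0.4))    # → excited/happy
print(quadrant_label(-0.5, -0.6))  # → sad/bored
```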
 
 
The following sections consider many of the kinds of input data used for the task of [[emotion recognition]].
 
  
 
  
 
=== Emotional speech ===
Various changes in the autonomic nervous system can indirectly alter a person's speech, and affective technologies can leverage this information to recognize emotion. For example, speech produced in a state of fear, anger, or joy becomes fast, loud, and precisely enunciated, with a higher and wider range in pitch, whereas emotions such as tiredness, boredom, or sadness tend to generate slow, low-pitched, and slurred speech.<ref name=":8">Breazeal, C. and Aryananda, L. [http://web.media.mit.edu/~cynthiab/Papers/breazeal-aryananda-AutoRo02.pdf Recognition of affective communicative intent in robot-directed speech]. Autonomous Robots 12 1, 2002. pp. 83–104.</ref> Some emotions have been found to be more easily computationally identified, such as anger<ref name="Dellaert" /> or approval.<ref name=":9">{{Cite book|last1=Roy|first1=D.|last2=Pentland|first2=A.|date=1996-10-01|title=Automatic spoken affect classification and analysis|journal=Proceedings of the Second International Conference on Automatic Face and Gesture Recognition|pages=363–367|doi=10.1109/AFGR.1996.557292|isbn=978-0-8186-7713-7|s2cid=23157273}}</ref>
 
  
  
Emotional speech processing technologies recognize the user's emotional state using computational analysis of speech features. Vocal parameters and [[prosody (linguistics)|prosodic]] features such as pitch variables and speech rate can be analyzed through pattern recognition techniques.<ref name="Dellaert">Dellaert, F., Polizin, t., and Waibel, A., Recognizing Emotion in Speech", In Proc. Of ICSLP 1996, Philadelphia, PA, pp.1970–1973, 1996</ref><ref name="Lee">Lee, C.M.; Narayanan, S.; Pieraccini, R., Recognition of Negative Emotion in the Human Speech Signals, Workshop on Auto. Speech Recognition and Understanding, Dec 2001</ref>
  
Speech analysis is an effective method of identifying affective state, having an average reported accuracy of 70 to 80% in recent research.<ref name=":10">{{Cite journal|last1=Neiberg|first1=D|last2=Elenius|first2=K|last3=Laskowski|first3=K|date=2006|title=Emotion recognition in spontaneous speech using GMMs|url=http://www.speech.kth.se/prod/publications/files/1192.pdf|journal=Proceedings of Interspeech}}</ref><ref name=":11">{{Cite journal|last1=Yacoub|first1=Sherif|last2=Simske|first2=Steve|last3=Lin|first3=Xiaofan|last4=Burns|first4=John|date=2003|title=Recognition of Emotions in Interactive Voice Response Systems|journal=Proceedings of Eurospeech|pages=729–732|citeseerx=10.1.1.420.8158}}</ref> These systems tend to outperform average human accuracy (approximately 60%<ref name="Dellaert" />) but are less accurate than systems which employ other modalities for emotion detection, such as physiological states or facial expressions.<ref name="Hudlicka-2003-p24">{{harvnb|Hudlicka|2003|p=24}}</ref> However, since many speech characteristics are independent of semantics or culture, this technique is considered to be a promising route for further research.<ref name="Hudlicka-2003-p25">{{harvnb|Hudlicka|2003|p=25}}</ref>
==== Algorithms ====

The process of speech/text affect detection requires the creation of a reliable [[database]], [[knowledge base]], or [[vector space model]],<ref name="Osgood75">{{cite book | author = Charles Osgood | year = 1975 }}</ref> broad enough to fit every need for its application, as well as the selection of a successful classifier which will allow for quick and accurate emotion identification.
 
  
Currently, the most frequently used classifiers are linear discriminant classifiers (LDC), k-nearest neighbor (k-NN), Gaussian mixture model (GMM), support vector machines (SVM), artificial neural networks (ANN), decision tree algorithms and hidden Markov models (HMMs).<ref name="Scherer-2010-p241">{{harvnb|Scherer|Bänziger|Roesch|2010|p=241}}</ref> Various studies showed that choosing the appropriate classifier can significantly enhance the overall performance of the system.<ref name="Hudlicka-2003-p24"/> The list below gives a brief description of each algorithm:
 
  
 
 
* [[Linear classifier|LDC]] – Classification happens based on the value obtained from the linear combination of the feature values, which are usually provided in the form of vector features.
 
* [[K-nearest neighbor algorithm|k-NN]] – Classification happens by locating the object in the feature space, and comparing it with the k nearest neighbors (training examples). The majority vote decides on the classification.
 
* [[Gaussian mixture model|GMM]] – is a probabilistic model used for representing the existence of subpopulations within the overall population. Each sub-population is described using the mixture distribution, which allows for classification of observations into the sub-populations.<ref name=":12">[http://cnx.org/content/m13205/latest/ "Gaussian Mixture Model"]. Connexions – Sharing Knowledge and Building Communities. Retrieved 10 March 2011.</ref>
 
* [[Support vector machine|SVM]] – is a type of (usually binary) linear classifier which decides into which of two (or more) possible classes each input falls.
 
* [[Artificial neural network|ANN]] – is a mathematical model, inspired by biological neural networks, that can better grasp possible non-linearities of the feature space.
 
* [[Decision tree learning|Decision tree algorithms]] – work based on following a decision tree in which leaves represent the classification outcome, and branches represent the conjunction of subsequent features that lead to the classification.
 
* [[Hidden Markov model|HMMs]] – a statistical Markov model in which the states and state transitions are not directly available to observation. Instead, the series of outputs dependent on the states are visible. In the case of affect recognition, the outputs represent the sequence of speech feature vectors, which allow the deduction of states' sequences through which the model progressed. The states can consist of various intermediate steps in the expression of an emotion, and each of them has a probability distribution over the possible output vectors. The states' sequences allow us to predict the affective state which we are trying to classify, and this is one of the most commonly used techniques within the area of speech affect detection.
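As one concrete instance of the list above, a k-NN classifier over short feature vectors fits in a few lines. This is a hedged sketch: the (mean pitch, RMS energy) feature pairs and class labels below are made-up toy data, not values from any cited corpus.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training examples."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy (mean pitch in Hz, RMS energy) features; values are illustrative only.
X = np.array([[220.0, 0.80], [240.0, 0.90], [110.0, 0.20], [100.0, 0.15]])
y = np.array(["angry", "angry", "sad", "sad"])
print(knn_predict(X, y, np.array([230.0, 0.85])))  # → angry
print(knn_predict(X, y, np.array([105.0, 0.18])))  # → sad
```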
 
  
 
  
It has been shown that, with enough acoustic evidence available, the emotional state of a person can be classified by a set of majority-voting classifiers. The proposed set of classifiers is based on three main classifiers: kNN, C4.5 and SVM with an RBF kernel. This set achieves better performance than each basic classifier taken separately. It is compared with two other sets of classifiers: one-against-all (OAA) multiclass SVM with hybrid kernels, and a set consisting of the two basic classifiers C5.0 and a neural network. The proposed variant achieves better performance than the other two sets of classifiers.<ref name=":13">{{cite journal|url=http://ntv.ifmo.ru/en/article/11200/raspoznavanie_i_prognozirovanie_dlitelnyh__emociy_v_rechi_(na_angl._yazyke).htm|title=Extended speech emotion recognition and prediction|author=S.E. Khoruzhnikov|journal=Scientific and Technical Journal of Information Technologies, Mechanics and Optics|volume=14|issue=6|page=137|year=2014|display-authors=etal}}</ref>
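The vote-combination step of such an ensemble is simple to sketch. This is a minimal illustration of majority voting only; the label list stands in for the outputs of trained base models (kNN, C4.5 and SVM-RBF in the cited study), which are not implemented here.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the labels emitted by several base classifiers.
    Ties are broken in favor of the label encountered first."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical outputs of three base classifiers for one utterance.
base_predictions = ["anger", "anger", "joy"]
print(majority_vote(base_predictions))  # → anger
```

Using an odd number of base classifiers, as the cited set does, keeps two-class ties rare; with more classes a confidence-weighted vote is a common refinement.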
 
  
 
==== Databases ====
  
The vast majority of present systems are data-dependent. This creates one of the biggest challenges in detecting emotions based on speech, as it implicates choosing an appropriate database used to train the classifier. Most of the currently possessed data was obtained from actors and is thus a representation of archetypal emotions. Those so-called acted databases are usually based on the Basic Emotions theory (by [[Paul Ekman]]), which assumes the existence of six basic emotions (anger, fear, disgust, surprise, joy, sadness), the others simply being a mix of the former ones.<ref name="Ekman, P. 1969">Ekman, P. & Friesen, W. V (1969). [http://www.communicationcache.com/uploads/1/0/8/8/10887248/the-repertoire-of-nonverbal-behavior-categories-origins-usage-and-coding.pdf The repertoire of nonverbal behavior: Categories, origins, usage, and coding]. Semiotica, 1, 49–98.</ref> Nevertheless, these still offer high audio quality and balanced classes (although often too few), which contribute to high success rates in recognizing emotions.
 
  
 
 
However, for real life application, naturalistic data is preferred. A naturalistic database can be produced by observation and analysis of subjects in their natural context. Ultimately, such database should allow the system to recognize emotions based on their context as well as work out the goals and outcomes of the interaction. The nature of this type of data allows for authentic real life implementation, due to the fact it describes states naturally occurring during the [[human–computer interaction]] (HCI).
 
  
 
  
Despite the numerous advantages which naturalistic data has over acted data, it is difficult to obtain and usually has low emotional intensity. Moreover, data obtained in a natural context has lower signal quality, due to surroundings noise and distance of the subjects from the microphone. The first attempt to produce such database was the FAU Aibo Emotion Corpus for CEICES (Combining Efforts for Improving Automatic Classification of Emotional User States), which was developed based on a realistic context of children (age 10–13) playing with Sony's Aibo robot pet.<ref name="Steidl-2011">{{cite web | last = Steidl | first = Stefan | title = FAU Aibo Emotion Corpus | publisher = Pattern Recognition Lab | date = 5 March 2011 | url = http://www5.cs.fau.de/de/mitarbeiter/steidl-stefan/fau-aibo-emotion-corpus/ }}</ref><ref name="Scherer-2010-p243">{{harvnb|Scherer|Bänziger|Roesch|2010|p=243}}</ref> Likewise, producing one standard database for all emotional research would provide a method of evaluating and comparing different affect recognition systems.
 
  
 
==== Speech descriptors ====
 
The complexity of the affect recognition process increases with the number of classes (affects) and speech descriptors used within the classifier. It is, therefore, crucial to select only the most relevant features in order to assure the ability of the model to successfully identify emotions, as well as increasing the performance, which is particularly significant to real-time detection. The range of possible choices is vast, with some studies mentioning the use of over 200 distinct features.<ref name="Scherer-2010-p241"/> It is crucial to identify those that are redundant and undesirable in order to optimize the system and increase the success rate of correct emotion detection. The most common speech characteristics are categorized into the following groups.<ref name="Steidl-2011"/><ref name="Scherer-2010-p243"/>
 
  
 
  
# Frequency characteristics<ref name=":14">{{Cite book |doi=10.1109/ICCCI50826.2021.9402569|isbn=978-1-7281-5875-4|chapter=Non-linear frequency warping using constant-Q transformation for speech emotion recognition|title=2021 International Conference on Computer Communication and Informatics (ICCCI)|pages=1–4|year=2021|last1=Singh|first1=Premjeet|last2=Saha|first2=Goutam|last3=Sahidullah|first3=Md|arxiv=2102.04029}}</ref>
 
#* Accent shape – affected by the rate of change of the fundamental frequency.
 
#* Average pitch – description of how high/low the speaker speaks relative to the normal speech.
 
#* Contour slope – describes the tendency of the frequency change over time, it can be rising, falling or level.
 
#* Final lowering – the amount by which the frequency falls at the end of an utterance.
 
#* Pitch range – measures the spread between the maximum and minimum frequency of an utterance.
 
# Time-related features:
 
#* Speech rate – describes the rate of words or syllables uttered over a unit of time
 
#* Stress frequency – measures the rate of occurrences of pitch accented utterances
 
# Voice quality parameters and energy descriptors:
 
#* Breathiness – measures the aspiration noise in speech
 
#* Brilliance – describes the dominance of high or low frequencies in the speech
 
#* Loudness – measures the amplitude of the speech waveform, translates to the energy of an utterance
 
#* Pause Discontinuity – describes the transitions between sound and silence
 
#* Pitch Discontinuity – describes the transitions of the fundamental frequency
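Two of the descriptors above, average pitch and loudness, can be sketched with a naive autocorrelation pitch estimator and an RMS energy measure. This is a toy sketch on a synthetic tone; production systems use far more robust estimators and real recordings.

```python
import numpy as np

def f0_autocorr(signal, sr, fmin=50.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) from the autocorrelation peak
    within the plausible voice range [fmin, fmax]."""
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def rms_energy(signal):
    """Loudness descriptor: root-mean-square amplitude of the waveform."""
    return float(np.sqrt(np.mean(signal ** 2)))

sr = 4000
t = np.arange(sr) / sr                      # one second of audio
tone = 0.5 * np.sin(2 * np.pi * 220.0 * t)  # a 220 Hz "voice"
print(round(f0_autocorr(tone, sr)))         # → 222 (≈ 220; integer-lag resolution)
print(round(rms_energy(tone), 2))           # → 0.35 (0.5 / sqrt(2))
```

The small pitch error comes from quantizing the period to whole samples; parabolic interpolation around the autocorrelation peak is the usual fix.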
 
  
  
 
=== Facial affect detection ===
The detection and processing of facial expression are achieved through various methods such as [[optical flow]], [[hidden Markov model]]s, [[neural network|neural network processing]] or active appearance models. More than one modality can be combined or fused (multimodal recognition, e.g. facial expressions and speech prosody,<ref name="face-prosody">{{cite conference | url = http://www.image.ece.ntua.gr/php/savepaper.php?id=447 | first1 = G. | last1 = Caridakis | first2 = L. | last2 = Malatesta | first3 = L. | last3 = Kessous | first4 = N. | last4 = Amir | first5 = A. | last5 = Raouzaiou | first6 = K. | last6 = Karpouzis | title = Modeling naturalistic affective states via facial and vocal expressions recognition | conference = International Conference on Multimodal Interfaces (ICMI'06) | location = Banff, Alberta, Canada | date = November 2–4, 2006 }}</ref> facial expressions and hand gestures,<ref name="face-gesture">{{cite book | chapter-url = http://www.image.ece.ntua.gr/php/savepaper.php?id=334 | first1 = T. | last1 = Balomenos | first2 = A. | last2 = Raouzaiou | first3 = S. | last3 = Ioannou | first4 = A. | last4 = Drosopoulos | first5 = K. | last5 = Karpouzis | first6 = S. | last6 = Kollias | chapter = Emotion Analysis in Man-Machine Interaction Systems | editor1-first = Samy | editor1-last = Bengio | editor2-first = Herve | editor2-last = Bourlard | title = Machine Learning for Multimodal Interaction | series = [[Lecture Notes in Computer Science]] | volume = 3361| year = 2004 | pages = 318–328 | publisher = [[Springer-Verlag]] }}</ref> or facial expressions with speech and text for multimodal data and metadata analysis) to provide a more robust estimation of the subject's emotional state. [[Affectiva]] is a company (co-founded by [[Rosalind Picard]] and [[Rana el Kaliouby|Rana El Kaliouby]]) directly related to affective computing and aims at investigating solutions and software for facial affect detection.
 
  
  
 
==== Facial expression databases ====
{{Main|Facial expression databases}}
 
Creation of an emotion database is a difficult and time-consuming task. However, database creation is an essential step in the creation of a system that will recognize human emotions. Most of the publicly available emotion databases include posed facial expressions only. In posed expression databases, the participants are asked to display different basic emotional expressions, while in spontaneous expression database, the expressions are natural. Spontaneous emotion elicitation requires significant effort in the selection of proper stimuli which can lead to a rich display of intended emotions. Secondly, the process involves tagging of emotions by trained individuals manually which makes the databases highly reliable. Since perception of expressions and their intensity is subjective in nature, the annotation by experts is essential for the purpose of validation.
 
  
 
  
Researchers work with three types of databases, such as a database of peak expression images only, a database of image sequences portraying an emotion from neutral to its peak, and video clips with emotional annotations. Many facial expression databases have been created and made public for expression recognition purpose. Two of the widely used databases are CK+ and JAFFE.
 
  
 
  
 
==== Emotion classification ====
{{Main|Emotion classification}}
 
By doing cross-cultural research in Papua New Guinea, on the Fore Tribesmen, at the end of the 1960s, [[Paul Ekman]] proposed the idea that facial expressions of emotion are not culturally determined, but universal. Thus, he suggested that they are biological in origin and can, therefore, be safely and correctly categorized.<ref name="Ekman, P. 1969"/>
 
He therefore officially put forth six basic emotions, in 1972:<ref name=":15">{{cite conference | last = Ekman | first =  Paul | author-link = Paul Ekman | year = 1972 | title = Universals and Cultural Differences in Facial Expression of Emotion | editor-first = J. | editor-last = Cole | conference = Nebraska Symposium on Motivation | location = Lincoln, Nebraska | publisher = University of Nebraska Press | pages = 207–283 }}</ref>
 
 
 
  
* [[Anger]]
* [[Disgust]]
 
* [[Fear]]
 
* [[Happiness]]
 
* [[Sadness]]
 
* [[Surprise (emotion)|Surprise]]
 
  
 
  
However, in the 1990s Ekman expanded his list of basic emotions, including a range of positive and negative emotions not all of which are encoded in facial muscles.<ref>{{Cite book|last=Ekman |first=Paul |author-link=Paul Ekman |year=1999 |url=http://www.paulekman.com/wp-content/uploads/2009/02/Basic-Emotions.pdf |contribution=Basic Emotions |editor1-first=T |editor1-last=Dalgleish |editor2-first=M |editor2-last=Power |title=Handbook of Cognition and Emotion |place=Sussex, UK |publisher=John Wiley & Sons |url-status=dead |archive-url=https://web.archive.org/web/20101228085345/http://www.paulekman.com/wp-content/uploads/2009/02/Basic-Emotions.pdf |archive-date=2010-12-28 }}.</ref> The newly included emotions are:
 
# [[Amusement]]
 
# [[Contempt]]
 
# [[Contentment]]
 
# [[Embarrassment]]
 
# [[Anticipation (emotion)|Excitement]]
 
# [[Guilt (emotion)|Guilt]]
 
# [[Pride| Pride in achievement]]
 
# [[Relief (emotion)|Relief]]
 
# [[Contentment|Satisfaction]]
 
# [[Pleasure|Sensory pleasure]]
 
# [[Shame]]
 
  
  
 
==== 面部动作编码系统 ====
心理学家已经构想出一套系统，用于对面部情绪的生理表达进行正式分类。面部动作编码系统（Facial Action Coding System, FACS）由保罗·埃克曼（Paul Ekman）和华莱士·V·弗里森（Wallace V. Friesen）于 1978 年在 Carl-Herman Hjortsjö 早期工作的基础上创建<ref name=":16">[http://face-and-emotion.com/dataface/facs/description.jsp "Facial Action Coding System (FACS) and the FACS Manual"] {{webarchive |url=https://web.archive.org/web/20131019130324/http://face-and-emotion.com/dataface/facs/description.jsp |date=October 19, 2013 }}. A Human Face. Retrieved 21 March 2011.</ref>，其核心概念是动作单元（Action Unit, AU），即一块或多块肌肉的收缩或放松。心理学家根据动作单元给出了六种基本情绪对应的 AU 组合（这里的“+”表示“和”）。
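下面用一个简单的 Python 查找表示意“AU 组合匹配基本情绪”的过程。表中的 AU 组合取自文献中常见的原型表达（略去了强度等级等细节），仅作示意，精确定义请以 FACS 手册为准：

```python
# 基本情绪对应的典型动作单元（AU）组合（“+”表示“和”）。
# 注意：组合取自常见文献中的原型表达，仅作示意，并非完整定义。
EMOTION_AUS = {
    "happiness": {6, 12},             # 脸颊提升 + 嘴角上扬
    "sadness":   {1, 4, 15},          # 眉内侧上扬 + 皱眉 + 嘴角下垂
    "surprise":  {1, 2, 5, 26},       # 眉毛上扬 + 上睑提升 + 下颌下垂
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15, 16},         # 皱鼻 + 嘴角下垂 + 下唇下压
}

def match_emotion(active_aus):
    """返回其原型 AU 组合完全包含于当前激活 AU 集合的情绪列表。"""
    aus = set(active_aus)
    return [emo for emo, proto in EMOTION_AUS.items() if proto <= aus]

print(match_emotion({6, 12}))   # ['happiness']
```

注意同一组激活的 AU 可能同时匹配多种情绪原型（如恐惧与惊讶共享 AU 1、2、5、26），这正是正文后面提到的“非一一对应”问题。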
  
 
  
 
==== 面部情感检测的挑战 ====
 
正如计算领域的多数问题一样，在面部情感检测研究中也有很多障碍需要克服，以便充分释放算法和方法的全部潜力。在几乎所有基于人工智能的检测（语音识别、人脸识别、情感识别）的早期，建模和跟踪的准确性一直是个问题。随着硬件的发展、数据集的完善以及新发现和新实践的引入，准确性问题逐渐缓解，留下的主要是噪声问题。现有的去噪方法包括'''[https://baike.baidu.com/item/%E7%9B%B8%E9%82%BB%E5%B9%B3%E5%9D%87%E6%B3%95/9807406 邻域平均法]'''、'''线性高斯平滑法'''、'''[https://zh.wikipedia.org/wiki/%E4%B8%AD%E5%80%BC%E6%BB%A4%E6%B3%A2%E5%99%A8 中值滤波法]'''，以及更新的方法如'''菌群优化算法'''。
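以中值滤波为例，下面给出一个最小的纯 Python 实现草图（假设输入为二维灰度图像的嵌套列表，取不到完整邻域的边界像素保持原值）：

```python
def median_filter(img, k=3):
    """对二维灰度图像做 k×k 中值滤波（k 为奇数）。
    边界上取不到完整邻域的像素保持原值；不修改输入图像。"""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [row[:] for row in img]  # 复制一份，边界自动保留原值
    for y in range(r, h - r):
        for x in range(r, w - r):
            # 收集 k×k 邻域内的像素并取中值
            neigh = [img[y + dy][x + dx]
                     for dy in range(-r, r + 1)
                     for dx in range(-r, r + 1)]
            neigh.sort()
            out[y][x] = neigh[len(neigh) // 2]
    return out

# 中心的 255 是孤立的脉冲噪点，会被邻域中值 10 取代
img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(median_filter(img))  # [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
```

中值滤波对这类脉冲噪声特别有效，因为孤立的异常值不会影响邻域的中位数。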
 
其他问题：

* 大多数研究中被试使用的摆拍表情并不自然，因此基于这些数据训练的算法可能不适用于自然表情。
* 缺乏旋转运动的自由度。正面朝向时情感检测效果很好，但当头部旋转超过 20 度时就会出现问题<ref name=":17">Williams, Mark. [http://www.technologyreview.com/Infotech/18796/?a=f "Better Face-Recognition Software – Technology Review"]. Technology Review: The Authority on the Future of Technology. Retrieved 21 March 2011.</ref>。
* 面部表情并不总是与真实的内在情绪相对应（例如表情可以摆拍或伪装，或者一个人有情绪却保持“扑克脸”）。
* FACS 不包含动态信息，而动态有助于消除歧义（例如，发自内心的快乐微笑与“努力装出快乐”的微笑往往具有不同的动态）。
* FACS 的组合与心理学家最初提出的情绪并非一一对应（注意这种非一一对应关系也出现在语音识别的同音词、同形异义词等许多歧义来源中，可以通过引入其他信息通道来缓解）。
* 加入语境可以提高识别的准确性，但加入语境和其他模态会增加计算成本和复杂度。
  
 
=== 身体姿势 ===
 
{{Main|Gesture recognition}}
 
身体姿态可以有效地用于检测用户特定的情绪状态，特别是与语音和面部识别结合使用时。根据具体的动作，姿态可以是简单的反射性反应，比如当你不知道一个问题的答案时耸耸肩；也可以是复杂而有意义的，比如用手语交流时。在不借助任何物体或周围环境的情况下，我们可以挥手、鼓掌或招手；而在借助物体时，我们可以指向、移动、触摸或摆弄它们。为了有效地用于人机交互，计算机应当能够识别这些姿态、分析情境并做出有意义的响应。
  
目前已经提出了许多身体姿态检测方法<ref name="JK">J. K. Aggarwal, Q. Cai, Human Motion Analysis: A Review, Computer Vision and Image Understanding, Vol. 73, No. 3, 1999</ref>。一些文献区分了姿态识别的两种不同方法：基于 3D 模型的方法和基于外观的方法<ref name="Vladimir">{{cite journal | first1 = Vladimir I. | last1 = Pavlovic | first2 = Rajeev | last2 = Sharma | first3 = Thomas S. | last3 = Huang | url = http://www.cs.rutgers.edu/~vladimir/pub/pavlovic97pami.pdf | title = Visual Interpretation of Hand Gestures for Human–Computer Interaction: A Review | journal = [[IEEE Transactions on Pattern Analysis and Machine Intelligence]] | volume = 19 | issue = 7 | pages = 677–695 | year = 1997 | doi = 10.1109/34.598226 }}</ref>。前者利用人体关键部位的三维信息来获得手掌位置、关节角度等重要参数；后者则直接使用图像或视频进行解释。手势一直是身体姿态检测方法的共同焦点<ref name="Vladimir" />。
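作为“利用关键部位三维信息获得关节角度”这一思路的示意，下面的纯 Python 草图由三个 3D 关键点计算关节角（假设关键点坐标已由上游姿态估计系统给出）：

```python
import math

def joint_angle(a, b, c):
    """计算以 b 为顶点、由三维点 a-b-c 构成的关节角（单位：度）。
    例如 a=肩、b=肘、c=腕 时得到肘关节角度。"""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cosang = max(-1.0, min(1.0, dot / (n1 * n2)))  # 数值安全裁剪
    return math.degrees(math.acos(cosang))

# 肩(0,1,0)、肘(0,0,0)、腕(1,0,0)：两向量垂直，肘关节角为 90 度
print(round(joint_angle((0, 1, 0), (0, 0, 0), (1, 0, 0))))  # 90
```

这类角度序列随时间的变化（例如耸肩时肩部关键点的抬升）即可作为姿态特征输入后续的分类模型。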
  
 
=== 生理检测 ===
 
通过监测和分析用户的生理信号，可以检测其情绪状态。这些信号既包括心率和皮肤电导的变化，也包括面部肌肉的微小收缩和面部血流的变化。这一领域的发展势头越来越强劲，已经出现了应用这些技术的实际产品。通常分析的四个主要生理信号是'''血容量脉搏'''、'''皮肤电反应'''、'''面部肌电图'''和面部颜色模式。
  
 
=====概述=====
 
血容量脉搏（BVP）可以通过一种叫做光电容积描记法（photoplethysmography）的技术来测量，该方法生成的图形可以显示流经四肢末端的血流情况<ref name="Picard, Rosalind 1998">Picard, Rosalind (1998). Affective Computing. MIT.</ref>。波形的峰值对应心搏周期中心脏将血液泵向四肢末端的时刻。当被试受到惊吓或感到恐惧时，心脏通常会“一紧”并在一段时间内快速跳动，使心搏周期的幅度增大；这在光电容积描记图上表现为相邻波峰间距变小。当被试平静下来、身体核心舒张使更多血液回流到四肢末端后，周期会恢复正常。
 
 
=====方法=====
 
 
特殊的传感器硬件将红外光照射在皮肤上，并测量反射光量。由于光会被血液中富含的血红蛋白吸收，反射和透射的光量与 BVP 相关。
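在按上述方式采集到 BVP 波形后，可以用简单的峰值检测由峰间距估算心率。下面是一个示意性草图（假设 signal 为等间隔采样的信号、fs 为采样率；实际系统通常还需要带通滤波等预处理）：

```python
import math

def estimate_heart_rate(signal, fs, threshold=0.0):
    """用相邻样本比较找局部峰值（心搏），再由平均峰间距估算心率（次/分）。
    峰值不足两个时无法估算，返回 None。"""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > threshold
             and signal[i] > signal[i - 1]
             and signal[i] >= signal[i + 1]]
    if len(peaks) < 2:
        return None
    # 平均峰间隔（秒）换算为心率（bpm）
    intervals = [(peaks[j + 1] - peaks[j]) / fs for j in range(len(peaks) - 1)]
    return 60.0 / (sum(intervals) / len(intervals))

fs = 50                                  # 采样率 50 Hz
t = [i / fs for i in range(fs * 10)]     # 10 秒
sig = [math.sin(2 * math.pi * 1.2 * ti) for ti in t]  # 用 1.2 Hz 正弦模拟约 72 bpm 的心搏
print(round(estimate_heart_rate(sig, fs)))  # 72
```

正文提到的“波峰间距变小”在这里直接对应 intervals 变短、估算出的心率升高。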
 
=====劣势=====
 
 
要确保发射红外光并监测反射光的传感器始终对准同一肢体部位可能很麻烦，尤其是被试在使用计算机时经常伸展和调整姿势。此外还有其他因素会影响血容量脉搏：由于它度量的是流经四肢末端的血流量，如果被试感觉太热或太冷，身体可能允许更多或更少的血液流向四肢，而这些变化与被试的情绪状态无关。
 
{{Main|Facial electromyography}}
 
  
Facial electromyography is a technique used to measure the electrical activity of the facial muscles by amplifying the tiny electrical impulses that are generated by muscle fibers when they contract.<ref name="Larsen JT 2003">Larsen JT, Norris CJ, Cacioppo JT, "[https://web.archive.org/web/20181030170423/https://pdfs.semanticscholar.org/c3a5/4bfbaaade376aee951fe8578e6436be59861.pdf Effects of positive and negative affect on electromyographic activity over zygomaticus major and corrugator supercilii]", (September 2003)</ref>
 
The face expresses a great deal of emotion, however, there are two main facial muscle groups that are usually studied to detect emotion:
 
The corrugator supercilii muscle, also known as the 'frowning' muscle, draws the brow down into a frown, and therefore is the best test for negative, unpleasant emotional response.↵The zygomaticus major muscle is responsible for pulling the corners of the mouth back when you smile, and therefore is the muscle used to test for a positive emotional response.
 
  
面部肌电图是一种通过放大肌肉纤维收缩时产生的微小电脉冲来测量面部肌肉电活动的技术<ref name="Larsen JT 2003" />。面部表达大量情绪,然而,有两个主要的面部肌肉群通常被研究来检测情绪: 皱眉肌和颧大肌。皱眉肌将眉毛向下拉成皱眉,因此是对消极的、不愉快的情绪反应的最好反映。当微笑时,颧大肌负责将嘴角向后拉,因此是用于测试积极情绪反应的肌肉。
+
面部肌电图是一种通过放大肌肉纤维收缩时产生的微小电脉冲来测量面部肌肉电活动的技术<ref name="Larsen JT 2003">Larsen JT, Norris CJ, Cacioppo JT, "[https://web.archive.org/web/20181030170423/https://pdfs.semanticscholar.org/c3a5/4bfbaaade376aee951fe8578e6436be59861.pdf Effects of positive and negative affect on electromyographic activity over zygomaticus major and corrugator supercilii]", (September 2003)</ref>。面部表达大量情绪,然而,有两个主要的面部肌肉群通常被研究来检测情绪: 皱眉肌和颧大肌。皱眉肌将眉毛向下拉成皱眉,因此是对消极的、不愉快的情绪反应的最好反映。当微笑时,颧大肌负责将嘴角向后拉,因此是用于测试积极情绪反应的肌肉。
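沿用这一思路的一个极简示意：用颧大肌与皱眉肌两路信号的平均幅度之差作为粗略的效价（valence）指标（纯属示意，实际系统需要整流、滤波和个体基线校正）：

```python
def valence_index(zygomaticus, corrugator):
    """颧大肌（微笑相关）平均幅度减去皱眉肌平均幅度：
    正值提示偏积极的反应，负值提示偏消极的反应。仅为示意指标。"""
    mean_amp = lambda xs: sum(abs(x) for x in xs) / len(xs)
    return mean_amp(zygomaticus) - mean_amp(corrugator)

# 颧大肌活动强、皱眉肌弱，指标为正（偏积极）
print(valence_index([0.8, 0.9, 0.7], [0.1, 0.2, 0.1]) > 0)  # True
```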
  
 
[[File:Gsrplot.svg|500px|thumb|被试玩电子游戏时用 GSR 测得的皮肤电阻随时间变化的曲线。图中有几个明显的峰值，表明 GSR 是区分唤醒与非唤醒状态的有效方法。例如在游戏开始时，通常还没有太多刺激的游戏情节，记录到的电阻水平较高，说明电导较低、唤醒程度较低；这与玩家角色在游戏中被杀死时出现的突然低谷形成鲜明对比，因为玩家此时通常会非常紧张。|链接=Special:FilePath/Gsrplot.svg]]
 
{{Main|Galvanic skin response}}
 
  
'''皮肤电反应'''（Galvanic skin response, GSR）是一个过时的术语，如今更一般地称为'''皮肤电活动'''（Electrodermal Activity, EDA）。EDA 泛指皮肤电特性发生变化的现象。皮肤受交感神经系统支配，因此测量皮肤的电阻或电导可以量化自主神经系统交感神经分支的细微变化。当汗腺被激活时，甚至在皮肤感到出汗之前，就可以捕获 EDA 的水平（通常使用电导），并用于辨别自主神经唤醒的微小变化。被试的唤醒程度越高，皮肤电导往往越大<ref name="Picard, Rosalind 1998" />。
 
皮肤电导通常通过放置在皮肤上的两个小型银-氯化银电极并在两者之间施加一个小电压来测量。为了最大限度地保证舒适并减少刺激，电极可以放在手腕、腿或脚上，从而让双手完全自由地进行日常活动。
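在两个电极间施加小电压 V 并测得电流 I 后，皮肤电导即 G = I/V（常用单位为微西门子 µS）。下面的草图把一串（电压, 电流）读数换算为电导，并以首个样本为基线标出电导明显升高的“唤醒”样本（阈值与数据均为假设的示例值）：

```python
def conductance_us(voltage_v, current_a):
    """由施加电压（伏）与测得电流（安）计算皮肤电导，单位微西门子（µS）。"""
    return current_a / voltage_v * 1e6

def arousal_flags(readings, baseline_ratio=1.5):
    """readings 为 (电压, 电流) 读数列表；以首个样本为基线，
    电导超过基线 baseline_ratio 倍的样本标记为“唤醒”。阈值仅为示例。"""
    gs = [conductance_us(v, i) for v, i in readings]
    base = gs[0]
    return [g > base * baseline_ratio for g in gs]

# 0.5 V 恒压下电流从 2 µA 升到 5 µA：电导由 4 µS 升到 10 µS
readings = [(0.5, 2e-6), (0.5, 3e-6), (0.5, 5e-6)]
print([round(conductance_us(v, i)) for v, i in readings])  # [4, 6, 10]
print(arousal_flags(readings))  # [False, False, True]
```

真实的 EDA 分析通常还会把信号分解为缓变的基线水平与瞬时的皮肤电导反应，这里只演示最基本的换算与阈值判断。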
 
===== 概述 =====
 
  
 
人脸表面分布着大量血管网络，这些血管中的血流变化会在脸上产生可见的颜色变化。无论面部情绪是否激活了面部肌肉，血流量、血压、血糖水平等都会发生变化。而且，面部颜色信号独立于面部肌肉运动所提供的信号<ref name="face">Carlos F. Benitez-Quiroz, Ramprakash Srinivasan, Aleix M. Martinez, [https://www.pnas.org/content/115/14/3581 Facial color is an efficient mechanism to visually transmit emotion], PNAS. April 3, 2018 115 (14) 3581–3586; first published March 19, 2018 https://doi.org/10.1073/pnas.1716084115.</ref>。
 
  
 
===== 方法 =====
 
  
 
这类方法基于面部颜色的变化。首先使用 Delaunay 三角剖分创建三角形局部区域，并移除其中定义嘴部和眼睛内部（巩膜和虹膜）的三角形，然后利用剩余三角区域的像素创建特征向量<ref name="face" />。研究表明，将标准 RGB 颜色空间的像素颜色转换为 oRGB 颜色空间<ref name="orgb">M. Bratkova, S. Boulos, and P. Shirley, [https://ieeexplore.ieee.org/document/4736456 oRGB: a practical opponent color space for computer graphics], IEEE Computer Graphics and Applications, 29(1):42–55, 2009.</ref>或 LMS 通道等颜色空间后，在处理人脸时表现更好<ref name="mec">Hadas Shahar, [[Hagit Hel-Or]], [http://openaccess.thecvf.com/content_ICCVW_2019/papers/CVPM/Shahar_Micro_Expression_Classification_using_Facial_Color_and_Deep_Learning_Methods_ICCVW_2019_paper.pdf Micro Expression Classification using Facial Color and Deep Learning Methods], The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 0–0.</ref>。因此，可以将上述向量映射到更好的颜色空间，分解为红绿和黄蓝通道，然后使用深度学习方法找出对应的情绪。
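作为“分解为红绿与黄蓝通道”这一步的极简示意，下面用经典的对立色近似公式 rg = R − G、yb = (R + G)/2 − B 代替论文中的精确变换（这只是常见的简化形式，并非 oRGB 论文所用的变换）：

```python
def opponent_channels(pixels):
    """把 RGB 像素列表分解为红绿（rg）与黄蓝（yb）两个对立通道。
    采用常见的简化公式，并非 oRGB 论文中的精确变换，仅作示意。"""
    rg = [r - g for r, g, b in pixels]
    yb = [(r + g) / 2 - b for r, g, b in pixels]
    return rg, yb

# 假设这是某个三角剖分面片区域内的像素
pixels = [(200, 120, 100), (180, 130, 110)]
rg, yb = opponent_channels(pixels)
print(rg)  # [80, 50]
print(yb)  # [60.0, 45.0]
```

按面片区域统计这两个通道的分布，即可得到送入后续分类器的颜色特征。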
 
  
 
=== 视觉审美 ===
 
在艺术和摄影领域，美学指的是关于美的本质与欣赏的原则。对美和其他审美特质的判断是一项高度主观的任务。宾夕法尼亚州立大学的计算机科学家将“利用视觉内容自动推断图像审美质量”这一难题作为机器学习问题来处理，并以一个带有同行评分的在线照片分享网站作为数据源<ref name="datta">Ritendra Datta, Dhiraj Joshi, Jia Li and James Z. Wang, [https://web.archive.org/web/20181030170421/https://pdfs.semanticscholar.org/8772/877ceb40d6d8685655145034740f3df7baad.pdf Studying Aesthetics in Photographic Images Using a Computational Approach], Lecture Notes in Computer Science, vol. 3953, Proceedings of the European Conference on Computer Vision, Part III, pp. 288–301, Graz, Austria, May 2006.</ref>。他们基于“某些视觉特征能够区分美观与不美观的图像”这一直觉来提取这些特征。
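作为“提取可区分审美质量的视觉特征”这一步的示意，下面计算两个最简单的全局特征：平均亮度和一个粗略的色彩丰富度指标（特征选择仅为假设的示例，并非原论文所用的特征集）：

```python
import math

def simple_aesthetic_features(pixels):
    """pixels 为 RGB 元组列表。返回 (平均亮度, 粗略的色彩丰富度)。
    仅为示意特征；真实系统会提取更丰富的特征并训练分类/回归模型。"""
    n = len(pixels)
    brightness = sum((r + g + b) / 3 for r, g, b in pixels) / n
    # 以红绿、黄蓝两个对立通道的标准差之和近似“色彩丰富度”
    rg = [r - g for r, g, b in pixels]
    yb = [(r + g) / 2 - b for r, g, b in pixels]
    std = lambda xs: math.sqrt(
        sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs))
    return brightness, std(rg) + std(yb)

# 三种纯色像素：平均亮度 85，色彩丰富度远大于灰度图像
feats = simple_aesthetic_features([(255, 0, 0), (0, 255, 0), (0, 0, 255)])
print(round(feats[0]))  # 85
```

把这类特征与网站上的同行评分配对，就得到了可供监督学习使用的训练样本。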
  
 
== 潜在应用 ==
 
  
 
=== 教育 ===
 
情感影响学习者的学习状态。利用情感计算技术，计算机可以通过识别学习者的面部表情来判断其情感和学习状态。在教学中，教师可以利用分析结果了解学生的学习和接受能力，制定合理的教学计划，同时关注学生的内心感受，这有利于学生的心理健康。特别是在远程教育中，由于时间和空间的分离，师生之间缺乏双向交流的情感激励；没有了传统课堂学习带来的氛围，学生很容易感到无聊，影响学习效果。将情感计算应用于远程教育系统可以有效地改善这种状况<ref name=":18">http://www.learntechlib.org/p/173785/</ref>。
  
 
=== 医疗 ===
 
社交机器人以及越来越多用于医疗保健的机器人都受益于情感感知能力，因为它们可以更好地判断用户和病人的情绪状态，并相应地调整自己的行为或程序。在人口老龄化日益严重或缺乏年轻劳动力来满足需求的国家，这一点尤为重要<ref name=":19">{{Cite book|title=Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence|last=Yonck|first=Richard|publisher=Arcade Publishing|year=2017|isbn=9781628727333|location=New York|pages=150–153|oclc=956349457}}</ref>。

情感计算也被应用于开发供孤独症患者使用的交流技术<ref name=":20">[http://affect.media.mit.edu/projects.php Projects in Affective Computing]</ref>。文本中的情感成分也越来越受到关注，特别是它在所谓的情感互联网（'''emotive Internet'''）中的作用<ref name=":21">Shanahan, James; Qu, Yan; Wiebe, Janyce (2006). ''Computing Attitude and Affect in Text: Theory and Applications''. Dordrecht: Springer Science & Business Media. p. 94. {{ISBN|1402040261}}</ref>。
 
=== 电子游戏 ===
 
  
 
情感型电子游戏可以通过'''生物反馈设备'''获取玩家的情绪状态<ref name=":22">{{cite conference |title=Affective Videogames and Modes of Affective Gaming: Assist Me, Challenge Me, Emote Me |first1=Kiel Mark |last1=Gilleade |first2=Alan |last2=Dix |first3=Jen |last3=Allanson |year=2005 |conference=Proc. [[Digital Games Research Association|DiGRA]] Conf. |url=http://comp.eprints.lancs.ac.uk/1057/1/Gilleade_Affective_Gaming_DIGRA_2005.pdf |access-date=2016-12-10 |archive-url=https://web.archive.org/web/20150406200454/http://comp.eprints.lancs.ac.uk/1057/1/Gilleade_Affective_Gaming_DIGRA_2005.pdf |archive-date=2015-04-06 |url-status=dead }}</ref>。一种特别简单的生物反馈形式是通过游戏手柄测量按钮被按下的压力：研究表明这种压力与玩家的唤醒度水平高度相关<ref name=":23">{{Cite conference| doi = 10.1145/765891.765957| title = Affective gaming: Measuring emotion through the gamepad| conference = CHI '03 Extended Abstracts on Human Factors in Computing Systems| year = 2003| last1 = Sykes | first1 = Jonathan| last2 = Brown | first2 = Simon| isbn = 1581136374| citeseerx = 10.1.1.92.2123}}</ref>；处于另一个极端的则是'''脑机接口'''<ref name=":24">{{Cite journal | doi = 10.1016/j.entcom.2009.09.007| title = Turning shortcomings into challenges: Brain–computer interfaces for games| journal = Entertainment Computing| volume = 1| issue = 2| pages = 85–94| year = 2009| last1 = Nijholt | first1 = Anton| last2 = Plass-Oude Bos | first2 = Danny| last3 = Reuderink | first3 = Boris| bibcode = 2009itie.conf..153N| url = http://wwwhome.cs.utwente.nl/~anijholt/artikelen/intetain_bci_2009.pdf}}</ref><ref name=":25">{{Cite conference| doi = 10.1007/978-3-642-02315-6_23| title = Affective Pacman: A Frustrating Game for Brain–Computer Interface Experiments| conference = Intelligent Technologies for Interactive Entertainment (INTETAIN)| pages = 221–227| year = 2009| last1 = Reuderink | first1 = Boris| last2 = Nijholt | first2 = Anton| last3 = Poel | first3 = Mannes| isbn = 978-3-642-02314-9}}</ref>。情感游戏已被用于医学研究，以促进自闭症儿童的情感发展<ref name=":26">{{Cite journal | pmid = 19592726 | year = 2009 | volume = 144 | pages = 37–9 }}</ref>。
 
 
=== 其他应用 ===
 
其他潜在的应用主要围绕社会监控。例如，一辆汽车可以监控所有乘客的情绪，并采取额外的安全措施，比如在检测到司机愤怒时向其他车辆发出警报<ref name=":27">{{cite web|url=https://gizmodo.com/in-car-facial-recognition-detects-angry-drivers-to-prev-1543709793|title=In-Car Facial Recognition Detects Angry Drivers To Prevent Road Rage|date=30 August 2018|website=Gizmodo}}</ref>。情感计算在人机交互方面也有潜在应用，比如让用户看到自己情绪表现的“情感镜子”、在用户发送愤怒邮件之前发出提醒的情绪监控代理，甚至是根据情绪选择曲目的音乐播放器<ref name=":28">{{cite journal|last1=Janssen|first1=Joris H.|last2=van den Broek|first2=Egon L.|date=July 2012|title=Tune in to Your Emotions: A Robust Personalized Affective Music Player|journal=User Modeling and User-Adapted Interaction|volume=22|issue=3|pages=255–279|doi=10.1007/s11257-011-9107-7|doi-access=free}}</ref>。
  
罗马尼亚研究人员尼库 · 塞贝博士在一次采访中提出的一个想法是,当一个人使用某种产品时,对他的面部进行分析(他提到了冰淇淋作为一个例子)<ref name=":29" />,公司就能够利用这种分析来推断他们的产品是否会受到各自市场的欢迎。
+
罗马尼亚研究人员尼库 · 塞贝博士在一次采访中提出的一个想法是,当一个人使用某种产品时,对他的面部进行分析(他提到了冰淇淋作为一个例子)<ref name=":29">{{cite web|url=https://www.sciencedaily.com/videos/2006/0811-mona_lisa_smiling.htm|title=Mona Lisa: Smiling? Computer Scientists Develop Software That Evaluates Facial Expressions|date=1 August 2006|website=ScienceDaily|archive-url=https://web.archive.org/web/20071019235625/http://sciencedaily.com/videos/2006/0811-mona_lisa_smiling.htm|archive-date=19 October 2007|url-status=dead}}</ref> ,公司就能够利用这种分析来推断他们的产品是否会受到各自市场的欢迎。
  
One could also use affective state recognition in order to judge the impact of a TV advertisement through a real-time video recording of that person and through the subsequent study of his or her facial expression. Averaging the results obtained on a large group of subjects, one can tell whether that commercial (or movie) has the desired effect and what the elements which interest the watcher most are.
 
  
 
人们也可以利用情感状态识别来判断电视广告的影响,通过实时录像和随后对人们面部表情的研究,之后对大量主题的结果进行平均,我们就能知道这个广告(或电影)是否达到了预期的效果,以及观众最感兴趣的元素是什么。
 
人们也可以利用情感状态识别来判断电视广告的影响,通过实时录像和随后对人们面部表情的研究,之后对大量主题的结果进行平均,我们就能知道这个广告(或电影)是否达到了预期的效果,以及观众最感兴趣的元素是什么。
  
 
== 认知主义与交互方法之争 ==
 
== 认知主义与交互方法之争 ==
Within the field of [[human–computer interaction]], Rosalind Picard's [[cognitivism (psychology)|cognitivist]] or "information model" concept of emotion has been criticized by and contrasted with the "post-cognitivist" or "interactional" [[pragmatism|pragmatist]] approach taken by Kirsten Boehner and others which views emotion as inherently social.<ref name=":30">{{cite journal|last1=Battarbee|first1=Katja|last2=Koskinen|first2=Ilpo|title=Co-experience: user experience as interaction|journal=CoDesign|date=2005|volume=1|issue=1|pages=5–18|url=http://www2.uiah.fi/~ikoskine/recentpapers/mobile_multimedia/coexperience_reprint_lr_5-18.pdf|doi=10.1080/15710880412331289917|citeseerx=10.1.1.294.9178|s2cid=15296236}}</ref>
 
  
在人机交互领域,罗莎琳德 · 皮卡德的情绪'''认知主义'''或“信息模型”概念受到了实用主义者柯尔斯滕 · 博纳等人的批判和对比,他们坚信“后认知主义”和“交互方法”<ref name=":30" />。
+
在人机交互领域,罗莎琳德 · 皮卡德的情绪'''认知主义'''或“信息模型”概念受到了实用主义者柯尔斯滕 · 博纳等人的批判和对比,他们坚信“后认知主义”和“交互方法”<ref name=":30">{{cite journal|last1=Battarbee|first1=Katja|last2=Koskinen|first2=Ilpo|title=Co-experience: user experience as interaction|journal=CoDesign|date=2005|volume=1|issue=1|pages=5–18|url=http://www2.uiah.fi/~ikoskine/recentpapers/mobile_multimedia/coexperience_reprint_lr_5-18.pdf|doi=10.1080/15710880412331289917|citeseerx=10.1.1.294.9178|s2cid=15296236}}</ref>。
  
Picard's focus is human–computer interaction, and her goal for affective computing is to "give computers the ability to recognize, express, and in some cases, 'have' emotions".<ref name="Affective Computing" /> In contrast, the interactional approach seeks to help "people to understand and experience their own emotions"<ref name="How emotion is made and measured">{{cite journal|last1=Boehner|first1=Kirsten|last2=DePaula|first2=Rogerio|last3=Dourish|first3=Paul|last4=Sengers|first4=Phoebe|title=How emotion is made and measured|journal=International Journal of Human–Computer Studies|date=2007|volume=65|issue=4|pages=275–291|doi=10.1016/j.ijhcs.2006.11.016}}</ref> and to improve computer-mediated interpersonal communication.  It does not necessarily seek to map emotion into an objective mathematical model for machine interpretation, but rather let humans make sense of each other's emotional expressions in open-ended ways that might be ambiguous, subjective, and sensitive to context.<ref name="How emotion is made and measured" />{{rp|284}}{{example needed|date=September 2018}}
 
  
皮卡德的研究重点是人机交互,她研究情感计算的目标是“赋予计算机识别、表达、在某些情况下‘拥有’情感的能力”<ref name="Affective Computing" />。相比之下,交互式的方法旨在帮助“人们理解和体验他们自己的情绪”<ref name="How emotion is made and measured" />,并改善以电脑为媒介的人际沟通。它认为不一定将情感映射到机器解释的客观数学模型中,重要的是让人类畅通无阻地理解彼此的情感,而这些情感信息往往会是歧义的、主观的或上下文敏感的<ref name="How emotion is made and measured" />。
+
皮卡德的研究重点是人机交互,她研究情感计算的目标是“赋予计算机识别、表达、在某些情况下‘拥有’情感的能力”<ref name="Affective Computing" />。相比之下,交互式的方法旨在帮助“人们理解和体验他们自己的情绪”<ref name="How emotion is made and measured">{{cite journal|last1=Boehner|first1=Kirsten|last2=DePaula|first2=Rogerio|last3=Dourish|first3=Paul|last4=Sengers|first4=Phoebe|title=How emotion is made and measured|journal=International Journal of Human–Computer Studies|date=2007|volume=65|issue=4|pages=275–291|doi=10.1016/j.ijhcs.2006.11.016}}</ref>,并改善以电脑为媒介的人际沟通。它认为不一定将情感映射到机器解释的客观数学模型中,重要的是让人类畅通无阻地理解彼此的情感,而这些情感信息往往会是歧义的、主观的或上下文敏感的<ref name="How emotion is made and measured" />。
  
Picard's critics describe her concept of emotion as "objective, internal, private, and mechanistic". They say it reduces emotion to a discrete psychological signal occurring inside the body that can be measured and which is an input to cognition, undercutting the complexity of emotional experience.<ref name="How emotion is made and measured" />{{rp|280}}<ref name="How emotion is made and measured" />{{rp|278}}
 
  
皮卡德的批评者将她的情感概念描述为“客观的、内在的、私人的和机械的”。他们认为她把情绪简化为发生在身体内部的一个离散的心理信号,这个信号可以被测量,并且是认知的输入,削弱了情绪体验的复杂性。
+
皮卡德的批评者将她的情感概念描述为“客观的、内在的、私人的和机械的”。他们认为她把情绪简化为发生在身体内部的一个离散的心理信号,这个信号可以被测量,并且是认知的输入,削弱了情绪体验的复杂性。<ref name="How emotion is made and measured" /><ref name="How emotion is made and measured" />
  
The interactional approach asserts that though emotion has biophysical aspects, it is "culturally grounded, dynamically experienced, and to some degree constructed in action and interaction".<ref name="How emotion is made and measured" />{{rp|276}} Put another way, it considers "emotion as a social and cultural product experienced through our interactions".<ref name=":31">{{cite journal|last1=Boehner|first1=Kirsten|last2=DePaula|first2=Rogerio|last3=Dourish|first3=Paul|last4=Sengers|first4=Phoebe|title=Affection: From Information to Interaction|journal=Proceedings of the Aarhus Decennial Conference on Critical Computing|date=2005|pages=59–68}}</ref><ref name="How emotion is made and measured" /><ref name=":32">{{cite journal|last1=Hook|first1=Kristina|last2=Staahl|first2=Anna|last3=Sundstrom|first3=Petra|last4=Laaksolahti|first4=Jarmo|title=Interactional empowerment|journal=Proc. CHI|date=2008|pages=647–656|url=http://research.microsoft.com/en-us/um/cambridge/projects/hci2020/pdf/interactional%20empowerment%20final%20Jan%2008.pdf}}</ref>
 
  
交互方法断言,虽然情绪具有生物物理性,但它是“以文化为基础的,动态体验的,并在某种程度上构建于行动和互动中”<ref name="How emotion is made and measured" />。换句话说,它认为“情感是一种通过我们的互动体验到的社会和文化产物”<ref name=":31" /><ref name="How emotion is made and measured" /><ref name=":32" />。
+
交互方法断言,虽然情绪具有生物物理性,但它是“以文化为基础的,动态体验的,并在某种程度上构建于行动和互动中”<ref name="How emotion is made and measured" />。换句话说,它认为“情感是一种通过我们的互动体验到的社会和文化产物”<ref name=":31">{{cite journal|last1=Boehner|first1=Kirsten|last2=DePaula|first2=Rogerio|last3=Dourish|first3=Paul|last4=Sengers|first4=Phoebe|title=Affection: From Information to Interaction|journal=Proceedings of the Aarhus Decennial Conference on Critical Computing|date=2005|pages=59–68}}</ref><ref name="How emotion is made and measured" /><ref name=":32">{{cite journal|last1=Hook|first1=Kristina|last2=Staahl|first2=Anna|last3=Sundstrom|first3=Petra|last4=Laaksolahti|first4=Jarmo|title=Interactional empowerment|journal=Proc. CHI|date=2008|pages=647–656|url=http://research.microsoft.com/en-us/um/cambridge/projects/hci2020/pdf/interactional%20empowerment%20final%20Jan%2008.pdf}}</ref>。
 
==另外参阅==
 
==另外参阅==
 
{{Columns-list|colwidth=30em|
 
{{Columns-list|colwidth=30em|
第509行: 第395行:
  
  
 
 
{{DEFAULTSORT:Affective Computing}}
 
[[index.php?title=分类:Affective computing| ]]
 
  
  

2021年8月22日 (日) 23:11的版本


情感计算 Affective computing(也被称为人工情感智能或情感AI)研究和开发能够识别、理解、处理和模拟人类情感的系统与设备,是一个融合计算机科学、心理学和认知科学的跨学科领域[1]。虽然该领域的一些核心思想可以追溯到早期对情感的哲学研究[2],但其作为计算机科学现代分支的研究起源于罗莎琳德·皮卡德1995年关于情感计算的论文[3],以及她由麻省理工出版社出版的《情感计算》一书[4][5][6]。这项研究的动机之一是赋予机器情感智能,包括具备同理心:机器应能够解读人类的情绪状态,适应人类的情绪,并对这些情绪作出适当的反应。

研究范围

检测和识别情感信息

检测情感信息通常从被动式传感器开始,这些传感器捕捉关于用户身体状态或行为的数据,而不解释输入信息。收集的数据类似于人类用来感知他人情感的线索。例如,摄像机可以捕捉面部表情、身体姿势和手势,而麦克风可以捕捉语音。一些传感器可以通过直接测量生理数据(如皮肤温度和皮肤电阻)来探测情感信号[7]。


识别情感信息需要从收集到的数据中提取出有意义的模式。这通常要使用多模态机器学习技术,如语音识别、自然语言处理、面部表情检测等。大多数这些技术的目标是给出与人类感情相一致的标签:例如,如果一个人做出皱眉的面部表情,那么计算机视觉系统可能会被教导将他们的脸标记为“困惑”、“专注”或“轻微消极”(与象征着积极的快乐微笑相反)。这些标签可能与人们的真实感受相符,也可能不相符。

机器中的情感

情感计算的另一个研究领域是设计出能够展示内在情感能力(或能够令人信服地模拟情感)的计算设备。基于当前的技术,一个更加可行的方向是在对话机器人中模拟情感,以丰富和促进人与机器之间的互动[8]。

人工智能领域的计算机科学先驱之一马文·明斯基(Marvin Minsky)在《情绪机器》(The Emotion Machine)一书中将情绪与更广泛的机器智能问题联系起来。他在书中表示,情绪“与我们所谓的‘思考’过程并没有特别的不同”[9]。

技术

在心理学、认知科学和神经科学中,描述人类如何感知和分类情绪的方法主要有两种: 连续的和分类的。连续的方法倾向于使用诸如消极与积极、平静与激动之类的维度。

分类方法倾向于使用离散的类别,如快乐、悲伤、愤怒、恐惧、惊讶、厌恶。不同类型的机器学习回归和分类模型可以用于让机器产生连续或离散的标签。有时还会构建跨类别组合的模型,例如一张高兴而惊讶的脸或一张害怕而惊讶的脸[10]。
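连续标签与离散标签之间的映射可以用一个极简的示意来说明。下面的纯 Python 片段把效价-唤醒度(valence-arousal)平面上的连续坐标粗略地划分到几个离散情绪类别;象限划分与类别名称均为假设的简化示例,并非文献中的具体模型:

```python
def quadrant_label(valence, arousal):
    """把效价-唤醒度平面上的连续坐标粗略映射到离散情绪类别。
    象限划分是示意性的简化:真实系统通常用回归或分类模型学习这种映射。"""
    if valence >= 0 and arousal >= 0:
        return "高兴/兴奋"
    if valence >= 0:
        return "平静/满足"
    if arousal >= 0:
        return "愤怒/恐惧"
    return "悲伤/无聊"

print(quadrant_label(0.8, 0.6))    # 正效价、高唤醒
print(quadrant_label(-0.5, -0.7))  # 负效价、低唤醒
```

真实系统中这一映射由数据驱动的回归或分类模型学习得到,这里的硬编码象限仅用于说明两种标注方式的关系。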


接下来将讨论用于情感识别的不同种类的输入数据。

语音情感

自主神经系统的各种变化可以间接地改变一个人的语言,情感技术可以利用这些信息来识别情绪。例如,在恐惧、愤怒或高兴的状态下发言变得快速、响亮、清晰,音调变得越来越高,音域越来越宽;而诸如疲倦、厌倦或悲伤等情绪往往会产生缓慢、低沉、含糊不清的语音[11]。有些情绪更容易被计算识别,比如愤怒[12] 或赞同[13]

情感语音处理技术通过对语音特征的计算分析来识别用户的情感状态。通过模式识别技术[12][14] 可以分析声音参数和韵律特征,如音调高低和语速等。

语音分析是一种有效的情感状态识别方法,在最近的研究中,其平均报告准确率为70%–80%[15][16]。这些系统往往比人类的平均准确率(大约60%[12])更高,但不如使用其他情绪检测方式(如生理状态或面部表情)的系统准确[17]。然而,由于许多言语特征独立于语义或文化,这种技术被认为是一个很有前景的研究方向[18]。

算法


语音/文本的情感检测过程需要创建可靠的数据库、知识库或者向量空间模型[19],其覆盖范围要足够广泛,以适应各种应用;同时还需要选择一个合适的分类器,以便快速准确地识别情感。


目前常用的分类器有线性判别分类器(LDC)、k-近邻分类器(k-NN)、高斯混合模型(GMM)、支持向量机(SVM)、人工神经网络(ANN)、决策树算法和隐马尔可夫模型(HMMs)[20]。各种研究表明,选择合适的分类器可以显著提高系统的整体性能。下面的列表给出了每个算法的简要描述:

  • LDC:特征以向量形式表示,通过计算特征的线性组合来分类。
  • k-NN:计算并选取特征空间中的点,将其与k个最近的数据点相比较,频数最大的类即为分类结果。
  • GMM:是一种概率模型,用于表示总体中子群的存在。 利用特征的多个高斯概率密度函数混合来分类[21]
  • SVM:是一种(通常为二分的)线性分类器,它决定每个输入可能属于两个(或多个)可能类别中的哪一个。
  • ANN:是一种受生物神经网络启发的数学模型,能够更好地处理特征空间可能存在的非线性。
  • 决策树算法:在一棵树中,每个叶子节点都是一个分类点,分支(路径)代表一系列相邻接的特征,最终引向叶子节点实现分类。
  • HMMs:一种统计马尔可夫模型,其中的状态和状态转变不能被直接观测,只有依赖于状态的一系列输出是可见的。在情感识别领域,输出代表语音特征向量的序列,由此可以推导出模型所经过的状态序列。这些状态包括情感表达中的各个中间步骤,每个状态在输出向量上都有一个概率分布。通过状态序列,我们能够预测正在试图分类的情感状态,这也是语音情感识别中最常用的技术之一。
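以上列表中的 k-NN 为例,下面给出一个纯 Python 的最小示意实现(其中的二维“语音特征”与训练数据均为虚构):

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """k-近邻分类:计算 query 与所有训练样本的欧氏距离,
    取最近的 k 个样本中出现次数最多的标签。"""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

# 虚构的二维语音特征(如:归一化的平均音调、语速)
train = [
    ((0.9, 0.8), "愤怒"), ((0.8, 0.9), "愤怒"), ((0.85, 0.7), "愤怒"),
    ((0.2, 0.1), "悲伤"), ((0.1, 0.2), "悲伤"), ((0.15, 0.15), "悲伤"),
]
print(knn_classify(train, (0.8, 0.8)))  # → 愤怒
```

实际系统中,特征维度远不止二维,且通常需要先做特征归一化与选择(见下文“语音叙词”一节所列的特征)。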

研究证明,只要有足够的声音样本,大多数主流分类器都能正确分类人的情感。一种组合分类器模型由三个基本分类器构成:kNN、C4.5 和带 RBF 核的 SVM,其分类性能优于单独使用任一基本分类器。用于对比的另外两组分类器为:1)具有混合内核的一对多(OAA)多类 SVM;2)由 C5.0 和神经网络两个基本分类器组成的分类器组。所提出的组合变体比这两组分类器具有更好的性能[22]。
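上述组合分类器的核心思想——多个基分类器对同一样本投票——可以用如下最小示意说明;这只是多数投票的一个通用草图,并非文中系统的实现:

```python
from collections import Counter

def majority_vote(predictions):
    """对多个基分类器的预测做多数投票;
    平票时取字典序最小的标签,保证结果确定。"""
    counts = Counter(predictions)
    best = max(counts.values())
    return min(p for p, c in counts.items() if c == best)

# 三个虚构的基分类器(例如 kNN、C4.5、SVM)对同一句语音的预测
votes = ["愤怒", "愤怒", "悲伤"]
print(majority_vote(votes))  # → 愤怒
```

组合之所以能提升性能,是因为不同基分类器的错误往往不完全重叠;只要多数分类器正确,投票结果就正确。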

数据库

绝大多数现有系统都依赖于数据,选择恰当的数据库来训练分类器因而成为语音情感识别的首要问题。目前拥有的大部分数据都是从演员那里获得的,是一些典型情绪的表演。这些所谓的表演数据库通常基于基本情绪理论(保罗·埃克曼),该理论假定存在六种基本情绪(愤怒、恐惧、厌恶、惊讶、喜悦、悲伤),其他情绪只是前者的混合体[23]。尽管如此,这类数据仍能提供较高的音质和均衡的类别分布(尽管类别数通常太少),有助于提高情绪识别的成功率。

然而,对于现实生活应用,自然数据是首选的。自然数据库可以通过在自然环境中观察和分析对象来产生。最终,自然数据库会帮助系统识别情境下的情绪,也可以用来发现交互的目标和结果。由于这类数据的自然性,可以真实自然地反映人机交互下的情感状态,也就可以应用于现实生活中的系统实现。

尽管自然数据比表演数据具有许多优势,但很难获得,并且通常情绪强度较低。此外,由于环境噪声的存在以及人员与麦克风的距离较远,在自然环境中获得的数据信号质量较低。埃尔朗根-纽伦堡大学的 FAU AIBO 情感语料库(FAU Aibo Emotion Corpus for CEICES,CEICES:Combining Efforts for Improving Automatic Classification of Emotional User States)是建立自然情感数据库的首次尝试,其采集基于10—13岁儿童与索尼 AIBO 宠物机器人玩耍的真实情境[24][25]。同样,在情感研究领域,建立任何标准数据库都需要提供评估方法,以便比较不同的情感识别系统。

语音叙词

情感识别过程的复杂性随着分类器中使用的类别(情感)和语音叙词数量的增加而增加。因此,只选择最相关的特征对于保证模型成功识别情绪和提高性能十分重要,这对于实时检测尤为关键。可选择的范围很广,有些研究提到使用了200多种不同的特征[20]。识别并去除冗余的情感信息对于优化系统、提高情感检测的成功率至关重要。最常见的言语特征可分为以下几类[24][25]:


  1. 频率特性[26]:
  • 音调形状(Accent shape):受基础频率变化的影响。
  • 平均音调(Average pitch):描述说话者相对于正常语言的音调高低。
  • 音调轮廓(Contour slope):描述频率随时间变化的趋势,可以是上升、下降或持平。
  • 尾音下降(Final lowering):一段话末尾频率下降的幅度。
  • 音域(Pitch range):一段话语的最高和最低频率之间的差距。
  2. 时间相关特征:
  • 语速(Speech rate):单位时间内发出的词数或音节数。
  • 重音频率(Stress frequency):重读发生的频率。
  3. 音质参数和能量叙词:
  • 呼吸音(Breathiness):说话中的呼吸噪声。
  • 亮度(Brilliance):语音中高频和低频的占比。
  • 响度(Loudness):语音的振幅,亦即话音的能量。
  • 暂停不连续性(Pause Discontinuity):描述声音和静音之间的转换。
  • 音调不连续性(Pitch Discontinuity):描述基本频率的转换。
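上面几类叙词中的音域、响度和语速都可以用很简单的公式计算。下面是一个假设性的示意(输入数据均为虚构,实际系统需要先从音频中提取基频并做分词):

```python
import math

def describe_speech(pitches_hz, samples, words, duration_s):
    """从虚构的语音数据中计算三个常见叙词:
    音域 = 最高与最低基频之差;
    响度 ≈ 波形的均方根能量;
    语速 = 单位时间内的词数。"""
    pitch_range = max(pitches_hz) - min(pitches_hz)
    loudness = math.sqrt(sum(s * s for s in samples) / len(samples))
    speech_rate = words / duration_s
    return {"音域": pitch_range, "响度": round(loudness, 3), "语速": speech_rate}

feats = describe_speech(
    pitches_hz=[180, 220, 260, 300],   # 每帧基频(Hz,虚构)
    samples=[0.1, -0.2, 0.3, -0.1],    # 波形采样(虚构)
    words=12, duration_s=4.0)
print(feats)  # 音域 120 Hz,语速 3 词/秒
```

音调轮廓、尾音下降等叙词则需要对基频序列做回归或分段分析,此处从略。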

面部情感检测

面部表情的检测和处理通过光流、隐马尔可夫模型、神经网络、主动外观模型等多种方法实现。可以组合或融合多种模态(多模态识别,例如面部表情和语音韵律[27]、面部表情和手势[28],或用于多模态数据和元数据分析的带有语音和文本的面部表情),以提供对受试者情绪的更可靠估计。Affectiva 是一家与情感计算直接相关的公司(由 Rosalind Picard 和 Rana El Kaliouby 共同创办),旨在研究面部情感检测的解决方案和软件。

面部表情数据库

情感数据库的建立是一项既困难又耗时的工作。然而,情感数据库是创建识别人类情感的系统的关键步骤。大多数公开的情感数据库只包含摆拍的面部表情,在这样的数据库中,参与者被要求展示不同的基本情绪表情;而在自然表情数据库中,面部表情是自发的。自然表情的发生需要选取恰当的刺激,这样才能引起目标表情的丰富展示。其次,这个过程需要受过训练的工作者为数据做标注,以实现数据库的高度可靠。因为表情及其强度的感知本质上是主观的,专家的标注对验证而言是十分重要的。


研究人员使用三种类型的数据库:峰值表情数据库、中性到峰值的情绪图像序列数据库以及带有情绪注释的视频片段。面部表情数据库是面部表情识别领域的一个重要研究课题,两个广泛使用的数据库是 CK+和 JAFFE。

情感分类

二十世纪六十年代末,保罗·埃克曼(Paul Ekman)对巴布亚新几内亚的法雷部落成员(Fore Tribesmen)进行跨文化研究,提出了一种观点,即情感所对应的面部表情不是由文化决定的,而是普遍存在的。因此,他认为面部表情源自生物本能,能够被可靠地分类。[23]他在1972年正式提出了六种基本情绪[29]:

  • 愤怒
  • 厌恶
  • 恐惧
  • 快乐
  • 悲伤
  • 惊讶


然而,在20世纪90年代,埃克曼扩展了他的基本情绪列表,加入了一系列积极和消极的情绪,它们并非都由面部肌肉编码。[30]新增的情绪是:

  • 娱乐
  • 轻蔑
  • 满足
  • 尴尬
  • 兴奋
  • 内疚
  • 成就感
  • 解脱
  • 满意
  • 感官愉悦
  • 羞耻

面部动作编码系统

心理学家已经构想出一套系统,用来正式分类情绪在面部的物理表达。面部动作编码系统(FACS)由保罗·埃克曼(Paul Ekman)和华莱士·V·弗里森(Wallace V. Friesen)于1978年基于 Carl-Herman Hjortsjö [31]的早期工作创建,其核心概念是动作单元(Action Unit, AU),即一块或多块肌肉的收缩或放松。心理学家根据动作单元提出了以下基本情绪的分类(这里的“+”是指“和”):

情感 动作单元
快乐 6+12
悲伤 1+4+15
惊讶 1+2+5B+26
恐惧 1+2+4+5+20+26
愤怒 4+5+7+23
厌恶 9+15+16
蔑视 R12A+R14A
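上表的映射可以直接写成一个查找表。下面的示意为了简化,忽略了强度标记(如 5B)与左右侧标记(如 R12A),只保留 AU 编号:

```python
# 按上表构建的动作单元(AU)组合到情绪的查找表;
# 为简化起见,强度标记(如 5B)与左右侧标记(如 R12A)被忽略。
AU_TO_EMOTION = {
    frozenset({6, 12}): "快乐",
    frozenset({1, 4, 15}): "悲伤",
    frozenset({1, 2, 5, 26}): "惊讶",
    frozenset({1, 2, 4, 5, 20, 26}): "恐惧",
    frozenset({4, 5, 7, 23}): "愤怒",
    frozenset({9, 15, 16}): "厌恶",
}

def classify_aus(active_units):
    """激活的 AU 集合与某条规则完全匹配时返回对应情绪,否则返回 None。"""
    return AU_TO_EMOTION.get(frozenset(active_units))

print(classify_aus({6, 12}))      # → 快乐
print(classify_aus({9, 15, 16}))  # → 厌恶
```

真实系统中,AU 的检测本身就是一个困难的视觉识别问题,且 AU 组合与情绪并非严格一一对应(见下文“挑战”一节)。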

面部情感检测的挑战

正如计算领域的多数问题一样,在面部情感检测研究中,也有很多障碍需要克服,以便充分释放算法和方法的全部潜力。在几乎所有基于人工智能的检测(语音识别、人脸识别、情感识别)的早期,建模和跟踪的准确性一直是个问题。随着硬件的发展,数据集的完善,新的发现和新的实践的引入,准确性问题逐渐被解决,留下了噪音问题。现有的去噪方法包括邻域平均法、线性高斯平滑法、中值滤波法,或者更新的方法如菌群优化算法。
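以文中提到的中值滤波为例,下面是一个一维中值滤波的纯 Python 最小示意(信号为虚构;图像去噪时同样的思想作用于二维像素窗口):

```python
def median_filter(signal, k=3):
    """一维中值滤波:每个采样点取以其为中心、长度为 k 的窗口的中值,
    边界处窗口自动截断;孤立的脉冲噪声会被有效抑制。"""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = sorted(signal[max(0, i - half): i + half + 1])
        out.append(window[len(window) // 2])
    return out

noisy = [1, 1, 9, 1, 1]      # 带一个脉冲噪声点的虚构信号
print(median_filter(noisy))  # → [1, 1, 1, 1, 1]
```

与邻域平均或高斯平滑相比,中值滤波在去除脉冲噪声的同时能更好地保留边缘,这也是它常被用于预处理的原因。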

其他问题:

  • 事实上,大多数研究所使用的摆拍表情是不自然的,因此训练这些算法可能不适用于自然表情。
  • 缺乏旋转运动的自由度。正面拍摄时检测效果很好,但头部旋转超过 20 度后就会出现问题[32]。
  • 面部表情并不总是与对应的情绪相对应(例如,它们可以摆拍或伪装,或者保持“扑克脸”)。
  • FACS 不包括动态,而动态可以帮助消除歧义(例如,真正快乐的微笑往往与“尝试看起来快乐”的微笑具有不同的动态)。
  • FACS 组合与心理学家最初提出的情绪并不是一一对应的(这种缺乏 1:1 映射的情况也发生在具有同音异义词和许多其他歧义来源的语音识别中,可能通过引入其他信息渠道来缓解)。
  • 通过添加上下文提高了识别的准确性; 然而,添加上下文和其他模式增加了计算成本和复杂性

身体姿势

身体姿态可以有效地检测用户特定的情绪状态,特别是与语音和面部识别结合使用时。根据具体的动作,姿态可以是简单的反射性反应,比如不知道问题答案时耸耸肩;也可以是复杂而有意义的,比如用手语交流。我们可以不借助任何物体或环境而挥手、拍手或招手;也可以借助外物去指向、移动、触摸或持握。计算机应该能够识别这些姿态、分析情景并作出响应,才能有效地用于人机交互。


身体姿态检测已经提出了许多方法[33] 。 一些文献提出了姿势识别的两种不同方法:基于 3D 模型和基于外观[34]。最重要的方法是利用人体关键部位的三维信息,获得手掌位置、关节角度等重要参数。另一方面,基于外观的系统直接使用图像或视频进行解释。手势一直是身体姿态检测方法的共同焦点[34]

生理检测

生理信号可用于检测和分析情绪状态,通常包括脉搏、心率、面部肌肉每分钟收缩的频率等。这个领域的发展势头越来越强劲,已经出现了应用这些技术的实际产品。通常被分析的四个主要生理特征是血容量脉搏、皮肤电反应、面部肌电图和面部颜色。

血容量脉搏

概述

血容量脉搏(BVP)可以通过一种叫做光电容积描记法的技术来测量,该方法产生一个图表来显示通过四肢的血液流动[35]。图中的波峰代表心搏周期中血液被泵到肢体末端。当被试受到惊吓或感到害怕时,他们往往会心跳加速,导致心率加快,从而在光电容积描记图上可以清楚地看到波峰与波谷间的距离变小。被试平静下来后,血液流回末端,心率回归正常。

方法

红外光通过特殊的传感器硬件照射在皮肤上,测量皮肤反射的光量。因为光线被血液中的血红蛋白吸收,所以反射光的数量与 BVP 相关。
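基于这一原理,可以从光电容积描记(PPG)信号中数峰来估计心率。下面是一个高度简化的示意(信号为虚构;实际系统还需要滤波与运动伪迹剔除):

```python
def heart_rate_bpm(ppg, fs):
    """从虚构的光电容积描记(PPG)信号中估计心率:
    把高于信号均值的局部极大值计为一次心搏,再折算成每分钟次数。"""
    mean = sum(ppg) / len(ppg)
    peaks = [i for i in range(1, len(ppg) - 1)
             if ppg[i] > ppg[i - 1] and ppg[i] > ppg[i + 1] and ppg[i] > mean]
    return len(peaks) * 60 * fs / len(ppg)  # 心搏数 / 信号时长(分钟)

# 10 Hz 采样、共 10 秒、每秒一个峰的虚构信号(约 60 次/分)
signal = [1.0 if i % 10 == 5 else 0.0 for i in range(100)]
print(heart_rate_bpm(signal, fs=10))  # → 60.0
```

被试受惊吓时峰间距缩短,由此估计的心率随之上升,这正是正文所描述的现象。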

劣势

确保发出红外光并监测反射光的传感器始终指向同一个末端可能很麻烦,尤其是观察对象经常伸展并重新调整其位置时。 还有其他因素会影响血容量脉冲,因为它是对通过四肢的血流量的量度,如果受试者感觉热,或特别冷,那么他们的身体可能允许更多或更少的血液流向四肢,所有这一切都与受试者的情绪状态无关。

在面部肌电图中,皱眉肌和颧大肌是用于测量电活动的两块主要肌肉。

面部肌电图


面部肌电图是一种通过放大肌肉纤维收缩时产生的微小电脉冲来测量面部肌肉电活动的技术[36]。面部表达大量情绪,然而,有两个主要的面部肌肉群通常被研究来检测情绪: 皱眉肌和颧大肌。皱眉肌将眉毛向下拉成皱眉,因此是对消极的、不愉快的情绪反应的最好反映。当微笑时,颧大肌负责将嘴角向后拉,因此是用于测试积极情绪反应的肌肉。

(图:用 GSR 测得的受试者玩电子游戏时皮肤电阻随时间变化的曲线。图中有几个明显的峰值,说明 GSR 能够很好地区分唤醒与非唤醒状态。例如,游戏开始时通常没有太多刺激性内容,记录到的电阻水平较高,意味着电导率低、唤醒度低;与之形成鲜明对比的是玩家角色在游戏中被杀死时出现的突然低谷,因为此时玩家通常非常紧张。)

皮肤电反应


皮肤电反应(Galvanic skin response,GSR)是一个过时的术语,其所指的更一般的现象称为皮肤电活动(Electrodermal Activity,EDA)。EDA 是皮肤电特性改变的普遍现象。皮肤受交感神经支配,因此测量皮肤的电阻或电导率可以量化自主神经系统交感神经分支的细微变化。当汗腺被激活时,甚至在皮肤出汗之前,EDA 的水平(通常使用电导测量)就可以被捕获,并用于辨别自主神经唤醒的微小变化。受试者唤醒程度越高,皮肤电导反应通常就越强[35]。

皮肤电导反应通常通过放置在皮肤某处的两个小型氯化银电极并在两者之间施加一个小电压来测量。为了最大限度地舒适并减少刺激,电极可以放在手腕、腿或脚上,让双手完全自由地进行日常活动。
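皮肤电导与电阻互为倒数。下面的示意把虚构的电阻读数换算为电导(µS),并用一个简单阈值标记可能的唤醒事件;阈值与数据均为假设,仅用于说明量纲换算:

```python
def conductance_us(resistance_ohm):
    """皮肤电导 = 电阻的倒数,换算为微西门子(µS)。"""
    return 1.0 / resistance_ohm * 1e6

def arousal_events(conductance, threshold):
    """把电导超过阈值的采样点标记为可能的唤醒事件(高度简化的示意)。"""
    return [i for i, c in enumerate(conductance) if c > threshold]

# 虚构的皮肤电阻读数(欧姆):第 3 个读数电阻骤降,对应电导骤升
resistances = [500_000, 480_000, 200_000, 490_000]
cond = [conductance_us(r) for r in resistances]
print(arousal_events(cond, threshold=3.0))  # → [2]
```

这对应上文的描述:电阻骤降(电导骤升)往往出现在受试者紧张或受刺激的时刻。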

面部颜色

概述

人脸表面由大量血管网络支配。 这些血管中的血流变化会在脸上产生可见的颜色变化。 无论面部情绪是否激活面部肌肉,都会发生血流量、血压、血糖水平和其他变化。 此外,面部颜色信号与面部肌肉运动提供的信号无关[37]

方法

方法主要基于面部颜色的变化。Delaunay 三角剖分用于创建三角形局部区域,其中一些三角形定义了嘴和眼睛的内部(巩膜和虹膜),其余三角区域的像素被用来创建特征向量[37]。研究表明,将标准 RGB 颜色空间的像素颜色转换为 oRGB 颜色空间[38]或 LMS 通道等颜色空间,在处理人脸时表现更好[39]。因此,可将上述向量映射到更好的颜色空间,并分解为红-绿和黄-蓝通道,然后使用深度学习方法来找到对应的情绪。
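对立颜色通道的分解可以用常见的简化公式示意如下;注意这并非文中 oRGB 或 LMS 变换的精确实现,系数只是常用的近似:

```python
def opponent_channels(r, g, b):
    """把归一化 RGB 像素分解为近似的对立颜色通道:
    红-绿通道 rg 与黄-蓝通道 yb。系数为常用近似,
    并非文中 oRGB 或 LMS 变换的精确实现。"""
    rg = r - g
    yb = 0.5 * (r + g) - b
    return rg, yb

# 偏红的皮肤像素(虚构数值,0–1 归一化):两个通道都应为正
rg, yb = opponent_channels(0.8, 0.5, 0.4)
print(rg > 0, yb > 0)  # → True True
```

血流变化引起的面色改变主要体现在这类红-绿、黄-蓝对立通道上,因此把特征向量投影到对立颜色空间后再送入深度学习模型。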

视觉审美

美学,在艺术和摄影界,是指美的本质和欣赏原则。 对美和其他审美特质的判断是一项高度主观的任务。 宾夕法尼亚州立大学的计算机科学家将自动评价图像的审美特质视作机器学习的一大挑战,他们将一个同行评级的在线照片分享网站作为数据源[40],从中抽取了特定的视觉特征,可以区分审美上的愉悦与否。

潜在应用

教育

情感影响学习者的学习状态。利用情感计算技术,计算机可以通过学习者的面部表情识别来判断学习者的情感和学习状态。在教学中,教师可以利用分析结果了解学生的学习和接受能力,制定合理的教学计划。同时关注学生的内心感受,有利于学生的心理健康。特别是在远程教育中,由于时间和空间的分离,师生之间缺乏双向交流的情感激励。没有了传统课堂学习带来的氛围,学生很容易感到无聊,影响学习效果。将情感计算应用于远程教育系统可以有效地改善这种状况[41]

医疗

社会机器人,以及越来越多的机器人在医疗保健中的应用都受益于情感意识,因为它们可以更好地判断用户和病人的情感状态,并适当地改变他们的行为。在人口老龄化日益严重和缺乏年轻工人的国家,这一点尤为重要[42]

情感计算也被应用于供孤独症患者使用的交流技术的开发[43]。文本中的情感成分也越来越受到关注,特别是它在所谓的情感互联网(emotional or emotive Internet)中的作用[44]。

电子游戏

情感型电子游戏可以通过生物反馈设备获取玩家的情绪状态[45]。一种特别简单的生物反馈形式是通过游戏手柄测量按钮被按下的压力:研究表明这与玩家的唤醒度水平密切相关[46];而另一个极端则是脑机接口[47][48]。情感游戏已被用于医学研究,以支持自闭症儿童的情感发展[49]。


其他应用

其他潜在的应用主要围绕社会监控。例如,汽车可以监控所有乘客的情绪并采取额外的安全措施,比如在检测到司机生气时向其他车辆发出警报[50]。情感计算在人机交互方面也有潜在应用,比如让用户看到自己表现的情感镜子、在发送愤怒邮件之前发出警告的情绪监控代理,甚至根据情绪选择曲目的音乐播放器[51]。

罗马尼亚研究人员尼库·塞贝博士在一次采访中提出的一个想法是:分析一个人使用某种产品(他以冰淇淋为例)时的面部表情[52]。公司随后便能利用这种分析来推断其产品是否会受到相应市场的欢迎。


人们也可以利用情感状态识别来判断电视广告的效果:对观看者进行实时录像,随后研究其面部表情;将大量受试者的结果进行平均后,就能知道这个广告(或电影)是否达到了预期效果,以及观众最感兴趣的元素是什么。

认知主义与交互方法之争

在人机交互领域,罗莎琳德·皮卡德关于情绪的认知主义或“信息模型”概念,受到了柯尔斯滕·博纳等人所采取的“后认知主义”或“交互式”实用主义方法的批评和对比,后者认为情感本质上是社会性的[53]。


皮卡德的研究重点是人机交互,她研究情感计算的目标是“赋予计算机识别、表达、在某些情况下‘拥有’情感的能力”[6]。相比之下,交互式方法旨在帮助“人们理解和体验他们自己的情绪”[54],并改善以电脑为媒介的人际沟通。它并不一定要把情感映射为可供机器解释的客观数学模型,而是让人类以开放的方式理解彼此的情感表达,这些表达可能是模糊的、主观的、依赖于语境的[54]。


皮卡德的批评者将她的情感概念描述为“客观的、内在的、私人的和机械的”。他们认为她把情绪简化为发生在身体内部的一个离散的心理信号,这个信号可以被测量,并且是认知的输入,削弱了情绪体验的复杂性[54]。


交互方法断言,虽然情绪具有生物物理层面,但它是“以文化为基础的、动态体验的,并在某种程度上构建于行动和互动中”[54]。换句话说,它认为“情感是一种通过我们的互动体验到的社会和文化产物”[55][54][56]。

另外参阅


其他资源

  • Hudlicka, Eva (2003). "To feel or not to feel: The role of affect in human–computer interaction". International Journal of Human–Computer Studies. 59 (1–2): 1–32. CiteSeerX 10.1.1.180.6429. doi:10.1016/s1071-5819(03)00047-8.
  • Scherer, Klaus R; Bänziger, Tanja; Roesch, Etienne B (2010). A Blueprint for Affective Computing: A Sourcebook and Manual. Oxford: Oxford University Press. 


其他链接

  • Affective Computing Research Group at the MIT Media Laboratory
  • Computational Emotion Group at USC
  • Emotion Processing Unit – EPU
  • Emotive Computing Group at the University of Memphis
  • 2011 International Conference on Affective Computing and Intelligent Interaction
  • Brain, Body and Bytes: Psychophysiological User Interaction CHI 2010 Workshop (10–15, April 2010)
  • IEEE Transactions on Affective Computing (TAC)
  • openSMILE: popular state-of-the-art open-source toolkit for large-scale feature extraction for affect recognition and computational paralinguistics






This page was moved from wikipedia:en:Affective computing. Its edit history can be viewed at 情感计算/edithistory



参考文献

  1. Tao, Jianhua; Tieniu Tan (2005). "Affective Computing: A Review". Affective Computing and Intelligent Interaction. Vol. LNCS 3784. Springer. pp. 981–995. doi:10.1007/11573548.
  2. James, William (1884). "What Is Emotion". Mind. 9 (34): 188–205. doi:10.1093/mind/os-IX.34.188. Cited by Tao and Tan.
  3. "Affective Computing" MIT Technical Report #321 (Abstract), 1995
  4. Kleine-Cosack, Christian (October 2006). "Recognition and Simulation of Emotions" (PDF). Archived from the original (PDF) on May 28, 2008. Retrieved May 13, 2008. The introduction of emotion to computer science was done by Pickard (sic) who created the field of affective computing.
  5. Diamond, David (December 2003). "The Love Machine; Building computers that care". Wired. Archived from the original on 18 May 2008. Retrieved May 13, 2008. Rosalind Picard, a genial MIT professor, is the field's godmother; her 1997 book, Affective Computing, triggered an explosion of interest in the emotional side of computers and their users.
  6. 6.0 6.1 Picard, Rosalind (1997). Affective Computing. Cambridge, MA: MIT Press. p. 1. 
  7. Garay, Nestor; Idoia Cearreta; Juan Miguel López; Inmaculada Fajardo (April 2006). "Assistive Technology and Affective Mediation" (PDF). Human Technology. 2 (1): 55–83. doi:10.17011/ht/urn.2006159. Archived (PDF) from the original on 28 May 2008. Retrieved 2008-05-12.
  8. Heise, David (2004). "Enculturating agents with expressive role behavior". Agent Culture: Human-Agent Interaction in a Mutlicultural World. Lawrence Erlbaum Associates. pp. 127–142. 
  9. Restak, Richard (2006-12-17). "Mind Over Matter". The Washington Post. Retrieved 2008-05-13.
  10. Aleix, and Shichuan Du, Martinez (2012). "A model of the perception of facial expressions of emotion by humans: Research overview and perspectives" (PDF). The Journal of Machine Learning Research. 13 (1): 1589–1608.
  11. Breazeal, C. and Aryananda, L. Recognition of affective communicative intent in robot-directed speech. Autonomous Robots 12 1, 2002. pp. 83–104.
  12. 12.0 12.1 12.2 Dellaert, F., Polizin, t., and Waibel, A., Recognizing Emotion in Speech", In Proc. Of ICSLP 1996, Philadelphia, PA, pp.1970–1973, 1996
  13. Roy, D.; Pentland, A. (1996-10-01). Automatic spoken affect classification and analysis. pp. 363–367. doi:10.1109/AFGR.1996.557292. ISBN 978-0-8186-7713-7. 
  14. Lee, C.M.; Narayanan, S.; Pieraccini, R., Recognition of Negative Emotion in the Human Speech Signals, Workshop on Auto. Speech Recognition and Understanding, Dec 2001
  15. Neiberg, D; Elenius, K; Laskowski, K (2006). "Emotion recognition in spontaneous speech using GMMs" (PDF). Proceedings of Interspeech.
  16. Yacoub, Sherif; Simske, Steve; Lin, Xiaofan; Burns, John (2003). "Recognition of Emotions in Interactive Voice Response Systems". Proceedings of Eurospeech: 729–732. CiteSeerX 10.1.1.420.8158.
  17. Hudlicka 2003, p. 24
  18. Hudlicka 2003, p. 25
  19. Charles Osgood; William May; Murray Miron (1975). Cross-Cultural Universals of Affective Meaning. Univ. of Illinois Press. ISBN 978-94-007-5069-2. https://archive.org/details/crossculturaluni00osgo. 
  20. 20.0 20.1 Scherer, Bänziger & Roesch 2010, p. 241.
  21. "Gaussian Mixture Model". Connexions – Sharing Knowledge and Building Communities. Retrieved 10 March 2011.
  22. S.E. Khoruzhnikov; et al. (2014). "Extended speech emotion recognition and prediction". Scientific and Technical Journal of Information Technologies, Mechanics and Optics. 14 (6): 137.
  23. 23.0 23.1 Ekman, P. & Friesen, W. V (1969). The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica, 1, 49–98.
  24. 24.0 24.1 Steidl, Stefan (5 March 2011). "FAU Aibo Emotion Corpus". Pattern Recognition Lab.
  25. 25.0 25.1 Scherer, Bänziger & Roesch 2010, p. 243
  26. Singh, Premjeet; Saha, Goutam; Sahidullah, Md (2021). "Non-linear frequency warping using constant-Q transformation for speech emotion recognition". 2021 International Conference on Computer Communication and Informatics (ICCCI). pp. 1–4. arXiv:2102.04029. doi:10.1109/ICCCI50826.2021.9402569. ISBN 978-1-7281-5875-4. 
  27. Caridakis, G.; Malatesta, L.; Kessous, L.; Amir, N.; Raouzaiou, A.; Karpouzis, K. (November 2–4, 2006). Modeling naturalistic affective states via facial and vocal expressions recognition. International Conference on Multimodal Interfaces (ICMI'06). Banff, Alberta, Canada.
  28. Balomenos, T.; Raouzaiou, A.; Ioannou, S.; Drosopoulos, A.; Karpouzis, K.; Kollias, S. (2004). "Emotion Analysis in Man-Machine Interaction Systems". In Bengio, Samy; Bourlard, Herve. Machine Learning for Multimodal Interaction. Lecture Notes in Computer Science. 3361. Springer-Verlag. pp. 318–328. http://www.image.ece.ntua.gr/php/savepaper.php?id=334. 
  29. Ekman, Paul (1972). Cole, J. (ed.). Universals and Cultural Differences in Facial Expression of Emotion. Nebraska Symposium on Motivation. Lincoln, Nebraska: University of Nebraska Press. pp. 207–283.
  30. Ekman, Paul (1999). "Basic Emotions". In Dalgleish, T; Power, M. Handbook of Cognition and Emotion. Sussex, UK: John Wiley & Sons. http://www.paulekman.com/wp-content/uploads/2009/02/Basic-Emotions.pdf. .
  31. "Facial Action Coding System (FACS) and the FACS Manual". A Human Face. 互联网档案馆存档,存档日期 October 19, 2013. Retrieved 21 March 2011.
  32. Williams, Mark. "Better Face-Recognition Software – Technology Review". Technology Review: The Authority on the Future of Technology. Retrieved 21 March 2011.
  33. J. K. Aggarwal, Q. Cai, Human Motion Analysis: A Review, Computer Vision and Image Understanding, Vol. 73, No. 3, 1999
  34. 34.0 34.1 Pavlovic, Vladimir I.; Sharma, Rajeev; Huang, Thomas S. (1997). "Visual Interpretation of Hand Gestures for Human–Computer Interaction: A Review" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 19 (7): 677–695. doi:10.1109/34.598226.
  35. 35.0 35.1 Picard, Rosalind (1998). Affective Computing. MIT.
  36. Larsen JT, Norris CJ, Cacioppo JT, "Effects of positive and negative affect on electromyographic activity over zygomaticus major and corrugator supercilii", (September 2003)
  37. 37.0 37.1 Carlos F. Benitez-Quiroz, Ramprakash Srinivasan, Aleix M. Martinez, Facial color is an efficient mechanism to visually transmit emotion, PNAS. April 3, 2018 115 (14) 3581–3586; first published March 19, 2018 https://doi.org/10.1073/pnas.1716084115.
  38. M. Bratkova, S. Boulos, and P. Shirley, oRGB: a practical opponent color space for computer graphics, IEEE Computer Graphics and Applications, 29(1):42–55, 2009.
  39. Hadas Shahar, Hagit Hel-Or, Micro Expression Classification using Facial Color and Deep Learning Methods, The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 0–0.
  40. Ritendra Datta, Dhiraj Joshi, Jia Li and James Z. Wang, Studying Aesthetics in Photographic Images Using a Computational Approach, Lecture Notes in Computer Science, vol. 3953, Proceedings of the European Conference on Computer Vision, Part III, pp. 288–301, Graz, Austria, May 2006.
  41. http://www.learntechlib.org/p/173785/
  42. Yonck, Richard (2017). Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence. New York: Arcade Publishing. pp. 150–153. ISBN 9781628727333. OCLC 956349457. 
  43. Projects in Affective Computing
  44. Shanahan, James; Qu, Yan; Wiebe, Janyce (2006). Computing Attitude and Affect in Text: Theory and Applications. Dordrecht: Springer Science & Business Media. p. 94.
  45. Gilleade, Kiel Mark; Dix, Alan; Allanson, Jen (2005). Affective Videogames and Modes of Affective Gaming: Assist Me, Challenge Me, Emote Me (PDF). Proc. DiGRA Conf. Archived from the original (PDF) on 2015-04-06. Retrieved 2016-12-10.
  46. Sykes, Jonathan; Brown, Simon (2003). Affective gaming: Measuring emotion through the gamepad. CHI '03 Extended Abstracts on Human Factors in Computing Systems. CiteSeerX 10.1.1.92.2123. doi:10.1145/765891.765957. ISBN 1581136374.
  47. Nijholt, Anton; Plass-Oude Bos, Danny; Reuderink, Boris (2009). "Turning shortcomings into challenges: Brain–computer interfaces for games" (PDF). Entertainment Computing. 1 (2): 85–94. Bibcode:2009itie.conf..153N. doi:10.1016/j.entcom.2009.09.007.
  48. Reuderink, Boris; Nijholt, Anton; Poel, Mannes (2009). Affective Pacman: A Frustrating Game for Brain–Computer Interface Experiments. Intelligent Technologies for Interactive Entertainment (INTETAIN). pp. 221–227. doi:10.1007/978-3-642-02315-6_23. ISBN 978-3-642-02314-9.
  49. Khandaker, M (2009). "Designing affective video games to support the social-emotional development of teenagers with autism spectrum disorders". Studies in Health Technology and Informatics. 144: 37–9. PMID 19592726.
  50. "In-Car Facial Recognition Detects Angry Drivers To Prevent Road Rage". Gizmodo. 30 August 2018.
  51. Janssen, Joris H.; van den Broek, Egon L. (July 2012). "Tune in to Your Emotions: A Robust Personalized Affective Music Player". User Modeling and User-Adapted Interaction. 22 (3): 255–279. doi:10.1007/s11257-011-9107-7.
  52. "Mona Lisa: Smiling? Computer Scientists Develop Software That Evaluates Facial Expressions". ScienceDaily. 1 August 2006. Archived from the original on 19 October 2007.
  53. Battarbee, Katja; Koskinen, Ilpo (2005). "Co-experience: user experience as interaction" (PDF). CoDesign. 1 (1): 5–18. CiteSeerX 10.1.1.294.9178. doi:10.1080/15710880412331289917. S2CID 15296236.
  54. 54.0 54.1 54.2 54.3 54.4 54.5 Boehner, Kirsten; DePaula, Rogerio; Dourish, Paul; Sengers, Phoebe (2007). "How emotion is made and measured". International Journal of Human–Computer Studies. 65 (4): 275–291. doi:10.1016/j.ijhcs.2006.11.016.
  55. Boehner, Kirsten; DePaula, Rogerio; Dourish, Paul; Sengers, Phoebe (2005). "Affection: From Information to Interaction". Proceedings of the Aarhus Decennial Conference on Critical Computing: 59–68.
  56. Hook, Kristina; Staahl, Anna; Sundstrom, Petra; Laaksolahti, Jarmo (2008). "Interactional empowerment" (PDF). Proc. CHI: 647–656.








本中文词条由11编译,CecileLi、 栗子CUGB审校,糖糖编辑,如有问题,欢迎在讨论页面留言。


本词条内容源自wikipedia及公开资料,遵守 CC3.0协议。