第30行: |
第30行: |
| One of the motivations for the research is the ability to give machines emotional intelligence, including to simulate empathy. The machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions. | | One of the motivations for the research is the ability to give machines emotional intelligence, including to simulate empathy. The machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions. |
| | | |
− | '''情感计算''' '''Affective computing ('''也被称为人工情感智能或情感AI)是基于系统和设备的研究和开发来识别、理解、处理和模拟人的情感。这是一个融合计算机科学、心理学和认知科学的跨学科领域<ref name="TaoTan" />。虽然该领域的一些核心思想可以追溯到早期对情感<ref name=":0" />的哲学研究,但计算机科学的现代分支研究起源于罗莎琳德·皮卡德1995年关于情感计算的论文【3】和她的由麻省理工出版社【5】【6】出版的《情感计算》【4】。这项研究的动机之一是赋予机器情感智能,包括具备同理心。机器应能够解读人类的情绪状态,适应人类的情绪,并对这些情绪作出适当的反应。 | + | '''情感计算'''('''Affective computing''',也被称为人工情感智能或情感AI)是研究与开发能够识别、解释、处理和模拟人类情感的系统与设备的领域。这是一个融合'''计算机科学'''、'''心理学'''和'''认知科学'''的跨学科领域<ref name="TaoTan" />。虽然该领域的一些核心思想可以追溯到早期对情感<ref name=":0" />的哲学研究,但计算机科学中这一更为现代的分支起源于罗莎琳德·皮卡德1995年关于情感计算的论文【3】,以及她由麻省理工学院出版社【5】【6】出版的著作《情感计算》【4】。这项研究的动机之一是赋予机器情感智能,包括模拟'''同理心'''。机器应能够解读人类的情绪状态,适应人类的情绪,并对这些情绪作出适当的反应。 |
| | | |
| == Areas == | | == Areas == |
第58行: |
第58行: |
| Detecting emotional information usually begins with passive sensors that capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. For example, a video camera might capture facial expressions, body posture, and gestures, while a microphone might capture speech. Other sensors detect emotional cues by directly measuring physiological data, such as skin temperature and galvanic resistance. | | Detecting emotional information usually begins with passive sensors that capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. For example, a video camera might capture facial expressions, body posture, and gestures, while a microphone might capture speech. Other sensors detect emotional cues by directly measuring physiological data, such as skin temperature and galvanic resistance. |
| | | |
− | 检测情感信息通常从被动式传感器开始,这些传感器捕捉关于用户身体状态或行为的数据,而不解释输入信息。收集的数据类似于人类用来感知他人情感的线索。例如,摄像机可以捕捉面部表情、身体姿势和手势,而麦克风可以捕捉语音。一些传感器可以通过直接测量生理数据(如皮肤温度和电流电阻)来探测情感信号【7】。
| + | 检测情感信息通常从被动式'''传感器'''开始,这些传感器捕捉关于用户身体状态或行为的数据,而不解释输入信息。收集的数据类似于人类用来感知他人情感的线索。例如,摄像机可以捕捉面部表情、身体姿势和手势,而麦克风可以捕捉语音。其他传感器则通过直接测量生理数据(如皮肤温度和皮肤电阻)来探测情感信号【7】。 |
| | | |
| Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. This is done using machine learning techniques that process different [[Modality (human–computer interaction)|modalities]], such as [[speech recognition]], [[natural language processing]], or [[face recognition|facial expression detection]]. The goal of most of these techniques is to produce labels that would match the labels a human perceiver would give in the same situation: For example, if a person makes a facial expression furrowing their brow, then the computer vision system might be taught to label their face as appearing "confused" or as "concentrating" or "slightly negative" (as opposed to positive, which it might say if they were smiling in a happy-appearing way). These labels may or may not correspond to what the person is actually feeling. | | Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. This is done using machine learning techniques that process different [[Modality (human–computer interaction)|modalities]], such as [[speech recognition]], [[natural language processing]], or [[face recognition|facial expression detection]]. The goal of most of these techniques is to produce labels that would match the labels a human perceiver would give in the same situation: For example, if a person makes a facial expression furrowing their brow, then the computer vision system might be taught to label their face as appearing "confused" or as "concentrating" or "slightly negative" (as opposed to positive, which it might say if they were smiling in a happy-appearing way). These labels may or may not correspond to what the person is actually feeling. |
第64行: |
第64行: |
| Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. This is done using machine learning techniques that process different modalities, such as speech recognition, natural language processing, or facial expression detection. The goal of most of these techniques is to produce labels that would match the labels a human perceiver would give in the same situation: For example, if a person makes a facial expression furrowing their brow, then the computer vision system might be taught to label their face as appearing "confused" or as "concentrating" or "slightly negative" (as opposed to positive, which it might say if they were smiling in a happy-appearing way). These labels may or may not correspond to what the person is actually feeling. | | Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. This is done using machine learning techniques that process different modalities, such as speech recognition, natural language processing, or facial expression detection. The goal of most of these techniques is to produce labels that would match the labels a human perceiver would give in the same situation: For example, if a person makes a facial expression furrowing their brow, then the computer vision system might be taught to label their face as appearing "confused" or as "concentrating" or "slightly negative" (as opposed to positive, which it might say if they were smiling in a happy-appearing way). These labels may or may not correspond to what the person is actually feeling. |
| | | |
− | 识别情感信息需要从收集到的数据中提取出有意义的模式。这通常要使用多模态机器学习技术,如语音识别、自然语言处理或面部表情检测等。大多数这些技术的目标是给出与人类感情相一致的标签: 例如,如果一个人做出皱眉的面部表情,那么计算机视觉系统可能会被教导将他们的脸标记为“困惑”、“专注”或“轻微消极”(与象征着积极的快乐微笑相反)。这些标签可能与人们的真实感受相符,也可能不相符。
| + | 识别情感信息需要从收集到的数据中提取出有意义的模式。这通常使用处理不同'''模态'''的机器学习技术来完成,如'''语音识别'''、'''自然语言处理'''或'''面部表情检测'''等。这些技术大多旨在给出与人类观察者在相同情境下所给标签相一致的标签:例如,如果一个人做出皱眉的面部表情,那么计算机视觉系统可能会被训练为将其面部标记为“困惑”、“专注”或“轻微消极”(而不是“积极”;如果此人面带愉快的微笑,系统则可能给出“积极”的标签)。这些标签可能与人们的真实感受相符,也可能不相符。 |
| | | |
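The sketch below is only a toy illustration of this labeling step, not any published system: it assumes an upstream vision module has already produced two scalar facial features (brow-furrow and smile intensity), and the feature names and thresholds are invented for the example.

<syntaxhighlight lang="python">
# Toy labeling step: map hypothetical facial-feature intensities in [0, 1]
# to the kind of perceiver-style label described above. Thresholds are illustrative.
def label_expression(brow_furrow: float, smile: float) -> str:
    if smile > 0.6:
        return "positive"        # happy-appearing smile
    if brow_furrow > 0.6:
        return "confused / concentrating / slightly negative"
    return "neutral"

print(label_expression(brow_furrow=0.8, smile=0.1))  # -> "confused / concentrating / slightly negative"
</syntaxhighlight>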
| ===Emotion in machines=== | | ===Emotion in machines=== |
第146行: |
第146行: |
| broad enough to fit every need for its application, as well as the selection of a successful classifier which will allow for quick and accurate emotion identification. | | broad enough to fit every need for its application, as well as the selection of a successful classifier which will allow for quick and accurate emotion identification. |
| | | |
− | 语音/文本的情感检测程需要创建可靠的数据库、知识库或者向量空间模型【19】,为了适应各种应用,这些数据库的范围需要足够广泛;同时还需要选择一个又快又准的分类器,这样才能快速准确地识别情感。 | + | 语音/文本的情感检测过程需要创建可靠的'''数据库'''、'''知识库'''或者'''向量空间模型'''【19】;为了适应各种应用,这些数据库的覆盖范围需要足够广泛。同时,还需要选择一个有效的分类器,以便快速准确地识别情感。 |
| | | |
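As a hedged sketch of the vector-space-model idea (toy sentences and labels invented for the example; a real system would need the broad annotated databases discussed above), scikit-learn can be used to build a minimal text-emotion classifier:

<syntaxhighlight lang="python">
# Minimal vector space model for text emotion detection (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts  = ["I am so happy today", "this is wonderful news",
          "I feel terrible and sad", "this makes me angry"]
labels = ["joy", "joy", "sadness", "anger"]          # toy emotion labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)                             # TF-IDF vector space + linear classifier
print(model.predict(["I am really glad you came"]))  # e.g. ['joy']
</syntaxhighlight>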
| Currently, the most frequently used classifiers are linear discriminant classifiers (LDC), k-nearest neighbor (k-NN), Gaussian mixture model (GMM), support vector machines (SVM), artificial neural networks (ANN), decision tree algorithms and hidden Markov models (HMMs).<ref name="Scherer-2010-p241">{{harvnb|Scherer|Bänziger|Roesch|2010|p=241}}</ref> Various studies showed that choosing the appropriate classifier can significantly enhance the overall performance of the system.<ref name="Hudlicka-2003-p24"/> The list below gives a brief description of each algorithm: | | Currently, the most frequently used classifiers are linear discriminant classifiers (LDC), k-nearest neighbor (k-NN), Gaussian mixture model (GMM), support vector machines (SVM), artificial neural networks (ANN), decision tree algorithms and hidden Markov models (HMMs).<ref name="Scherer-2010-p241">{{harvnb|Scherer|Bänziger|Roesch|2010|p=241}}</ref> Various studies showed that choosing the appropriate classifier can significantly enhance the overall performance of the system.<ref name="Hudlicka-2003-p24"/> The list below gives a brief description of each algorithm: |
第152行: |
第152行: |
| Currently, the most frequently used classifiers are linear discriminant classifiers (LDC), k-nearest neighbor (k-NN), Gaussian mixture model (GMM), support vector machines (SVM), artificial neural networks (ANN), decision tree algorithms and hidden Markov models (HMMs). Various studies showed that choosing the appropriate classifier can significantly enhance the overall performance of the system. The list below gives a brief description of each algorithm: | | Currently, the most frequently used classifiers are linear discriminant classifiers (LDC), k-nearest neighbor (k-NN), Gaussian mixture model (GMM), support vector machines (SVM), artificial neural networks (ANN), decision tree algorithms and hidden Markov models (HMMs). Various studies showed that choosing the appropriate classifier can significantly enhance the overall performance of the system. The list below gives a brief description of each algorithm: |
| | | |
− | 目前常用的分类器有线性判别分类器(LDC)、 k- 近邻分类器(k-NN)、高斯混合模型(GMM)、支持向量机(SVM)、人工神经网络(ANN)、决策树算法和隐马尔可夫模型(HMMs)【20】。各种研究表明,选择合适的分类器可以显著提高系统的整体性能。下面的列表给出了每个算法的简要描述:
| + | 目前常用的分类器有'''线性判别分类器'''(LDC)、'''k-近邻分类器'''(k-NN)、'''高斯混合模型'''(GMM)、'''支持向量机'''(SVM)、'''人工神经网络'''(ANN)、'''决策树算法'''和'''隐马尔可夫模型'''(HMMs)【20】。各种研究表明,选择合适的分类器可以显著提高系统的整体性能。下面的列表给出了每个算法的简要描述: |
| | | |
| * [[Linear classifier|LDC]] – Classification happens based on the value obtained from the linear combination of the feature values, which are usually provided in the form of vector features. | | * [[Linear classifier|LDC]] – Classification happens based on the value obtained from the linear combination of the feature values, which are usually provided in the form of vector features. |
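For instance, a minimal numeric illustration of the LDC rule (the weights and features below are arbitrary values chosen for the example, not a trained model):

<syntaxhighlight lang="python">
# LDC idea: the decision is based on a linear combination w·x + b of the
# feature vector x. Values are made up for illustration.
import numpy as np

w = np.array([0.8, -0.5, 0.3])   # weights for three hypothetical speech features
b = -0.1                         # bias term
x = np.array([0.9, 0.2, 0.4])    # one feature vector (e.g. pitch, energy, speech rate)

score = float(w @ x + b)
label = "class A" if score > 0 else "class B"   # two-class decision by sign
print(score, label)              # 0.64 -> class A
</syntaxhighlight>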
第201行: |
第201行: |
| However, for real life application, naturalistic data is preferred. A naturalistic database can be produced by observation and analysis of subjects in their natural context. Ultimately, such database should allow the system to recognize emotions based on their context as well as work out the goals and outcomes of the interaction. The nature of this type of data allows for authentic real life implementation, due to the fact it describes states naturally occurring during the human–computer interaction (HCI). | | However, for real life application, naturalistic data is preferred. A naturalistic database can be produced by observation and analysis of subjects in their natural context. Ultimately, such database should allow the system to recognize emotions based on their context as well as work out the goals and outcomes of the interaction. The nature of this type of data allows for authentic real life implementation, due to the fact it describes states naturally occurring during the human–computer interaction (HCI). |
| | | |
− | 然而,对于现实生活应用,自然数据是首选的。自然数据库可以通过在自然环境中观察和分析对象来产生。最终,自然数据库会帮助系统识别情境下的情绪,也可以用来发现交互的目标和结果。由于这类数据的自然性,可以真实自然地反映人机交互下的情感状态,也就可以应用于现实生活中的系统实现。
| + | 然而,对于现实生活中的应用,自然数据是首选。自然数据库可以通过在自然环境中观察和分析对象来建立。最终,这类数据库应能帮助系统根据情境识别情绪,并推断交互的目标和结果。由于这类数据描述的是'''人机交互'''(HCI)过程中自然出现的状态,因此可以真实地应用于现实生活中的系统实现。 |
| | | |
| Despite the numerous advantages which naturalistic data has over acted data, it is difficult to obtain and usually has low emotional intensity. Moreover, data obtained in a natural context has lower signal quality, due to surroundings noise and distance of the subjects from the microphone. The first attempt to produce such database was the FAU Aibo Emotion Corpus for CEICES (Combining Efforts for Improving Automatic Classification of Emotional User States), which was developed based on a realistic context of children (age 10–13) playing with Sony's Aibo robot pet.<ref name="Steidl-2011">{{cite web | last = Steidl | first = Stefan | title = FAU Aibo Emotion Corpus | publisher = Pattern Recognition Lab | date = 5 March 2011 | url = http://www5.cs.fau.de/de/mitarbeiter/steidl-stefan/fau-aibo-emotion-corpus/ }}</ref><ref name="Scherer-2010-p243">{{harvnb|Scherer|Bänziger|Roesch|2010|p=243}}</ref> Likewise, producing one standard database for all emotional research would provide a method of evaluating and comparing different affect recognition systems. | | Despite the numerous advantages which naturalistic data has over acted data, it is difficult to obtain and usually has low emotional intensity. Moreover, data obtained in a natural context has lower signal quality, due to surroundings noise and distance of the subjects from the microphone. The first attempt to produce such database was the FAU Aibo Emotion Corpus for CEICES (Combining Efforts for Improving Automatic Classification of Emotional User States), which was developed based on a realistic context of children (age 10–13) playing with Sony's Aibo robot pet.<ref name="Steidl-2011">{{cite web | last = Steidl | first = Stefan | title = FAU Aibo Emotion Corpus | publisher = Pattern Recognition Lab | date = 5 March 2011 | url = http://www5.cs.fau.de/de/mitarbeiter/steidl-stefan/fau-aibo-emotion-corpus/ }}</ref><ref name="Scherer-2010-p243">{{harvnb|Scherer|Bänziger|Roesch|2010|p=243}}</ref> Likewise, producing one standard database for all emotional research would provide a method of evaluating and comparing different affect recognition systems. |
第207行: |
第207行: |
| Despite the numerous advantages which naturalistic data has over acted data, it is difficult to obtain and usually has low emotional intensity. Moreover, data obtained in a natural context has lower signal quality, due to surroundings noise and distance of the subjects from the microphone. The first attempt to produce such database was the FAU Aibo Emotion Corpus for CEICES (Combining Efforts for Improving Automatic Classification of Emotional User States), which was developed based on a realistic context of children (age 10–13) playing with Sony's Aibo robot pet. Likewise, producing one standard database for all emotional research would provide a method of evaluating and comparing different affect recognition systems. | | Despite the numerous advantages which naturalistic data has over acted data, it is difficult to obtain and usually has low emotional intensity. Moreover, data obtained in a natural context has lower signal quality, due to surroundings noise and distance of the subjects from the microphone. The first attempt to produce such database was the FAU Aibo Emotion Corpus for CEICES (Combining Efforts for Improving Automatic Classification of Emotional User States), which was developed based on a realistic context of children (age 10–13) playing with Sony's Aibo robot pet. Likewise, producing one standard database for all emotional research would provide a method of evaluating and comparing different affect recognition systems. |
| | | |
− | 尽管自然数据比表演数据具有许多优势,但很难获得并且通常情绪强度较低。此外,由于环境噪声的存在、人员与麦克风的距离较远,在自然环境中获得的数据具有较低的信号质量。埃尔朗根-纽约堡大学的AIBO情感资料库(FAU Aibo Emotion Corpus for CEICES, CEICES: Combining Efforts for Improving Automatic Classification of Emotional User States)是建立自然情感数据库的首次尝试,其采集基于10—13岁儿童与索尼AIBO宠物机器人玩耍的真实情境。同样,在情感研究领域,建立任何一个标准数据库,都需要提供评估方法,以比较不同情感识别系统的差异。 | + | 尽管自然数据比表演数据具有许多优势,但其难以获得,并且通常情绪强度较低。此外,由于环境噪声以及被试与麦克风距离较远,在自然环境中获得的数据信号质量较低。埃尔朗根-纽伦堡大学的AIBO情感语料库(FAU Aibo Emotion Corpus for CEICES,CEICES: Combining Efforts for Improving Automatic Classification of Emotional User States)是建立'''自然情感数据库'''的首次尝试,其采集基于10至13岁儿童与索尼AIBO宠物机器人玩耍的真实情境。同样,为所有情感研究建立一个统一的标准数据库,将为评估和比较不同的情感识别系统提供一种方法。 |
| | | |
| ====Speech descriptors==== | | ====Speech descriptors==== |
第276行: |
第276行: |
| The detection and processing of facial expression are achieved through various methods such as optical flow, hidden Markov models, neural network processing or active appearance models. More than one modalities can be combined or fused (multimodal recognition, e.g. facial expressions and speech prosody, facial expressions and hand gestures, or facial expressions with speech and text for multimodal data and metadata analysis) to provide a more robust estimation of the subject's emotional state. Affectiva is a company (co-founded by Rosalind Picard and Rana El Kaliouby) directly related to affective computing and aims at investigating solutions and software for facial affect detection. | | The detection and processing of facial expression are achieved through various methods such as optical flow, hidden Markov models, neural network processing or active appearance models. More than one modalities can be combined or fused (multimodal recognition, e.g. facial expressions and speech prosody, facial expressions and hand gestures, or facial expressions with speech and text for multimodal data and metadata analysis) to provide a more robust estimation of the subject's emotional state. Affectiva is a company (co-founded by Rosalind Picard and Rana El Kaliouby) directly related to affective computing and aims at investigating solutions and software for facial affect detection. |
| | | |
− | 面部表情的检测和处理通过[[wikipedia:Optical_flow|光流]]、隐马尔可夫模型、神经网络处理或主动外观模型等多种方法实现。可以组合或融合多种模态(多模态识别,例如面部表情和语音韵律【27】、面部表情和手势【28】,或用于多模态数据和元数据分析的带有语音和文本的面部表情),以提供对受试者情绪的更可靠估计。Affectiva 是一家与情感计算直接相关的公司(由 Rosalind Picard 和 Rana El Kaliouby 共同创办) ,旨在研究面部情感检测的解决方案和软件。 | + | 面部表情的检测和处理通过[[wikipedia:Optical_flow|'''光流''']]、'''隐马尔可夫模型'''、'''神经网络'''或'''主动外观模型'''等多种方法实现。可以组合或融合多种模态(即多模态识别,例如面部表情与语音韵律【27】、面部表情与手势【28】,或将面部表情与语音和文本结合用于多模态数据和元数据分析),以便对受试者的情绪状态做出更可靠的估计。Affectiva 是一家与情感计算直接相关的公司(由 Rosalind Picard 和 Rana El Kaliouby 共同创办),旨在研究面部情感检测的解决方案和软件。 |
| | | |
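A minimal late-fusion sketch of the multimodal recognition idea above (the per-modality probability vectors are hard-coded stand-ins for the outputs of a facial-expression model and a prosody model; the fusion weights are illustrative):

<syntaxhighlight lang="python">
# Late fusion: combine per-modality emotion probabilities into one estimate.
import numpy as np

emotions = ["anger", "happiness", "sadness", "neutral"]
p_face   = np.array([0.10, 0.60, 0.05, 0.25])   # stand-in for a facial-expression model
p_speech = np.array([0.20, 0.40, 0.10, 0.30])   # stand-in for a speech-prosody model

w_face, w_speech = 0.6, 0.4                     # modality weights (illustrative)
p_fused = w_face * p_face + w_speech * p_speech
print(emotions[int(np.argmax(p_fused))])        # fused estimate: "happiness"
</syntaxhighlight>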
| ==== Facial expression databases ==== | | ==== Facial expression databases ==== |
第436行: |
第436行: |
| As with every computational practice, in affect detection by facial processing, some obstacles need to be surpassed, in order to fully unlock the hidden potential of the overall algorithm or method employed. In the early days of almost every kind of AI-based detection (speech recognition, face recognition, affect recognition), the accuracy of modeling and tracking has been an issue. As hardware evolves, as more data are collected and as new discoveries are made and new practices introduced, this lack of accuracy fades, leaving behind noise issues. However, methods for noise removal exist including neighborhood averaging, linear Gaussian smoothing, median filtering, or newer methods such as the Bacterial Foraging Optimization Algorithm.Clever Algorithms. "Bacterial Foraging Optimization Algorithm – Swarm Algorithms – Clever Algorithms" . Clever Algorithms. Retrieved 21 March 2011."Soft Computing". Soft Computing. Retrieved 18 March 2011. | | As with every computational practice, in affect detection by facial processing, some obstacles need to be surpassed, in order to fully unlock the hidden potential of the overall algorithm or method employed. In the early days of almost every kind of AI-based detection (speech recognition, face recognition, affect recognition), the accuracy of modeling and tracking has been an issue. As hardware evolves, as more data are collected and as new discoveries are made and new practices introduced, this lack of accuracy fades, leaving behind noise issues. However, methods for noise removal exist including neighborhood averaging, linear Gaussian smoothing, median filtering, or newer methods such as the Bacterial Foraging Optimization Algorithm.Clever Algorithms. "Bacterial Foraging Optimization Algorithm – Swarm Algorithms – Clever Algorithms" . Clever Algorithms. Retrieved 21 March 2011."Soft Computing". Soft Computing. Retrieved 18 March 2011. |
| | | |
− | 正如计算领域的多数问题一样,在面部情感检测研究中,也有很多障碍需要克服,以便充分释放算法和方法的全部潜力。在几乎所有基于人工智能的检测(语音识别、人脸识别、情感识别)的早期,建模和跟踪的准确性一直是个问题。随着硬件的发展,数据集的完善,新的发现和新的实践的引入,准确性问题逐渐被解决,留下了噪音问题。现有的去噪方法包括邻域平均法、线性高斯平滑法、中值滤波法【32】,或者更新的方法如菌群优化算法【33】【34】。 | + | 正如所有计算方法一样,在基于面部处理的情感检测研究中,也有一些障碍需要克服,以便充分释放所用算法和方法的全部潜力。在几乎所有基于人工智能的检测(语音识别、人脸识别、情感识别)的早期,建模和跟踪的准确性一直是个问题。随着硬件的发展、数据的积累以及新发现和新方法的引入,准确性不足的问题逐渐消退,留下的是噪声问题。现有的去噪方法包括'''邻域平均法'''、'''线性高斯平滑法'''、'''中值滤波法'''【32】,以及更新的方法如'''细菌觅食优化算法'''【33】【34】。 |
| | | |
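The classical denoising steps named above can be sketched with scipy.ndimage (the input below is a random array standing in for a noisy frame; an OpenCV pipeline would be analogous):

<syntaxhighlight lang="python">
# Neighborhood averaging, linear Gaussian smoothing and median filtering
# applied to a toy grayscale "frame".
import numpy as np
from scipy import ndimage

frame = np.random.rand(64, 64)                              # stand-in for a noisy face image

mean_filtered   = ndimage.uniform_filter(frame, size=3)     # neighborhood averaging
gauss_filtered  = ndimage.gaussian_filter(frame, sigma=1.0) # linear Gaussian smoothing
median_filtered = ndimage.median_filter(frame, size=3)      # median filtering
</syntaxhighlight>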
| Other challenges include | | Other challenges include |
第486行: |
第486行: |
| This could be used to detect a user's affective state by monitoring and analyzing their physiological signs. These signs range from changes in heart rate and skin conductance to minute contractions of the facial muscles and changes in facial blood flow. This area is gaining momentum and we are now seeing real products that implement the techniques. The four main physiological signs that are usually analyzed are blood volume pulse, galvanic skin response, facial electromyography, and facial color patterns. | | This could be used to detect a user's affective state by monitoring and analyzing their physiological signs. These signs range from changes in heart rate and skin conductance to minute contractions of the facial muscles and changes in facial blood flow. This area is gaining momentum and we are now seeing real products that implement the techniques. The four main physiological signs that are usually analyzed are blood volume pulse, galvanic skin response, facial electromyography, and facial color patterns. |
| | | |
− | 生理信号可用于检测和分析情绪状态。这些生理信号通常包括脉搏、心率、面部肌肉每分钟收缩频率等。这个领域的发展势头越来越强劲,并且已经有了应用这些技术的实际产品。通常被分析的4个主要生理特征是血容量脉冲、皮肤电反应、面部肌电图和面部颜色。
| + | 通过监测和分析用户的生理信号,可以检测其情感状态。这些信号包括心率和皮肤电导的变化、面部肌肉的细微收缩以及面部血流的变化。这个领域的发展势头越来越强劲,并且已经出现了应用这些技术的实际产品。通常被分析的4个主要生理特征是'''血容量脉冲'''、'''皮肤电反应'''、'''面部肌电图'''和面部颜色模式。 |
| | | |
| ==== Blood volume pulse ==== | | ==== Blood volume pulse ==== |
第547行: |
第547行: |
| Galvanic skin response (GSR) is an outdated term for a more general phenomenon known as [Electrodermal Activity] or EDA. EDA is a general phenomena whereby the skin's electrical properties change. The skin is innervated by the [sympathetic nervous system], so measuring its resistance or conductance provides a way to quantify small changes in the sympathetic branch of the autonomic nervous system. As the sweat glands are activated, even before the skin feels sweaty, the level of the EDA can be captured (usually using conductance) and used to discern small changes in autonomic arousal. The more aroused a subject is, the greater the skin conductance tends to be. | | Galvanic skin response (GSR) is an outdated term for a more general phenomenon known as [Electrodermal Activity] or EDA. EDA is a general phenomena whereby the skin's electrical properties change. The skin is innervated by the [sympathetic nervous system], so measuring its resistance or conductance provides a way to quantify small changes in the sympathetic branch of the autonomic nervous system. As the sweat glands are activated, even before the skin feels sweaty, the level of the EDA can be captured (usually using conductance) and used to discern small changes in autonomic arousal. The more aroused a subject is, the greater the skin conductance tends to be. |
| | | |
− | 皮肤电反应(Galvanic skin response,GSR)是一个过时的术语,更一般的现象称为[Electrodermal Activity,皮肤电活动]或 EDA。EDA 是皮肤电特性改变的普遍现象。皮肤受交感神经神经支配,因此测量皮肤的电阻或电导率可以量化自主神经系统交感神经分支的细微变化。当汗腺被激活时,甚至在皮肤出汗之前,EDA 的水平就可以被捕获(通常使用电导) ,并用于辨别自主神经唤醒的微小变化。一个主体越兴奋,皮肤导电反应就越强烈【38】。 | + | '''皮肤电反应'''(Galvanic skin response,GSR)是一个过时的术语,更一般的现象称为'''皮肤电活动'''(Electrodermal Activity,EDA)。EDA 是指皮肤电特性发生变化的一般现象。皮肤受'''交感神经系统'''支配,因此测量皮肤的电阻或电导可以量化自主神经系统交感神经分支的细微变化。当汗腺被激活时,甚至在皮肤感到出汗之前,就可以捕获 EDA 的水平(通常使用电导),并用于辨别自主神经唤醒的微小变化。受试者的唤醒程度越高,皮肤电导往往越大【38】。 |
| | | |
| Skin conductance is often measured using two small [[silver-silver chloride]] electrodes placed somewhere on the skin and applying a small voltage between them. To maximize comfort and reduce irritation the electrodes can be placed on the wrist, legs, or feet, which leaves the hands fully free for daily activity. | | Skin conductance is often measured using two small [[silver-silver chloride]] electrodes placed somewhere on the skin and applying a small voltage between them. To maximize comfort and reduce irritation the electrodes can be placed on the wrist, legs, or feet, which leaves the hands fully free for daily activity. |
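A hedged sketch of how such a measurement could be turned into numbers (the current trace below is synthetic; the applied voltage, sampling rate and peak threshold are assumptions for the example, not taken from any specific device):

<syntaxhighlight lang="python">
# Skin conductance from a constant applied voltage: G = I / V, then a simple
# peak count as a crude proxy for phasic (arousal-related) responses.
import numpy as np
from scipy.signal import find_peaks

V = 0.5                                          # small applied voltage, in volts (assumed)
current_uA = 2.0 + 0.3 * np.random.rand(3000)    # synthetic measured current, in microamps
conductance_uS = current_uA / V                  # conductance in microsiemens

peaks, _ = find_peaks(conductance_uS, prominence=0.1)
print(f"tonic level ~ {conductance_uS.mean():.2f} uS, {len(peaks)} candidate phasic responses")
</syntaxhighlight>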
第578行: |
第578行: |
| Approaches are based on facial color changes. Delaunay triangulation is used to create the triangular local areas. Some of these triangles which define the interior of the mouth and eyes (sclera and iris) are removed. Use the left triangular areas’ pixels to create feature vectors. It shows that converting the pixel color of the standard RGB color space to a color space such as oRGB color spaceM. Bratkova, S. Boulos, and P. Shirley, oRGB: a practical opponent color space for computer graphics, IEEE Computer Graphics and Applications, 29(1):42–55, 2009. or LMS channels perform better when dealing with faces.Hadas Shahar, Hagit Hel-Or, Micro Expression Classification using Facial Color and Deep Learning Methods, The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 0–0. So, map the above vector onto the better color space and decompose into red-green and yellow-blue channels. Then use deep learning methods to find equivalent emotions. | | Approaches are based on facial color changes. Delaunay triangulation is used to create the triangular local areas. Some of these triangles which define the interior of the mouth and eyes (sclera and iris) are removed. Use the left triangular areas’ pixels to create feature vectors. It shows that converting the pixel color of the standard RGB color space to a color space such as oRGB color spaceM. Bratkova, S. Boulos, and P. Shirley, oRGB: a practical opponent color space for computer graphics, IEEE Computer Graphics and Applications, 29(1):42–55, 2009. or LMS channels perform better when dealing with faces.Hadas Shahar, Hagit Hel-Or, Micro Expression Classification using Facial Color and Deep Learning Methods, The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 0–0. So, map the above vector onto the better color space and decompose into red-green and yellow-blue channels. Then use deep learning methods to find equivalent emotions. |
| | | |
− | 方法是基于面部颜色的变化。 Delaunay 三角剖分用于创建三角形局部区域。 一些定义嘴巴和眼睛(巩膜和虹膜)内部的三角形被移除。 使用左三角区域的像素来创建特征向量【40】。它表明,将标准 RGB 颜色空间的像素颜色转换为 oRGB 颜色空间【41】或 LMS 通道等颜色空间在处理人脸时表现更好【42】。因此,将上面的矢量映射到较好的颜色空间,并分解为红绿色和黄蓝色通道。然后使用深度学习的方法来找到等效的情绪。
| + | 这类方法基于面部颜色的变化。Delaunay 三角剖分用于创建三角形局部区域,其中定义嘴部和眼睛内部(巩膜和虹膜)的一些三角形会被移除,然后使用剩余三角形区域的像素来创建特征向量【40】。研究表明,将标准 RGB 颜色空间的像素颜色转换为 oRGB 颜色空间【41】或 LMS 通道等颜色空间,在处理人脸时表现更好【42】。因此,可将上述向量映射到更合适的颜色空间,并分解为红-绿和黄-蓝通道,然后使用深度学习方法来识别对应的情绪。 |
| | | |
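A rough sketch of the triangulation and feature-extraction step (the landmark points and image are random stand-ins; the colour-space conversion and the deep-learning classifier from the text are left out):

<syntaxhighlight lang="python">
# Delaunay triangulation over facial landmarks, then one crude colour sample
# per remaining triangle as a feature vector.
import numpy as np
from scipy.spatial import Delaunay

landmarks = np.random.rand(68, 2) * 255          # stand-in for detected 2-D facial landmarks
image     = np.random.rand(256, 256, 3)          # stand-in for an RGB face image

tri = Delaunay(landmarks)                        # triangular local areas
features = []
for simplex in tri.simplices:                    # a real system would drop mouth/eye triangles
    cx, cy = landmarks[simplex].mean(axis=0)     # triangle centroid
    features.append(image[int(cy), int(cx)])     # colour sampled at the centroid (crude proxy
                                                 # for averaging all pixels inside the triangle)
feature_vector = np.concatenate(features)        # input to the downstream model
</syntaxhighlight>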
| ===Visual aesthetics=== | | ===Visual aesthetics=== |
第617行: |
第617行: |
| Affective computing is also being applied to the development of communicative technologies for use by people with autism.Projects in Affective Computing The affective component of a text is also increasingly gaining attention, particularly its role in the so-called emotional or emotive Internet.Shanahan, James; Qu, Yan; Wiebe, Janyce (2006). Computing Attitude and Affect in Text: Theory and Applications. Dordrecht: Springer Science & Business Media. p. 94. | | Affective computing is also being applied to the development of communicative technologies for use by people with autism.Projects in Affective Computing The affective component of a text is also increasingly gaining attention, particularly its role in the so-called emotional or emotive Internet.Shanahan, James; Qu, Yan; Wiebe, Janyce (2006). Computing Attitude and Affect in Text: Theory and Applications. Dordrecht: Springer Science & Business Media. p. 94. |
| | | |
− | 情感计算也被应用于交流技术的发展,以供孤独症患者使用【46】。情感计算项目文本中的情感成分也越来越受到关注,特别是它在所谓的情感或情感互联网中的作用【47】。
| + | 情感计算也被应用于开发供孤独症患者使用的交流技术【46】。文本中的情感成分也越来越受到关注,特别是它在所谓的'''情感互联网'''(emotional or emotive Internet)中的作用【47】。
| ===Video games=== | | ===Video games=== |
| | | |
第636行: |
第636行: |
| Affective video games can access their players' emotional states through biofeedback devices. A particularly simple form of biofeedback is available through gamepads that measure the pressure with which a button is pressed: this has been shown to correlate strongly with the players' level of arousal; at the other end of the scale are brain–computer interfaces. Affective games have been used in medical research to support the emotional development of autistic children. | | Affective video games can access their players' emotional states through biofeedback devices. A particularly simple form of biofeedback is available through gamepads that measure the pressure with which a button is pressed: this has been shown to correlate strongly with the players' level of arousal; at the other end of the scale are brain–computer interfaces. Affective games have been used in medical research to support the emotional development of autistic children. |
| | | |
− | 情感型电子游戏可以通过生物反馈设备获取玩家的情绪状态【48】。有一些特别简单的生物反馈形式,如通过游戏手柄来测量按下按钮的压力,来获取玩家的唤醒度水平【49】; 另一方面是脑机接口【50】【51】。情感游戏已被用于医学研究,以改善自闭症儿童的情感发展【52】。
| + | 情感型电子游戏可以通过'''生物反馈设备'''获取玩家的情绪状态【48】。一种特别简单的生物反馈形式是通过游戏手柄测量按键时的压力来实现的:研究表明,这种压力与玩家的唤醒水平密切相关【49】;而在复杂程度的另一端则是'''脑机接口'''【50】【51】。情感游戏已被用于医学研究,以支持自闭症儿童的情感发展【52】。
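A toy illustration of the gamepad-pressure idea (the pressure readings, window length and threshold are invented; real controllers and game engines expose this differently):

<syntaxhighlight lang="python">
# Rolling mean of button-pressure samples (0..1) as a crude arousal proxy
# that a game could adapt to.
from collections import deque

pressure_samples = [0.2, 0.3, 0.8, 0.9, 0.95, 0.7, 0.4]   # hypothetical per-frame readings
window = deque(maxlen=5)

for p in pressure_samples:
    window.append(p)
    arousal = sum(window) / len(window)                    # rolling mean
    action = "raise difficulty" if arousal > 0.6 else "keep difficulty"
    print(f"arousal~{arousal:.2f} -> {action}")
</syntaxhighlight>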
| ===Other applications=== | | ===Other applications=== |
| | | |
第664行: |
第664行: |
| Within the field of human–computer interaction, Rosalind Picard's cognitivist or "information model" concept of emotion has been criticized by and contrasted with the "post-cognitivist" or "interactional" pragmatist approach taken by Kirsten Boehner and others which views emotion as inherently social. | | Within the field of human–computer interaction, Rosalind Picard's cognitivist or "information model" concept of emotion has been criticized by and contrasted with the "post-cognitivist" or "interactional" pragmatist approach taken by Kirsten Boehner and others which views emotion as inherently social. |
| | | |
− | 在人机交互领域,罗莎琳德 · 皮卡德的情绪认知主义或“信息模型”概念受到了实用主义者柯尔斯滕 · 博纳等人的批判和对比,他们坚信“后认知主义”和“交互方法”【56】。 | + | 在人机交互领域,罗莎琳德·皮卡德的情绪'''认知主义'''或“信息模型”概念,受到了柯尔斯滕·博纳等人所采取的“后认知主义”或“交互式”实用主义方法的批判并与之形成对比;后一种方法认为情绪在本质上是社会性的【56】。 |
| | | |
| Picard's focus is human–computer interaction, and her goal for affective computing is to "give computers the ability to recognize, express, and in some cases, 'have' emotions".<ref name="Affective Computing" /> In contrast, the interactional approach seeks to help "people to understand and experience their own emotions"<ref name="How emotion is made and measured">{{cite journal|last1=Boehner|first1=Kirsten|last2=DePaula|first2=Rogerio|last3=Dourish|first3=Paul|last4=Sengers|first4=Phoebe|title=How emotion is made and measured|journal=International Journal of Human–Computer Studies|date=2007|volume=65|issue=4|pages=275–291|doi=10.1016/j.ijhcs.2006.11.016}}</ref> and to improve computer-mediated interpersonal communication. It does not necessarily seek to map emotion into an objective mathematical model for machine interpretation, but rather let humans make sense of each other's emotional expressions in open-ended ways that might be ambiguous, subjective, and sensitive to context.<ref name="How emotion is made and measured" />{{rp|284}}{{example needed|date=September 2018}} | | Picard's focus is human–computer interaction, and her goal for affective computing is to "give computers the ability to recognize, express, and in some cases, 'have' emotions".<ref name="Affective Computing" /> In contrast, the interactional approach seeks to help "people to understand and experience their own emotions"<ref name="How emotion is made and measured">{{cite journal|last1=Boehner|first1=Kirsten|last2=DePaula|first2=Rogerio|last3=Dourish|first3=Paul|last4=Sengers|first4=Phoebe|title=How emotion is made and measured|journal=International Journal of Human–Computer Studies|date=2007|volume=65|issue=4|pages=275–291|doi=10.1016/j.ijhcs.2006.11.016}}</ref> and to improve computer-mediated interpersonal communication. It does not necessarily seek to map emotion into an objective mathematical model for machine interpretation, but rather let humans make sense of each other's emotional expressions in open-ended ways that might be ambiguous, subjective, and sensitive to context.<ref name="How emotion is made and measured" />{{rp|284}}{{example needed|date=September 2018}} |