A Multi-Modal Driver Emotion Dataset and Study: Including Facial Expressions and Synchronized Physiological Signals
Engineering Applications of Artificial Intelligence (2024)
Abstract
To address the limitations of existing databases in the field of emotion recognition and to support the trend toward integrating data from multiple sources, we have established a multi-modal emotion dataset based on drivers' spontaneous expressions. Emotion-induction materials were selected and used to elicit a target emotion before each driving task, and facial expression videos with synchronized physiological signals were collected while the participants drove. The dataset covers 64 participants under five emotions (neutral, happy, angry, sad, and fear), and the emotional valence, arousal, and peak time of every participant in each driving task were recorded. To analyze the dataset, spatio-temporal convolutional neural networks were designed for the different modalities, which vary in duration, in order to investigate their performance in emotion recognition. The results show that fusing the multi-modal data significantly improves the accuracy of driver emotion recognition, with accuracy gains of 11.28% and 6.83% over using only facial video signals or only physiological signals, respectively. The publication and analysis of multi-modal emotion data for driving scenarios is therefore crucial for supporting further research in multi-modal perception and intelligent transportation engineering.
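The abstract describes spatio-temporal convolutional networks applied to two modalities whose outputs are fused for five-class emotion recognition. The following is a minimal PyTorch sketch of one plausible late-fusion design consistent with that description; the class name FusionEmotionNet, the layer sizes, the assumed input shapes, and the number of physiological channels are illustrative assumptions, not the authors' published architecture.

# Minimal sketch of a two-branch spatio-temporal fusion model (PyTorch).
# All layer sizes and names are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class FusionEmotionNet(nn.Module):
    def __init__(self, num_classes: int = 5, phys_channels: int = 4):
        super().__init__()
        # Facial-video branch: 3D convolutions capture spatio-temporal patterns.
        self.video_branch = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # -> (B, 32, 1, 1, 1)
        )
        # Physiological branch: 1D convolutions over the signal time axis,
        # so clips and signal windows of different durations pool to one vector.
        self.phys_branch = nn.Sequential(
            nn.Conv1d(phys_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # -> (B, 32, 1)
        )
        # Late fusion: concatenate the branch embeddings, then classify.
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, video: torch.Tensor, phys: torch.Tensor) -> torch.Tensor:
        v = self.video_branch(video).flatten(1)  # video: (B, 3, T, H, W)
        p = self.phys_branch(phys).flatten(1)    # phys:  (B, C, L)
        return self.classifier(torch.cat([v, p], dim=1))

# Dummy forward pass: a 16-frame 64x64 RGB clip and a 4-channel signal window.
model = FusionEmotionNet()
logits = model(torch.randn(2, 3, 16, 64, 64), torch.randn(2, 4, 256))
print(logits.shape)  # torch.Size([2, 5]) -> one logit per emotion class

Concatenating pooled branch embeddings before a single classifier is the simplest fusion strategy consistent with the reported single-modality versus fused comparison; attention-based or decision-level fusion are common alternatives.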
Keywords
Driver emotion recognition, Multi-modal information, Facial expression, Physiological signal, Smart vehicle