Detailed Information
VAD-Net: Multidimensional Facial Expression Recognition in Intelligent Education System (EI-indexed)
Document type: Journal article
Title (English): VAD-Net: Multidimensional Facial Expression Recognition in Intelligent Education System
Authors: Yi, Huo [1]; Yun, Ge [2]
First author: Huo Yi (霍奕)
Affiliations: [1] Department of Educational Information Technology, Teachers' College, Beijing Union University, Beijing, China; [2] Department of Computer Teaching and Research, University of Chinese Academy of Social Sciences, Beijing, China
First affiliation: Teachers' College, Beijing Union University
Year: 2025
Source: arXiv
Indexed in: EI (Accession no.: 20260014625)
Language: English
Keywords: Convolution; Education computing; Face recognition; Forecasting; Learning systems; Regression analysis
Abstract: Current FER (Facial Expression Recognition) datasets are mostly labeled with emotion categories, such as happy, angry, sad, fear, disgust, surprise, and neutral, which are limited in expressiveness. However, future affective computing requires more comprehensive and precise emotion metrics, which can be measured by VAD (Valence-Arousal-Dominance) multidimensional parameters. AffectNet has addressed part of this by adding VA (Valence and Arousal) information, but it still lacks D (Dominance). This research therefore introduces VAD annotation on the FER2013 dataset and takes the initiative to label the D (Dominance) dimension. Then, to further improve network capacity, it enforces orthogonalized convolution, which extracts more diverse and expressive features and ultimately increases prediction accuracy. Experimental results show that the D dimension can be measured but is more difficult to obtain than the V and A dimensions, both in manual annotation and in regression-network prediction. Secondly, an ablation test introducing orthogonal convolution verifies that better VAD prediction is obtained in the orthogonal-convolution configuration. The research thus provides a first labeling of the D dimension on an FER dataset and proposes a better prediction network for VAD through orthogonal convolution. The newly built VAD-annotated FER2013 dataset can act as a benchmark for measuring VAD multidimensional emotions, while the orthogonalized regression network based on ResNet can act as a facial expression recognition baseline for VAD emotion prediction. The newly labeled dataset and implementation code are publicly available at https://github.com/YeeHoran/VAD-Net . Copyright © 2025, The Authors. All rights reserved.
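The "orthogonalized convolution" mentioned in the abstract is commonly realized as a soft orthogonality regularizer added to the training loss, pushing the flattened convolution filters of a layer toward mutual orthogonality so they extract more diverse features. Below is a minimal NumPy sketch of such a penalty; the exact formulation used by VAD-Net may differ, and the function name and kernel shapes here are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def orthogonality_penalty(kernel: np.ndarray) -> float:
    """Soft orthogonality penalty ||W W^T - I||_F^2 for one conv layer.

    kernel: array of shape (out_channels, in_channels, kH, kW).
    Each filter is flattened into a row of W, so the penalty is zero
    when the filters form an orthonormal set and grows as filters
    become correlated (redundant).
    """
    out_channels = kernel.shape[0]
    W = kernel.reshape(out_channels, -1)          # (out_channels, in*kH*kW)
    gram = W @ W.T                                # pairwise filter correlations
    return float(np.sum((gram - np.eye(out_channels)) ** 2))
```

During training, a term like `lambda * sum(orthogonality_penalty(k) for k in conv_kernels)` would be added to the VAD regression loss (e.g. MSE over the three predicted Valence/Arousal/Dominance values); `lambda` is a weighting hyperparameter assumed here, not a value reported in the abstract.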
