Emotion recognition from facial images is a crucial task in human-computer interaction, enabling machines to infer human emotions from facial expressions. Previous studies have shown that facial images can be used to train deep learning models; however, most of these studies do not include a thorough dataset analysis. Extracting meaningful insights from facial landmark data can be challenging; to address this issue, we propose facial landmark box plots, a visualization technique designed to identify outliers in facial datasets. Additionally, we compare two sets of facial landmark features: (i) the landmarks' absolute positions and (ii) their displacements from a neutral expression to the peak of an emotional expression. Our results indicate that a neural network achieves better performance than a random forest classifier.
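The abstract's two key ideas, displacement features (neutral-to-peak landmark motion) and box-plot-based outlier screening, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation; the function names, the 68-landmark layout, and the Tukey fence constant `k=1.5` are assumptions.

```python
import numpy as np

def landmark_displacements(neutral, peak):
    """Displacement of each landmark from the neutral frame to the
    emotional peak. Both inputs have shape (n_landmarks, 2)."""
    return peak - neutral

def boxplot_outliers(values, k=1.5):
    """Flag values outside the Tukey box-plot whiskers
    [Q1 - k*IQR, Q3 + k*IQR]; returns a boolean mask."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (values < lo) | (values > hi)

# Synthetic example: 100 faces, 68 landmarks each (a common layout).
rng = np.random.default_rng(0)
neutral = rng.normal(size=(100, 68, 2))
peak = neutral + rng.normal(scale=0.1, size=(100, 68, 2))
peak[0] += 5.0  # inject one obviously mislabeled / corrupted sample

# Per-sample mean displacement magnitude, then box-plot screening.
disp = peak - neutral
magnitudes = np.linalg.norm(disp, axis=2).mean(axis=1)
mask = boxplot_outliers(magnitudes)
```

Samples whose summary statistic falls outside the whiskers (here, sample 0) would be the ones a facial landmark box plot highlights for manual inspection.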
This paper investigates emotion recognition from facial images through a combined approach of statistical methods and neural networks. The approach focuses on improving human-computer interaction via facial expressions, with an emphasis on efficient processing and analysis of facial datasets. The authors introduce facial landmark box plots to identify outliers and compare two sets of facial landmark features: absolute positions and displacements from a neutral expression. The study shows that an optimized neural network outperforms a random forest classifier, achieving 98.48% accuracy compared to the random forest's 80%. The paper also emphasizes the importance of tackling data imbalance in emotion recognition tasks. Future work will explore transfer learning and data augmentation to enhance model robustness.
This paper employs the following methods:
- Neural Networks
- Random Forest
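The two listed methods can be compared with a short scikit-learn sketch. Everything here is illustrative: the synthetic data (136 features, i.e. 68 landmarks × (x, y), and 7 emotion classes) and the hyperparameters are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for landmark features: 68 landmarks x (x, y) = 136 dims,
# 7 basic-emotion classes (hypothetical; not the paper's dataset).
X, y = make_classification(n_samples=1000, n_features=136, n_informative=40,
                           n_classes=7, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Assumed architectures: a small MLP vs. a 200-tree random forest.
nn = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)

nn_acc = nn.fit(X_tr, y_tr).score(X_te, y_te)
rf_acc = rf.fit(X_tr, y_tr).score(X_te, y_te)
print(f"neural network: {nn_acc:.3f}, random forest: {rf_acc:.3f}")
```

On the paper's real data the reported gap is much larger (98.48% vs. 80%); this sketch only shows the evaluation scaffolding, not the result.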
The paper reports the following results:
- Optimized neural network: 98.48% accuracy
- Random forest classifier: 80% accuracy
- Number of GPUs: not specified
- GPU type: not specified
- Training schedule: 50 epochs (hardware requirements not specified)