Development of an Emotion Recognition Classifier from Body Language Using Deep Learning for the Children with Autism to Help Identifying Human Emotions
This thesis focuses on training machine learning models to recognize emotions from a person's body language. The goal is an emotion recognition classifier that helps children with autism spectrum disorder (ASD) identify emotions in social interaction. The classifier distinguishes six universal emotions: "anger," "sadness," "happiness," "surprise," "fear," and "disgust." Both video and image data were used to train three models, drawn from three datasets: GEMEP, BEAST, and the "7453 IRB" dataset. BEAST is an image dataset, while GEMEP and "7453 IRB" are audiovisual (video) datasets. Three deep learning architectures were tested for classifying emotions from body language: a Convolutional Neural Network (CNN), a CNN combined with a Recurrent Neural Network (CNN+RNN), and a CNN combined with Long Short-Term Memory (CNN+LSTM). Although the CNN achieved higher accuracy than CNN+RNN and CNN+LSTM, it ignores the temporal information in sequenced video data. The CNN+RNN and CNN+LSTM models yield more realistic results because they consider both spatial and temporal information, which is essential for analyzing video. Owing to limited time and computational resources, the CNN+LSTM model was trained with the same hyperparameters as the CNN+RNN model. In this experiment, CNN+RNN outperformed CNN+LSTM, with 12.33% higher test accuracy on the combined video datasets; with better hyperparameter tuning, however, the performance of the CNN+LSTM model could likely be improved.
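The spatial-then-temporal design the abstract describes (a per-frame CNN backbone feeding a recurrent layer) can be sketched minimally in NumPy. Everything below is a hypothetical illustration, not the thesis's actual architecture: the dimensions, the stub feature extractor standing in for a backbone such as ResNet50, and the untrained random weights are all assumptions made only to show how frame-level (spatial) features flow through an LSTM that accumulates temporal context before a six-way emotion softmax.

```python
import numpy as np

# Hypothetical dimensions -- not the thesis's actual configuration.
N_FRAMES, FEAT_DIM, HIDDEN, N_CLASSES = 16, 128, 64, 6

rng = np.random.default_rng(0)

def extract_frame_features(frames):
    """Stand-in for a per-frame CNN backbone (e.g. ResNet50):
    maps each frame to a fixed-length feature vector."""
    # frames: (n_frames, h, w, 3) -> (n_frames, FEAT_DIM)
    flat = frames.reshape(frames.shape[0], -1)
    W = rng.standard_normal((flat.shape[1], FEAT_DIM)) * 0.01
    return np.tanh(flat @ W)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_last_hidden(x, params):
    """Minimal LSTM forward pass over the frame sequence; the final
    hidden state summarizes temporal context across frames."""
    Wx, Wh, b = params          # gate weights stacked as [i, f, g, o]
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # cell state carries memory forward
        h = o * np.tanh(c)
    return h

# Random (untrained) weights, just to exercise the shapes.
params = (rng.standard_normal((FEAT_DIM, 4 * HIDDEN)) * 0.1,
          rng.standard_normal((HIDDEN, 4 * HIDDEN)) * 0.1,
          np.zeros(4 * HIDDEN))
W_out = rng.standard_normal((HIDDEN, N_CLASSES)) * 0.1

video = rng.standard_normal((N_FRAMES, 32, 32, 3))   # toy 16-frame clip
feats = extract_frame_features(video)                # spatial step
h = lstm_last_hidden(feats, params)                  # temporal step
logits = h @ W_out
probs = np.exp(logits) / np.exp(logits).sum()        # softmax over 6 emotions
print(probs.shape)                                   # -> (6,)
```

A plain CNN, by contrast, would classify each frame independently and discard the ordering that the LSTM's cell state preserves, which is the trade-off the abstract highlights.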
emotion classification, body language, CNN, CNN+RNN, CNN+LSTM, ResNet50, deep learning
Rashid, T. (2021). Development of an emotion recognition classifier from body language using deep learning for the children with autism to help identifying human emotions (Unpublished thesis). Texas State University, San Marcos, Texas.