Modeling of Human Mood States from Voice using Adaptively Tuned Neuro-Fuzzy Inference System
Abstract
In this article, an attempt is made to model the angry, happy, and neutral human mood states by adaptively tuning a Neuro-Fuzzy Inference System for efficient characterization. The algorithm is self-tuning and can provide low-cost, robust solutions to many complex real-world problems. Such analysis can provide crucial inputs to diverse application domains, such as security organizations, biomedical engineering, computer tutors, call centers, the banking and finance sector, and criminal investigations, for effective functioning and control. The Surrey Audio-Visual Expressed Emotion (SAVEE) database has been chosen to procure the utterances corresponding to the chosen mood states. Initially, feature vectors comprising Spectral Rolloff, Spectral Centroid, Spectral Flux, Log Energy, Fundamental Frequency, Jitter, and Shimmer are extracted to develop the desired models. The results reveal that the resulting Adaptive Neuro-Fuzzy Inference System (ANFIS) models can distinguish the chosen mood states. Performance measures, namely the Root Mean Square Error (RMSE) at the start, at convergence, and at its minimum, along with the checking, training, and testing errors, have been investigated to validate the model performance.
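The sketch below illustrates, in Python, how the acoustic features named in the abstract could be extracted from a single SAVEE utterance using librosa. It is not the authors' code: the file path, sample rate, pitch range, and frame settings are assumptions, and the jitter and shimmer values are crude frame-level approximations of the standard Praat-style measures.

```python
# Illustrative sketch (not the authors' implementation): extracting the
# abstract's acoustic features from one utterance with librosa.
import numpy as np
import librosa

def extract_features(wav_path, sr=16000):
    y, sr = librosa.load(wav_path, sr=sr)

    # Frame-wise spectral features, later averaged into one vector
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    S = np.abs(librosa.stft(y))
    flux = np.sqrt(np.sum(np.diff(S, axis=1) ** 2, axis=0))  # spectral flux

    # Log energy from frame-level RMS
    rms = librosa.feature.rms(y=y)[0]
    log_energy = np.log(rms ** 2 + 1e-10)

    # Fundamental frequency (F0) via the YIN estimator; keep voiced frames
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)
    voiced = f0[(f0 > 60) & (f0 < 400)]

    # Crude jitter: relative variation of consecutive pitch periods
    periods = 1.0 / voiced
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)

    # Crude shimmer: relative variation of consecutive frame amplitudes
    shimmer = np.mean(np.abs(np.diff(rms))) / np.mean(rms)

    return np.array([
        rolloff.mean(), centroid.mean(), flux.mean(),
        log_energy.mean(), voiced.mean(), jitter, shimmer,
    ])

# Example usage with a hypothetical SAVEE file name:
# features = extract_features("SAVEE/DC/a01.wav")
```

Feature vectors obtained in this way would then serve as the inputs on which an ANFIS model is trained and evaluated; the specific ANFIS tuning procedure is described in the full article.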
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.