Assessing a person’s emotional state can be relevant to security in situations where it is beneficial to evaluate intentions or mental state. In many situations, facial expressions, which often indicate emotions, may not be displayed or may not correspond to the actual emotional state. Here we review our study, in which we classify emotional states from very short facial video signals. The emotion classification process does not rely on stereotypical facial expressions or on contact-based methods. Our raw data are short facial videos recorded under several different known emotional states. A facial video contains a component of light diffusely reflected from the facial skin, which is modulated by cardiovascular activity that may in turn be influenced by the emotional state. From the short facial videos we extracted unique spatiotemporal, physiologically affected features, which served as input to a deep-learning model. Results show an average emotion classification accuracy of about 47.36% over 5 emotion classes, compared with a 20% chance level, which can be considered high for cases in which expressions are hardly observed.
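The abstract describes a pipeline of extracting spatiotemporal, physiologically affected features from the facial skin signal and classifying them with a deep-learning model. The following Python sketch illustrates one plausible reading of that pipeline; the patch grid, use of the green channel, face detector, and CNN architecture are all illustrative assumptions and not the authors' actual method.

```python
# Hypothetical sketch: build a spatiotemporal map of facial-skin intensity from a short
# video and classify it with a small CNN over 5 emotion classes (as in the study).
# The grid size, colour channel, detector, and network are assumptions for illustration.
import cv2
import numpy as np
import torch
import torch.nn as nn


def spatiotemporal_map(video_path, grid=(8, 8), max_frames=150):
    """Return a (grid_h, grid_w, T) array of mean green-channel intensity per face patch."""
    face_det = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    patches = []
    while len(patches) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_det.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        roi = frame[y:y + h, x:x + w, 1]  # green channel of the face region
        gh, gw = grid
        roi = cv2.resize(roi, (gw * 8, gh * 8)).astype(np.float32)
        # Mean intensity per spatial patch -> one time slice of the spatiotemporal map.
        patches.append(roi.reshape(gh, 8, gw, 8).mean(axis=(1, 3)))
    cap.release()
    return np.stack(patches, axis=-1)  # shape (grid_h, grid_w, T)


class EmotionCNN(nn.Module):
    """Tiny CNN over the spatiotemporal map; 5 output classes as stated in the abstract."""
    def __init__(self, n_classes=5, t=150):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(t, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes))

    def forward(self, x):  # x: (batch, T, grid_h, grid_w)
        return self.net(x)


# Usage sketch (video path is a placeholder):
# stmap = spatiotemporal_map("face_clip.mp4")                # (8, 8, T)
# x = torch.from_numpy(stmap).permute(2, 0, 1).unsqueeze(0)  # (1, T, 8, 8)
# logits = EmotionCNN(t=x.shape[1])(x)                       # scores for 5 emotion classes
```

The design choice of pooling pixel intensities over patches and time reflects that the relevant signal is a subtle, cardiovascular-driven modulation of diffused skin light rather than expression-related shape changes.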