Paper
Anatomically constrained neural network models for the categorization of facial expression
17 January 2005
Brenton W. McMenamin and Amir H. Assadi
Proceedings Volume 5675, Vision Geometry XIII; (2005) https://doi.org/10.1117/12.593973
Event: Electronic Imaging 2005, 2005, San Jose, California, United States
Abstract
In humans, facial expressions are recognized by the amygdala, which uses parallel processing streams to identify expressions quickly and accurately; a feedback mechanism may also play a role in this process. A model with a similar parallel structure and feedback could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with the parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network without the parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.
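The paper does not include code; the sketch below is only a hypothetical illustration of the kind of architecture the abstract describes: two parallel processing streams over the same input and an optional feedback pass from the provisional output back onto one stream. The layer sizes, sigmoid/softmax nonlinearities, and feedback wiring are assumptions made for illustration (Python with NumPy), not the authors' model.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class TwoStreamNet:
    """Illustrative two-stream classifier with an optional feedback pass."""
    def __init__(self, n_in, n_hidden, n_classes):
        # Two parallel hidden "streams" that both receive the input vector.
        self.W_fast = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_slow = rng.normal(0.0, 0.1, (n_hidden, n_in))
        # Output layer reads the concatenated stream activities.
        self.W_out = rng.normal(0.0, 0.1, (n_classes, 2 * n_hidden))
        # Feedback projection from the output back onto the second stream.
        self.W_fb = rng.normal(0.0, 0.1, (n_hidden, n_classes))

    def forward(self, x, use_feedback=False):
        h_fast = sigmoid(self.W_fast @ x)   # first parallel stream
        h_slow = sigmoid(self.W_slow @ x)   # second parallel stream
        y = softmax(self.W_out @ np.concatenate([h_fast, h_slow]))
        if use_feedback:
            # Second pass: the provisional output modulates the slow stream.
            h_slow = sigmoid(self.W_slow @ x + self.W_fb @ y)
            y = softmax(self.W_out @ np.concatenate([h_fast, h_slow]))
        return y

# Example: categorize a 32x32 grayscale face vector into 6 expression classes.
net = TwoStreamNet(n_in=32 * 32, n_hidden=20, n_classes=6)
probs = net.forward(rng.random(32 * 32), use_feedback=True)
print(probs.argmax(), probs.round(3))

Comparing forward passes with use_feedback set to False versus True, and against a single-stream variant, mirrors the comparison the abstract reports (feedback gave no significant gain; parallel streams did).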
© (2005) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Brenton W. McMenamin and Amir H. Assadi "Anatomically constrained neural network models for the categorization of facial expression", Proc. SPIE 5675, Vision Geometry XIII, (17 January 2005); https://doi.org/10.1117/12.593973
KEYWORDS
Amygdala, Facial recognition systems, Neural networks, Network architectures, Parallel processing, Visual cortex, Algorithm development