Variable head poses and low-quality eye images in natural scenes degrade the accuracy of gaze estimation. In this paper, we propose a multi-feature fusion gaze estimation model based on the attention mechanism. First, face and eye feature extractors based on the group convolution channel and spatial attention mechanism (GCCSAM) are designed to exploit channel and spatial information, adaptively selecting and enhancing important features in the face image and the two eye images while suppressing information irrelevant to gaze estimation. We then design two feature fusion networks that fuse the face, two-eye, and pupil-center-position features, mitigating the effects of two-eye asymmetry and inaccurate head-pose estimation on gaze estimation. The average angular error of the proposed method is 4.1° on MPIIGaze and 5.2° on EyeDiap. Compared with current mainstream methods, our method effectively improves the accuracy and robustness of gaze estimation in natural scenes.
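To make the channel-and-spatial attention idea concrete, the following is a minimal PyTorch sketch of a CBAM-style attention block with grouped convolutions, in the spirit of the GCCSAM named above. The abstract does not specify the module's exact architecture, so the group count, reduction ratio, and kernel size here are illustrative assumptions rather than the authors' design.

```python
# Sketch of a group-convolution channel + spatial attention block.
# Hyperparameters (groups, reduction, 7x7 kernel) are assumptions;
# the paper's actual GCCSAM details are not given in the abstract.
import torch
import torch.nn as nn


class GroupChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, groups: int = 4, reduction: int = 8):
        super().__init__()
        # Channel attention: a grouped 1x1-conv bottleneck maps pooled
        # descriptors to per-channel weights.
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, groups=groups),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, groups=groups),
        )
        # Spatial attention: a 7x7 conv over channel-pooled maps yields a
        # per-pixel weight, highlighting gaze-relevant image regions.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention from average- and max-pooled spatial descriptors.
        avg = torch.mean(x, dim=(2, 3), keepdim=True)
        mx = torch.amax(x, dim=(2, 3), keepdim=True)
        ca = self.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
        x = x * ca
        # Spatial attention from channel-wise average and max maps.
        sa_in = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        sa = self.sigmoid(self.spatial_conv(sa_in))
        return x * sa


if __name__ == "__main__":
    block = GroupChannelSpatialAttention(channels=64)
    feats = torch.randn(2, 64, 28, 28)  # e.g. mid-level face or eye features
    print(block(feats).shape)  # torch.Size([2, 64, 28, 28])
```

A block like this would be inserted between convolutional stages of the face and eye feature extractors, reweighting features before they are passed to the downstream fusion networks.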