During videoconferencing, participants cannot make true eye contact because the webcam and the screen occupy different physical locations. Each participant appears to be looking down rather than directly at the other. To remedy this, real-time gaze correction systems have emerged to give videoconferencing participants the illusion of eye contact.
Our work improves a previous gaze correction system that required users to manually look at the webcam so that the system could capture the eye appearance and perform eye replacement. We automate this process by exploring various methods for classifying direct gaze (a problem known as gaze locking). We have implemented a system based on image processing techniques and machine learning to solve the gaze locking problem. The system is robust to a variety of factors, including distance to the camera, lighting conditions, and eye shape. It can replace the manual capture step in the earlier gaze correction system, and it also has several additional applications in the HCI realm.
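The abstract does not specify the classifier used; as a purely illustrative sketch, the gaze locking decision can be framed as binary classification over eye-region features. The feature choice below (hypothetical iris-offset coordinates) and the nearest-centroid classifier are assumptions for illustration, not the paper's actual method:

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class GazeLockClassifier:
    """Toy nearest-centroid classifier: direct vs. averted gaze."""

    def fit(self, direct, averted):
        self.c_direct = centroid(direct)
        self.c_averted = centroid(averted)
        return self

    def is_direct(self, features):
        # Label the sample by whichever class centroid is closer.
        return distance(features, self.c_direct) < distance(features, self.c_averted)

# Hypothetical features: (horizontal, vertical) iris offset from eye center.
direct_samples = [(0.02, 0.01), (-0.01, 0.00), (0.00, -0.02)]
averted_samples = [(0.30, 0.25), (0.28, 0.31), (0.33, 0.22)]

clf = GazeLockClassifier().fit(direct_samples, averted_samples)
print(clf.is_direct((0.01, 0.02)))  # near-centered iris -> True (direct gaze)
print(clf.is_direct((0.29, 0.27)))  # displaced iris -> False (averted gaze)
```

A real system would extract such features from detected eye regions in each video frame and would likely use a learned classifier trained on many subjects to achieve the robustness described above.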