Demos

See the My Publications section for more technical information about these demos.



Precise Eye Localization and Tracking
The ubiquitous application of eye tracking is precluded by the requirement of dedicated and expensive hardware, such as infrared high definition cameras. Therefore, systems based solely on appearance (i.e. not involving active infrared illumination) have been proposed in the literature. However, although these systems successfully locate eyes, their accuracy is significantly lower than that of commercial eye tracking devices. Our aim is to perform very accurate eye center location and tracking using a simple webcam.
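To make the idea concrete, here is a minimal sketch of an appearance-based eye center locator in the isophote-voting style referred to in the Head Pose Tracking demo below. Everything in it, from the smoothing scale to the vote weighting, is an illustrative assumption rather than the demo's actual implementation:

```python
import cv2
import numpy as np

def locate_eye_center(eye_gray, sigma=1.5):
    """Sketch of isophote-based voting for the eye (pupil) center."""
    L = cv2.GaussianBlur(eye_gray.astype(np.float64), (0, 0), sigma)
    Lx = cv2.Sobel(L, cv2.CV_64F, 1, 0, ksize=3)
    Ly = cv2.Sobel(L, cv2.CV_64F, 0, 1, ksize=3)
    Lxx = cv2.Sobel(L, cv2.CV_64F, 2, 0, ksize=3)
    Lyy = cv2.Sobel(L, cv2.CV_64F, 0, 2, ksize=3)
    Lxy = cv2.Sobel(L, cv2.CV_64F, 1, 1, ksize=3)

    grad2 = Lx**2 + Ly**2
    denom = Ly**2 * Lxx - 2.0 * Lx * Lxy * Ly + Lx**2 * Lyy
    safe = np.abs(denom) > 1e-9

    # Displacement D = -(Lx, Ly)(Lx^2 + Ly^2) / denom points from each
    # pixel to the center of the osculating circle of its isophote.
    k = np.zeros_like(L)
    k[safe] = -grad2[safe] / denom[safe]
    h, w = L.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx = np.round(xs + Lx * k).astype(int)
    cy = np.round(ys + Ly * k).astype(int)

    # Curvedness weighs votes toward highly curved (circular) structures.
    weight = np.sqrt(Lxx**2 + 2.0 * Lxy**2 + Lyy**2)

    # Keep votes that point against the gradient (centers of dark blobs,
    # i.e. the pupil/iris) and that fall inside the image.
    valid = safe & (denom > 0) & (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
    acc = np.zeros_like(L)
    np.add.at(acc, (cy[valid], cx[valid]), weight[valid])

    acc = cv2.GaussianBlur(acc, (0, 0), sigma)
    cy_best, cx_best = np.unravel_index(np.argmax(acc), acc.shape)
    return cx_best, cy_best
```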


Head Pose Tracking
Head pose and eye location estimation are two closely related problems with similar application areas. In recent years, they have been studied individually in numerous works in the literature. Previous research shows that cylindrical head models and isophote-based schemes provide satisfactory precision in head pose and eye location estimation, respectively. However, the eye locator alone cannot accurately locate the eyes in the presence of extreme head poses, so head pose cues are well suited to enhance its accuracy in those situations. We therefore propose a hybrid scheme in which the transformation matrix obtained from the head pose is used to normalize the eye regions and, in turn, the transformation matrix generated by the found eye locations is used to correct the pose estimation procedure. The scheme is designed to (1) enhance the accuracy of eye location estimation in low resolution videos, (2) extend the operating range of the eye locator, and (3) improve the accuracy and re-initialization capabilities of the pose tracker.
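A skeleton of this feedback loop might look as follows. Here `pose_tracker` and its `update`/`correct` interface are hypothetical stand-ins for the cylindrical head model tracker, and `locate_eye_center` is an eye locator such as the sketch above:

```python
import cv2
import numpy as np

def track_frame(frame, pose_tracker, eye_roi, locate_eye_center):
    """One iteration of the pose/eye-location feedback loop.
    pose_tracker.update() is assumed to return a 3x3 transform mapping
    the face to a canonical frontal view; pose_tracker.correct() is
    assumed to accept an anchor point in the original frame."""
    H = pose_tracker.update(frame)

    # Pose -> eyes: warp the face to a frontal view so the eye locator
    # sees a normalized, upright eye region even under severe poses.
    frontal = cv2.warpPerspective(frame, H, (frame.shape[1], frame.shape[0]))
    x, y, w, h = eye_roi  # eye region in the normalized view
    patch = cv2.cvtColor(frontal[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    ex, ey = locate_eye_center(patch)

    # Eyes -> pose: map the found eye center back into the original frame
    # and feed it to the pose tracker to correct drift and support
    # re-initialization after tracking failures.
    pt = np.array([[[float(x + ex), float(y + ey)]]])
    eye_in_frame = cv2.perspectiveTransform(pt, np.linalg.inv(H))[0, 0]
    pose_tracker.correct(anchor=eye_in_frame)
    return eye_in_frame
```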


Driver Awareness
This system analyzes the awareness of a car driver using head pose information and the visual field. It allows studies of driver behavior and reports dangerous events (e.g. being distracted too long by the rear-view mirror, or closing the eyes too often, which indicates tiredness).
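Purely as an illustration, rules of this kind could be monitored as below; the thresholds and the sliding-window closure measure are made-up assumptions, not the system's actual criteria:

```python
import time

# Illustrative thresholds (hypothetical values, tuned per application):
MAX_OFF_ROAD_SECONDS = 2.0   # gaze/head away from the road for too long
MAX_CLOSURE_FRACTION = 0.15  # eyes closed too often over a time window

class AwarenessMonitor:
    def __init__(self, window=60.0):
        self.window = window
        self.off_road_since = None
        self.closure_events = []  # (timestamp, eyes_closed) samples

    def update(self, head_on_road, eyes_closed, now=None):
        now = time.time() if now is None else now
        alerts = []

        # Rule 1: head turned away (e.g. rear-view mirror) for too long.
        if head_on_road:
            self.off_road_since = None
        else:
            self.off_road_since = self.off_road_since or now
            if now - self.off_road_since > MAX_OFF_ROAD_SECONDS:
                alerts.append("distraction")

        # Rule 2: fraction of eyes-closed samples over a sliding window,
        # a rough proxy for drowsiness.
        self.closure_events.append((now, eyes_closed))
        self.closure_events = [(t, c) for t, c in self.closure_events
                               if now - t <= self.window]
        closed = sum(c for _, c in self.closure_events)
        if closed / len(self.closure_events) > MAX_CLOSURE_FRACTION:
            alerts.append("drowsiness")
        return alerts
```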


Facial Expression Recognition
The most expressive way humans display emotions is through facial expressions. Humans detect and interpret faces and facial expressions in a scene with little or no effort; still, developing an automated system that accomplishes this task is rather difficult. There are several related problems: detecting an image segment as a face, extracting and tracking facial features, extracting the facial expression information, and classifying the expression (e.g., into emotion categories). We present our fully integrated system, which performs these operations accurately and in real time, representing a major step forward in our aim of achieving humanlike interaction between man and machine.
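A compressed sketch of such a pipeline is shown below, using off-the-shelf OpenCV components as stand-ins; the demo's actual detector, tracked feature points, Bayesian network classifier, and exact emotion categories are assumptions here:

```python
import cv2
import numpy as np

EMOTIONS = ["angry", "disgusted", "afraid", "happy", "sad", "surprised", "neutral"]

def classify_expression(prev_gray, gray, face_cascade, points, classifier):
    """points: float32 array of shape (N, 1, 2) holding the tracked facial
    feature locations from the previous frame."""
    # 1) Face detection (OpenCV's stock Haar cascade as a stand-in).
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) == 0:
        return None, points

    # 2) Track the facial feature points with pyramidal Lucas-Kanade flow.
    new_points, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                        points, None)

    # 3) The per-point motions form the feature vector.
    motion = (new_points - points).reshape(1, -1)

    # 4) Classify the motion vector into an emotion category. `classifier`
    #    stands in for the Bayesian network used in the demo; any model
    #    exposing predict_proba fits this slot.
    probs = classifier.predict_proba(motion)[0]
    return EMOTIONS[int(np.argmax(probs))], new_points
```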


Eye Tracking Using a Webcam
We propose a system which estimates the visual gaze of a user in a controlled environment (e.g. sitting in front of a screen). To keep computational costs to a minimum, the eye corner locator is built upon the same technology as the eye center locator, tweaked for this specific task. If high mapping precision is not a priority of the application, we claim that the system can achieve acceptable accuracy without requiring additional dedicated hardware. We believe that this could bring new gaze-based methodologies for human-computer interaction into the mainstream.
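One common way to turn the eye center and eye corner locations into screen coordinates, shown here only as an assumption about how such a mapping could work, is a polynomial regression over the center-to-corner vector, fitted while the user fixates a few known on-screen calibration targets:

```python
import numpy as np

def fit_gaze_mapping(eye_vectors, screen_points):
    """Fit a second-order polynomial mapping from (eye center - eye corner)
    vectors to screen coordinates. eye_vectors is (N, 2) and screen_points
    is (N, 2), gathered while the user fixates N known on-screen targets."""
    x, y = eye_vectors[:, 0], eye_vectors[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs  # shape (6, 2): one column per screen axis

def gaze_to_screen(eye_vector, coeffs):
    x, y = eye_vector
    return np.array([1.0, x, y, x * y, x**2, y**2]) @ coeffs
```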

Sound Generation Using Facial Expressions
We present an audiovisual creativity tool that automatically recognizes facial expressions in real time, producing sounds in combination with images. The facial expression recognition component detects and tracks a face and outputs a feature vector of motions of specific locations in the face. The feature vector is used as input to a Bayesian network which classifies facial expressions into several categories (e.g., angry, disgusted, happy, etc.). The classification results are used along with the feature vector to generate a combination of sounds that change in real time depending on the person’s facial expressions.
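The final mapping from classification results to sound could, for instance, blend per-expression presets by the classifier's posterior probabilities so the audio changes smoothly with the face; the preset names and values below are hypothetical, not the demo's actual sound design:

```python
# Hypothetical mapping from expression categories to sound parameters.
SOUND_PRESETS = {
    "happy":     {"base_freq": 440.0, "tempo": 1.4},
    "angry":     {"base_freq": 110.0, "tempo": 1.8},
    "sad":       {"base_freq": 220.0, "tempo": 0.6},
    "disgusted": {"base_freq": 150.0, "tempo": 0.9},
    "neutral":   {"base_freq": 330.0, "tempo": 1.0},
}

def mix_sound_parameters(probs):
    """Blend preset parameters by the classifier's posterior probabilities,
    so the sound changes gradually as the expression changes."""
    mixed = {"base_freq": 0.0, "tempo": 0.0}
    for emotion, p in probs.items():
        preset = SOUND_PRESETS[emotion]
        for key in mixed:
            mixed[key] += p * preset[key]
    return mixed

# Example: a mostly-happy face yields a bright, fast parameter mix.
print(mix_sound_parameters({"happy": 0.7, "neutral": 0.3}))
```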