Accurate classification of eye state is a prerequisite for preventing automobile accidents caused by driver drowsiness. Previous classification methods, based on features extracted from a single eye, are vulnerable to eye-localization errors and visual obstructions, and most apply a fixed classification threshold, irrespective of variations in the driver's eye shape and texture. To address these deficiencies, we propose a new eye state classification method that combines three innovations: (1) extraction and fusion of features from both eyes, (2) initialization of driver-specific thresholds to account for differences in eye shape and texture, and (3) modeling of driver-specific blinking patterns during normal (non-drowsy) driving. Experimental results show that the proposed method achieves significant improvements in detection accuracy.
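The first two innovations can be illustrated with a minimal sketch: fusing per-eye openness scores (with a fallback when one eye is occluded) and calibrating a per-driver threshold from frames where the eyes are known to be open. This is not the authors' implementation; the function names, the averaging scheme, and the margin factor are all illustrative assumptions.

```python
# Illustrative sketch only, NOT the paper's method: fuse openness scores
# from both eyes and derive a driver-specific decision threshold.

def fuse_eye_features(left_score, right_score, left_valid=True, right_valid=True):
    """Average openness scores from both eyes; fall back to the visible
    eye when one is occluded or its localization failed (assumed policy)."""
    if left_valid and right_valid:
        return 0.5 * (left_score + right_score)
    if left_valid:
        return left_score
    if right_valid:
        return right_score
    return None  # neither eye usable in this frame

def calibrate_threshold(open_eye_scores, margin=0.7):
    """Set a per-driver threshold from scores collected while the driver's
    eyes are known to be open (e.g. at the start of a trip). The 0.7
    margin is an arbitrary illustrative choice."""
    baseline = sum(open_eye_scores) / len(open_eye_scores)
    return margin * baseline

def classify(fused_score, threshold):
    """Label a frame relative to the driver-specific threshold."""
    if fused_score is None:
        return "unknown"
    return "open" if fused_score >= threshold else "closed"

# Example: calibrate on an alert driver, then classify new frames.
threshold = calibrate_threshold([0.82, 0.78, 0.80, 0.84])
print(classify(fuse_eye_features(0.79, 0.81), threshold))  # open
print(classify(fuse_eye_features(0.20, 0.25), threshold))  # closed
```

The point of the driver-specific calibration step is that a score which means "closed" for one driver (e.g. naturally narrow eyes) can mean "open" for another, which is exactly the failure mode of a fixed global threshold.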
Funding Information: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2012-0005223).