
Daehyun Lee, Jongmin Lee, and Kee-Eung Kim (2016)

Multi-View Automatic Lip-Reading using Neural Network

In: ACCV 2016 Workshop on Multi-view Lip-reading Challenges.

It is well known that automatic lip-reading (ALR), also known as visual speech recognition (VSR), enhances the performance of speech recognition in noisy environments and also has standalone applications. However, ALR is a challenging task due to the variety of lip shapes and the ambiguity of visemes (the basic units of visual speech information). In this paper, we tackle ALR as a classification task using an end-to-end neural network based on convolutional neural network (CNN) and long short-term memory (LSTM) architectures. We conduct single-, cross-, and multi-view experiments in a speaker-independent setting, using various network configurations to integrate the multi-view data. We achieve average classification accuracies of 77.9%, 83.8%, and 78.6% on the single-, cross-, and multi-view settings, respectively. These results surpass the best preliminary single-view score (76%) reported by the ACCV 2016 workshop on multi-view lip-reading/audiovisual challenges, and show that additional view information helps improve the performance of ALR with a neural network architecture.
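The CNN + LSTM pipeline described in the abstract can be sketched as follows: per-frame convolutional features are extracted from a sequence of mouth-region crops, an LSTM summarizes the sequence over time, and a softmax layer classifies the utterance. This is a minimal NumPy sketch with random weights and toy dimensions (frame size, kernel count, hidden size, and class count are all assumptions for illustration), not the paper's trained model or exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_features(frames, kernels):
    """Valid 2-D convolution + ReLU + global average pooling per kernel.
    frames: (T, H, W) grayscale mouth-region crops; returns (T, K) features."""
    T, H, W = frames.shape
    K, kh, kw = kernels.shape
    feats = np.zeros((T, K))
    for t in range(T):
        for k in range(K):
            resp = [np.sum(frames[t, i:i + kh, j:j + kw] * kernels[k])
                    for i in range(H - kh + 1)
                    for j in range(W - kw + 1)]
            feats[t, k] = max(float(np.mean(resp)), 0.0)  # ReLU
    return feats

def lstm_last_hidden(xs, Wx, Wh, b):
    """Run an LSTM over the feature sequence; return the final hidden state."""
    hidden = Wh.shape[1]
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:
        z = Wx @ x + Wh @ h + b        # all four gate pre-activations at once
        i, f, o, g = np.split(z, 4)    # input, forget, output gates; candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy dimensions (assumptions, not from the paper): 10 frames of 8x8 crops,
# 4 conv kernels, 16 LSTM units, 10 utterance classes.
T, H, W, K, HID, CLASSES = 10, 8, 8, 4, 16, 10
frames = rng.random((T, H, W))
kernels = rng.standard_normal((K, 3, 3))
Wx = rng.standard_normal((4 * HID, K)) * 0.1
Wh = rng.standard_normal((4 * HID, HID)) * 0.1
b = np.zeros(4 * HID)
Wout = rng.standard_normal((CLASSES, HID)) * 0.1

feats = conv_features(frames, kernels)        # (T, K) per-frame features
h_final = lstm_last_hidden(feats, Wx, Wh, b)  # temporal summary of the clip
probs = softmax(Wout @ h_final)               # distribution over classes
```

A multi-view configuration could, for example, run such a front end per camera view and fuse the per-view representations before classification; the paper compares several such configurations.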