A Robust Shape Reconstruction Method for Facial Feature Point Detection

Computational Intelligence and Neuroscience, Volume 2017, Article ID 4579398, 11 pages. https://doi.org/10.1155/2017/4579398

Research Article

School of Automation Engineering, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu 611731, China

Correspondence should be addressed to Shuqiu Tan and Dongyi Chen

Received 24 October 2016; Revised 18 January 2017; Accepted 30 January 2017; Published 19 February 2017

Academic Editor: Ezequiel López-Rubio

Copyright © 2017 Shuqiu Tan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Facial feature point detection has seen substantial research advances in recent years, and numerous methods have been developed and applied in practical face analysis systems. However, it remains a challenging task because of the large variability in expressions and poses and the presence of occlusions in real-world images. In this paper, we present a robust sparse reconstruction method for the face alignment problem. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, is learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more general, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than state-of-the-art methods.
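To illustrate the idea of reconstructing shape increments from sparse codes of local appearance features, the sketch below shows one possible cascade of sparse-coding stages. It assumes pre-learned coupled dictionaries and a local feature extractor; the function names, parameters, and the use of scikit-learn's sparse_encode are illustrative assumptions, not the authors' exact implementation.

```python
# A minimal sketch, assuming pre-learned coupled dictionaries and a
# feature extractor; names and parameters are hypothetical.
import numpy as np
from sklearn.decomposition import sparse_encode

def reconstruct_shape(image_features, init_shape, appearance_dicts,
                      shape_increment_dicts, n_nonzero=10):
    """Refine a face shape with a cascade of sparse-coding stages.

    image_features: callable(shape) -> (1, d) local appearance vector
                    sampled around the current landmark estimates.
    appearance_dicts / shape_increment_dicts: lists of coupled
                    dictionaries, one pair per cascade stage.
    """
    shape = init_shape.copy()                      # (2 * n_landmarks,)
    for D_app, D_shape in zip(appearance_dicts, shape_increment_dicts):
        phi = image_features(shape)                # (1, d) appearance features
        # Sparse code of the features over the local appearance dictionary.
        code = sparse_encode(phi, D_app, algorithm='omp',
                             n_nonzero_coefs=n_nonzero)   # (1, k)
        # Reconstruct the shape increment with the coupled shape dictionary.
        delta = code @ D_shape                     # (1, 2 * n_landmarks)
        shape = shape + delta.ravel()
    return shape
```

In this sketch the two dictionaries are coupled through a shared sparse code: the code is estimated from appearance features alone and then reused to synthesize the shape increment, so no direct regression from features to shapes is required.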





Authors: Shuqiu Tan, Dongyi Chen, Chenggang Guo, and Zhiqi Huang

Source: https://www.hindawi.com/


