Adaptive Deep Supervised Autoencoder Based Image Reconstruction for Face Recognition


Mathematical Problems in Engineering, Volume 2016, Article ID 6795352, 14 pages

Research Article

Key Laboratory of Pattern Recognition and Intelligent Information Processing, Institutions of Higher Education of Sichuan Province, Chengdu University, Chengdu, Sichuan 610106, China

School of Computer and Software, Sichuan University, Chengdu, Sichuan 610065, China

School of Reliability and System Engineering, Beihang University, Beijing 100191, China

Received 3 June 2016; Revised 30 July 2016; Accepted 28 September 2016

Academic Editor: Simone Bianco

Copyright © 2016 Rongbing Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Based on a special type of denoising autoencoder (DAE) and image reconstruction, we present a novel supervised deep learning framework for face recognition (FR). Unlike existing deep autoencoders, which are unsupervised face recognition methods, the proposed method takes the class label information of the training samples into account during the deep learning procedure and can automatically discover the underlying nonlinear manifold structures. Specifically, we define an Adaptive Deep Supervised Network Template (ADSNT) with a supervised autoencoder that is trained to extract characteristic features from pairs of corrupted and clean facial images and to reconstruct the corresponding clean facial images. The reconstruction is realized by a so-called "bottleneck" neural network that learns to map face images into low-dimensional vectors and to reconstruct the respective corresponding face images from those vectors. Once the ADSNT has been trained, a new face image can be recognized by comparing its reconstruction with each individual gallery image. Extensive experiments on three databases, AR, PubFig, and Extended Yale B, demonstrate that the proposed method can significantly improve the accuracy of face recognition under large variations in illumination and pose and under partial occlusion.
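The recognition-by-reconstruction idea in the abstract can be sketched with a toy linear stand-in: a bottleneck map is fit in closed form to send corrupted training vectors to their clean (class-labeled) targets, and a probe is classified by comparing its reconstruction with each gallery vector. This is only an illustrative sketch, not the authors' deep ADSNT; the data, dimensions, and the closed-form least-squares training are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for face vectors: 3 identities, D-dimensional.
D, K = 64, 8                                   # input dim, bottleneck dim
gallery = rng.normal(size=(3, D))              # one clean "image" per identity
train = np.vstack([g + 0.1 * rng.normal(size=(5, D)) for g in gallery])
targets = np.repeat(gallery, 5, axis=0)        # supervised target: the clean image

# Closed-form linear "autoencoder": least-squares map corrupted -> clean,
# truncated to a rank-K bottleneck via SVD (encoder enc, decoder dec).
M = np.linalg.pinv(train) @ targets            # (D, D) reconstruction map
U, s, Vt = np.linalg.svd(M)
enc = U[:, :K]                                 # x @ enc -> K-dim code
dec = np.diag(s[:K]) @ Vt[:K]                  # code @ dec -> reconstruction

def recognize(x):
    """Reconstruct a probe and return the index of the nearest gallery image."""
    recon = (x @ enc) @ dec
    return int(np.argmin(np.linalg.norm(gallery - recon, axis=1)))

probe = gallery[1] + 0.2 * rng.normal(size=D)  # corrupted image of identity 1
print(recognize(probe))
```

In the paper this encoder/decoder pair is a deep nonlinear network trained on corrupted/clean image pairs; the nearest-gallery comparison of the reconstruction is the same matching step in both cases.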

Author: Rongbing Huang, Chang Liu, Guoqi Li, and Jiliu Zhou
