University of Surrey

When Face Recognition Meets with Deep Learning: an Evaluation of Convolutional Neural Networks for Face Recognition

Hu, G, Yang, Y, Yi, D, Kittler, J, Christmas, WJ, Li, S and Hospedales, T (2015) When Face Recognition Meets with Deep Learning: an Evaluation of Convolutional Neural Networks for Face Recognition In: ICCV workshop ChaLearn Looking at People, 7-13 Dec 2015, Santiago, Chile.

Files :
Text (licence) : SRI_deposit_agreement.pdf (33kB), available under licence (see the attached licence file)
Text : hu-LAP-2015.pdf (478kB), available under licence (see the attached licence file)

Abstract

Deep learning, in particular the Convolutional Neural Network (CNN), has recently achieved promising results in face recognition. However, it remains an open question why CNNs work well and how to design a 'good' architecture. Existing works tend to focus on reporting CNN architectures that work well for face recognition rather than investigating the reasons. In this work, we conduct an extensive evaluation of CNN-based face recognition systems (CNN-FRS) on a common ground to make our work easily reproducible. Specifically, we use the public LFW (Labeled Faces in the Wild) database to train our CNNs, unlike most existing CNNs, which are trained on private databases. We propose three CNN architectures, which are the first reported architectures trained using LFW data. This paper quantitatively compares CNN architectures and evaluates the effect of different implementation choices. We identify several useful properties of CNN-FRS. For instance, the dimensionality of the learned features can be significantly reduced without an adverse effect on face recognition accuracy. In addition, we evaluate a traditional metric learning method that exploits CNN-learned features. Experiments show that two factors crucial to good CNN-FRS performance are the fusion of multiple CNNs and metric learning. To make our work reproducible, the source code and models will be made publicly available.
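As a rough illustration of the ingredients highlighted in the abstract, the Python sketch below shows one way to fuse features from multiple CNNs, reduce their dimensionality, and score LFW-style verification pairs by cosine similarity. This is not the authors' released implementation (which the paper states will be made public): the function names, the concatenation-based fusion, the PCA reduction, and the random stand-in features are all illustrative assumptions.

    # Minimal sketch (not the authors' code): fusing features from multiple CNNs,
    # compressing them, and verifying face pairs by cosine similarity.
    import numpy as np

    def fuse_features(feature_sets):
        # L2-normalise each CNN's features, then concatenate (one simple fusion strategy).
        normed = [f / np.linalg.norm(f, axis=1, keepdims=True) for f in feature_sets]
        return np.concatenate(normed, axis=1)

    def pca_reduce(features, dim):
        # Project onto the top `dim` principal components, illustrating that the
        # learned features can be compressed without discarding most of the variance.
        mean = features.mean(axis=0)
        centred = features - mean
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        return centred @ vt[:dim].T, (mean, vt[:dim])

    def verify(feat_a, feat_b, threshold=0.5):
        # Declare 'same person' when the cosine similarity of a face pair exceeds
        # a threshold that would be tuned on a validation split.
        a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
        b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
        return (a * b).sum(axis=1) > threshold

    # Example with random stand-in features from two hypothetical CNNs
    # (2,000 faces, 512-D each); real features would come from the trained networks.
    rng = np.random.default_rng(0)
    cnn1 = rng.normal(size=(2000, 512))
    cnn2 = rng.normal(size=(2000, 512))
    fused = fuse_features([cnn1, cnn2])          # 1024-D fused descriptor
    reduced, _ = pca_reduce(fused, dim=128)      # compressed to 128-D
    same = verify(reduced[0::2], reduced[1::2])  # verify consecutive pairs
    print(same.shape)

In the paper's evaluation, metric learning (rather than plain cosine similarity) and the fusion of several CNNs are the two factors identified as crucial; the sketch above only mimics the overall pipeline shape.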

Item Type: Conference or Workshop Item (Conference Paper)
Divisions : Faculty of Engineering and Physical Sciences > Electronic Engineering > Centre for Vision Speech and Signal Processing
Authors : Hu, G; Yang, Y; Yi, D; Kittler, J; Christmas, WJ; Li, S; Hospedales, T (Email and ORCID: unspecified for all authors)
Date : 12 December 2015
Identification Number (DOI) : 10.1109/ICCVW.2015.58
Copyright Disclaimer : © 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Additional Information : This is the arXiv version of the paper.
Depositing User : Symplectic Elements
Date Deposited : 18 Feb 2016 10:36
Last Modified : 18 Feb 2016 10:36
URI: http://epubs.surrey.ac.uk/id/eprint/809582
