University of Surrey


Generalizing DET Curves Across Application Scenarios

Poh, N and Chan, CH (2015) Generalizing DET Curves Across Application Scenarios. IEEE Transactions on Information Forensics and Security, 10 (10), pp. 2171-2181.

Files:

Generalized_DET_curves_with_factors15.pdf - Accepted version manuscript (776 kB). Available under licence: see the attached licence file.
Generalized_DET_curves_with_factors15_Appendix.pdf - Accepted version manuscript (222 kB). Available under licence: see the attached licence file.
SRI_deposit_agreement.pdf - Licence (33 kB).

Abstract

Assessing biometric performance is challenging because an experimental outcome depends on the choice of demographics and on the application scenario of the experiment. If one can quantify biometric samples into good, bad, and ugly categories for one application, the proportion of these categories is likely to be different for another application. As a result, a typical performance curve of a biometric experiment cannot generalise to a different application setting, even though the same system is used. We propose an algorithm that is capable of generalising a biometric performance curve, expressed as a Detection Error Trade-off (DET) or, equivalently, a Receiver Operating Characteristic (ROC) curve, by allowing the user (system operator, policy-maker, biometric researcher) to explicitly set the proportion of data differently. This offers the user the possibility of simulating different operating conditions that better match the setting of a target application. We demonstrate the utility of the algorithm in three scenarios, namely, estimating the system performance under varying quality; under spoof and zero-effort attacks; and under cross-device matching. Based on the results of 1300 use-case experiments, we found that the quality of prediction on unseen (test) data, measured in terms of coverage, is typically between 60% and 80%, which is significantly better than random (50%).
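The core idea in the abstract, re-weighting match scores so that the error trade-off reflects a user-chosen mix of sample categories (e.g. good/bad quality) rather than the mix that happened to occur in the test set, can be sketched as follows. This is a simplified illustration of that idea, not the authors' algorithm; the function and category names are hypothetical.

```python
import numpy as np

def reweighted_rates(genuine, impostor, gen_cat, imp_cat, props, thresholds):
    """False-reject and false-accept rates at each threshold, with each
    sample re-weighted so category proportions match the target mix `props`
    (a dict mapping category name to its desired proportion)."""
    def weights(cats):
        cats = np.asarray(cats)
        w = np.empty(len(cats))
        for c, p in props.items():
            mask = cats == c
            observed = mask.mean()      # proportion of category c in the data
            w[mask] = p / observed      # up/down-weight toward the target mix
        return w / w.sum()              # normalise so rates stay in [0, 1]

    wg, wi = weights(gen_cat), weights(imp_cat)
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    frr = np.array([(wg * (genuine < t)).sum() for t in thresholds])   # false rejects
    far = np.array([(wi * (impostor >= t)).sum() for t in thresholds]) # false accepts
    return frr, far

# Example: simulate a target application with a 50/50 good/bad quality mix,
# even though the test data contains only 20% bad-quality samples.
rng = np.random.default_rng(0)
gen = np.r_[rng.normal(2, 1, 80), rng.normal(1, 1, 20)]    # good, then bad genuine scores
imp = np.r_[rng.normal(0, 1, 80), rng.normal(0.5, 1, 20)]  # good, then bad impostor scores
gcat = ["good"] * 80 + ["bad"] * 20
icat = ["good"] * 80 + ["bad"] * 20
thr = np.linspace(-2, 4, 50)
frr, far = reweighted_rates(gen, imp, gcat, icat, {"good": 0.5, "bad": 0.5}, thr)
```

Plotting `frr` against `far` (on normal-deviate axes) would give a DET curve for the simulated operating condition.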

Item Type: Article
Subjects : Computer Science
Divisions : Faculty of Engineering and Physical Sciences > Computing Science
Authors :
Poh, N
Chan, CH
Date : 1 October 2015
Identification Number : https://doi.org/10.1109/TIFS.2015.2434320
Copyright Disclaimer : Copyright 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Uncontrolled Keywords : Science & Technology, Technology, Computer Science, Theory & Methods, Engineering, Electrical & Electronic, Biometrics, performance evaluation/prediction, bootstrap subset
Related URLs :
Depositing User : Symplectic Elements
Date Deposited : 20 Oct 2016 08:19
Last Modified : 20 Oct 2016 08:19
URI: http://epubs.surrey.ac.uk/id/eprint/812520





© The University of Surrey, Guildford, Surrey, GU2 7XH, United Kingdom.
+44 (0)1483 300800