Fusion of spoofing counter measures for the REPLAY-ATTACK database (competition entry for 2nd competition on counter measures to 2D facial spoofing attacks, ICB 2013)
Project description
This package implements:
- cropping face bounding boxes from the Replay-Attack database
- extracting GLCM features for spoofing detection
- generating classification scores for the features using SVM and LDA
- extracting other types of features using the satellite packages it depends on
- calculating Q-statistics and fusing classification scores at score level using the satellite packages it depends on
This satellite package depends on the following satellite packages: antispoofing.lbp, antispoofing.lbptop, antispoofing.motion, antispoofing.fusion and antispoofing.utils. This dependence provides an interface to the scripts in these satellite packages through antispoofing.competition_icb2013, which allows easy generation of spoofing scores using different types of features, as well as analysis of the common errors and fusion of the methods at score level.
The fused system, consisting of several of these counter-measures, was submitted to The 2nd competition on counter measures to 2D facial spoofing attacks, held in conjunction with ICB 2013.
If you use this package and/or its results, please cite the following publications:
Bob as the core framework used to run the experiments:
@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, }
The 2nd competition on counter measures to 2D facial spoofing attacks:
@INPROCEEDINGS{Chingovska_ICB2013_2013, author = {Chingovska, Ivana and others}, keywords = {Anti-spoofing, Competition, Counter-Measures, face spoofing, presentation attack}, title = {The 2nd competition on counter measures to 2D facial spoofing attacks}, booktitle = {International Conference on Biometrics 2013}, year = {2013} }
If you wish to report problems or improvements concerning this code, please contact the authors of the above mentioned papers.
Raw data
The data used in the paper is publicly available and should be downloaded and installed before trying the programs described in this package. Visit the REPLAY-ATTACK database portal for more information.
Installation
There are 2 options you can follow to get this package installed and operational on your computer: you can use automatic installers like pip (or easy_install) or manually download, unpack and use zc.buildout to create a virtual work environment just for this package.
Using an automatic installer
Using pip is the easiest (shell commands are marked with a $ sign):
$ pip install antispoofing.competition_icb2013
You can also do the same with easy_install:
$ easy_install antispoofing.competition_icb2013
This will download and install this package plus any other required dependencies. It will also verify if the version of Bob you have installed is compatible.
This scheme works well with virtual environments by virtualenv or if you have root access to your machine. Otherwise, we recommend you use the next option.
Using zc.buildout
Download the latest version of this package from PyPI and unpack it in your working area. The installation of the toolkit itself uses buildout. You don’t need to understand its inner workings to use this package. Here is a recipe to get you started:
$ python bootstrap.py
$ ./bin/buildout
These 2 commands should download and install all non-installed dependencies and get you a fully operational test and development environment.
User Guide
This section explains how to use the package in order to: a) crop face bounding boxes from Replay-Attack; b) calculate the GLCM features on the Replay-Attack database; c) generate LBP, LBP-TOP and motion correlation features on Replay-Attack; d) generate classification scores using Linear Discriminant Analysis (LDA), Support Vector Machines (SVM) and Multi-Layer Perceptron (MLP); e) calculate common errors and Q-statistics for each of the features; f) perform fusion at score level for the different classification scores.
For generation of LBP, LBP-TOP and motion-correlation features, please refer to the corresponding satellite packages (antispoofing.lbp , antispoofing.lbptop , antispoofing.motion respectively). For fusion at score-level, please refer to the corresponding satellite package (antispoofing.fusion).
Crop face bounding boxes
The features used in the paper are generated over the normalized face bounding boxes of the frames in the videos. The script to be used for face cropping and normalization is ./bin/crop_faces.py. It outputs one .hdf5 file per video, containing a 3D numpy.array of pixel values of the normalized cropped frames. The first dimension of the array corresponds to the frames of the video:
$ ./bin/crop_faces.py replay
To execute this script for the anonymized test-set, please call:
$ ./bin/crop_faces.py replay --ICB-2013
To see all the options for the script crop_faces.py, just type --help at the command line. If you want to see all the options for a specific database (e.g. protocols, lighting conditions etc.), type the following command (for Replay-Attack):
$ ./bin/crop_faces.py replay --help
This script uses the automatic face detections provided alongside the Replay-Attack database. For frames with no detection, we copy the face detection from the previous frame (if there is one). In our work, we consider all face bounding boxes smaller than 50x50 pixels as invalid detections (option --ff). Frames with no detected face or with an invalid detection (smaller than 50x50 pixels) are set to NaN in the .hdf5 files. The face bounding boxes are normalized to 64x64 pixels before storing (option -n).
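The carry-forward and validity rules described above can be sketched in plain Python. This is an illustrative sketch only, not the package's implementation; the function name and the tuple-based bounding-box representation are assumptions (the real script writes NaN frames into the .hdf5 files instead of None values):

```python
# Sketch of the detection clean-up rules: frames with no detection reuse the
# previous frame's detection; boxes smaller than 50x50 pixels are invalid.

MIN_SIZE = 50  # minimum valid width/height in pixels (cf. the --ff option)

def clean_detections(boxes):
    """boxes: one (x, y, width, height) tuple or None per frame.

    Returns one box or None per frame; None stands in for the NaN
    frames written to the .hdf5 files.
    """
    cleaned, previous = [], None
    for box in boxes:
        if box is None:
            box = previous        # copy the detection from the previous frame
        else:
            previous = box
        if box is not None and (box[2] < MIN_SIZE or box[3] < MIN_SIZE):
            box = None            # invalid (too small) detection
        cleaned.append(box)
    return cleaned
```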
Calculate the GLCM features
The first stage of the process is calculating the feature vectors on a per-frame basis. The script operates on the .hdf5 files obtained with ./bin/crop_faces.py, where the first dimension of the array corresponds to the frames of the video.
The program to be used for calculating the GLCM features is ./bin/calcglcm.py:
$ ./bin/calcglcm.py replay
To execute this script for the anonymized test-set, call:
$ ./bin/calcglcm.py replay --ICB-2013
To see all the options for the script calcglcm.py, just type --help at the command line. If you want to see all the options for a specific database (e.g. protocols, lighting conditions etc.), type the following command (for Replay-Attack):
$ ./bin/calcglcm.py replay --help
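As background, a gray-level co-occurrence matrix (GLCM) counts how often pairs of gray levels co-occur at a fixed pixel offset; texture descriptors such as contrast are then derived from it. A minimal pure-Python sketch of the idea for a horizontal one-pixel offset (illustrative only; calcglcm.py operates on the normalized face frames and its exact feature set is defined by the package):

```python
def glcm(image, levels):
    """Co-occurrence counts for horizontal neighbour pairs.

    image: 2D list of integer gray levels in [0, levels).
    Returns a levels x levels matrix of pair counts.
    """
    matrix = [[0] * levels for _ in range(levels)]
    for row in image:
        for left, right in zip(row, row[1:]):  # pairs at offset (0, 1)
            matrix[left][right] += 1
    return matrix

def contrast(matrix):
    """GLCM contrast: (i - j)^2 weighted by the normalized counts."""
    total = sum(sum(row) for row in matrix) or 1
    return sum((i - j) ** 2 * count / total
               for i, row in enumerate(matrix)
               for j, count in enumerate(row))
```

For a quantized 2-level image such as `[[0, 0, 1], [1, 1, 0]]`, the matrix counts one occurrence of each gray-level pair.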
Classification with linear discriminant analysis (LDA)
The classification with LDA is performed using the script ./bin/ldatrain.py. To execute the script with prior normalization and PCA dimensionality reduction as is done in the paper (for Replay-Attack), call:
$ ./bin/ldatrain.py -r -n replay
If you want to normalize the output scores as well, just set the --ns option.
To execute this script for the anonymized test-set, call:
$ ./bin/ldatrain.py -r -n replay --ICB-2013
This script can be used to calculate the LDA scores not only for GLCM, but also for any other features computed with any other of the satellite packages. To see all the options for this script, just type --help at the command line.
Classification with support vector machine (SVM)
The classification with SVM is performed using the script ./bin/svmtrain.py. To execute the script with prior normalization of the data in the range [-1, 1] and PCA reduction as in the paper (for Replay-Attack), call:
$ ./bin/svmtrain.py -n -r replay
If you want to normalize the output scores as well, just set the --ns option.
To execute this script for the anonymized test-set, call:
$ ./bin/svmtrain.py -n -r replay --ICB-2013
To reproduce our results, set the parameters cost=-1 (option -c -1) and gamma=3 (option -g 3) in the training of the SVM.
This script can be used to calculate the SVM scores not only for GLCM, but also for any other features computed with any other of the satellite packages. To see all the options for this script, just type --help at the command line.
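The -n normalization mentioned above maps each feature dimension into [-1, 1] using minima and maxima estimated on the training set. A sketch of the idea (function names are illustrative; the actual option is implemented inside the scripts):

```python
def minmax_params(train_features):
    """Per-dimension minima and maxima estimated on the training set."""
    mins = [min(col) for col in zip(*train_features)]
    maxs = [max(col) for col in zip(*train_features)]
    return mins, maxs

def to_unit_range(feature, mins, maxs):
    """Map one feature vector into [-1, 1]; constant dimensions map to 0."""
    out = []
    for x, lo, hi in zip(feature, mins, maxs):
        out.append(0.0 if hi == lo else 2.0 * (x - lo) / (hi - lo) - 1.0)
    return out
```

Note that test features are mapped with the training-set parameters, so values outside the training range can fall outside [-1, 1].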
Bounding box countermeasure
A fast countermeasure that takes into account the area of the face bounding box as a feature:
$ ./bin/icb2013_facebb_countermeasure.py --input-dir [Database dir] -v [database]
Q-Statistic
Fusing two or more countermeasures is one way to improve classification performance. Kuncheva and Whitaker [1] showed that combining statistically independent classifiers maximises the performance of a fusion, and to measure this dependency they proposed the Q-statistic. For two countermeasures A and B, the Q-statistic is defined as:

Q(A, B) = (N^11 * N^00 - N^01 * N^10) / (N^11 * N^00 + N^01 * N^10)

where N^ab is the number of samples for which countermeasure A makes a correct (a = 1) or incorrect (a = 0) classification and countermeasure B makes a correct (b = 1) or incorrect (b = 0) classification.
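Kuncheva and Whitaker's Q-statistic can be computed from per-sample correctness indicators of the two countermeasures. A self-contained sketch (not the script's actual implementation; the zero-denominator fallback is an assumption for degenerate inputs):

```python
def q_statistic(correct_a, correct_b):
    """Q-statistic of two classifiers from boolean per-sample correctness.

    n[a][b] counts samples where classifier A is correct (a = 1) or not
    (a = 0) and classifier B is correct (b = 1) or not (b = 0); then
    Q = (N11*N00 - N01*N10) / (N11*N00 + N01*N10).
    """
    n = [[0, 0], [0, 0]]
    for a, b in zip(correct_a, correct_b):
        n[int(a)][int(b)] += 1
    numerator = n[1][1] * n[0][0] - n[0][1] * n[1][0]
    denominator = n[1][1] * n[0][0] + n[0][1] * n[1][0]
    return 0.0 if denominator == 0 else numerator / denominator
```

Identical classifiers give Q = 1, while classifiers whose errors are unrelated give Q near 0, which is the regime where fusion helps most.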
To run the Q-Statistic script call:
$ ./bin/icb2013_qstatistic.py --input-dir [Set of scores of each countermeasure] -v [database]
Generating other types of features
This package depends on other satellite packages for calculating other types of features: LBP, LBP-TOP and motion correlation. For more details and to generate these types of features, please refer to the corresponding satellite packages (antispoofing.lbp, antispoofing.lbptop and antispoofing.motion, respectively). Note that it is possible to call the scripts belonging to these other satellite packages from within the antispoofing.competition_icb2013 satellite package.
To generate classification scores for the other types of features, you can use the methods provided by this or the other corresponding satellite packages.
Fusion of counter-measures
The classification scores obtained using different features and classification techniques can be fused at score level. To read about the available fusion techniques as well as to perform the fusion, please refer to the corresponding satellite package antispoofing.fusion. Note that you can call the scripts belonging to the antispoofing.fusion satellite package from within the antispoofing.competition_icb2013 satellite package.
Generating error rates
To calculate the threshold on the classification scores of a single or a fused counter-measure, use ./bin/eval_threshold.py. Note that as an input argument you need to give the file with the development scores used to evaluate the threshold. To calculate the error rates, use ./bin/apply_threshold.py. To see all the options for these two scripts, just type --help at the command line.
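The idea behind the two scripts: pick a threshold on the development scores (for example, the one balancing false acceptances and false rejections) and then measure the error rates on the test scores at that fixed threshold. A self-contained sketch of that idea, not the scripts' actual implementation (function names and the higher-score-means-real convention are assumptions):

```python
def error_rates(real_scores, attack_scores, threshold):
    """Return (FAR, FRR) at the given threshold.

    Convention: higher scores mean 'real access', so real samples scoring
    below the threshold are false rejections and attack samples scoring at
    or above it are false acceptances.
    """
    far = sum(s >= threshold for s in attack_scores) / len(attack_scores)
    frr = sum(s < threshold for s in real_scores) / len(real_scores)
    return far, frr

def eer_threshold(real_scores, attack_scores):
    """Threshold on the development scores minimizing |FAR - FRR|."""
    def gap(t):
        far, frr = error_rates(real_scores, attack_scores, t)
        return abs(far - frr)
    return min(sorted(set(real_scores) | set(attack_scores)), key=gap)
```

The threshold chosen on the development set is then applied unchanged to the test scores, and the Half Total Error Rate is the mean of the resulting FAR and FRR.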
References
[1] L. I. Kuncheva and C. J. Whitaker, “Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy,” Mach. Learn., vol. 51, pp. 181–207, May 2003.