
Running baseline experiments and evaluations for the IJCB 2017 UCCS challenge


This package implements the baseline algorithms and the evaluation for parts 2 and 3 of the face recognition challenge. It relies on the signal processing and machine learning library Bob. For installation instructions and requirements of Bob, please refer to the Bob web page.

Dataset

This package does not include the original image and protocol files for the competition. Please register on the Competition web page and download the UCCS dataset from there.

Installation

The installation of this package follows the Buildout structure. After installing Bob and extracting this package, please run the following command lines to complete the installation:

$ python bootstrap-buildout.py
…
$ ./bin/buildout
…

The installation procedure automatically generates executable files inside the bin directory, which can be used to run the baseline algorithms or to evaluate the baseline (and your own) algorithms.

Running the Baselines

There are two scripts to run the baseline, one for each part.

Face Detection

The first script is a face detection script, which will detect the faces in the validation (and test) set. The baseline face detector simply uses Bob’s built-in face detector bob.ip.facedetect, which is neither optimized for blurry faces nor for profiles.
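
For illustration, a minimal sketch of what the detector does for a single image might look as follows. It assumes Bob's bob.io.base, bob.ip.color and bob.ip.facedetect interfaces, the file path is only a placeholder, and the actual baseline script additionally handles several faces per image, the result file format, and parallelization:

    import bob.io.base
    import bob.ip.color
    import bob.ip.facedetect

    # load one image (placeholder path) and convert it to grayscale if needed
    image = bob.io.base.load("path/to/uccs/validation/image.jpg")
    if image.ndim == 3:
        image = bob.ip.color.rgb_to_gray(image)

    # run Bob's built-in face detector; this call returns only the most prominent face
    bounding_box, quality = bob.ip.facedetect.detect_single_face(image)
    print("top-left:", bounding_box.topleft, "size:", bounding_box.size, "quality:", quality)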

You can call the face detector baseline script using:

$ ./bin/baseline_detector.py

Please refer to $ ./bin/baseline_detector.py -h for all possible options. Here is a subset of options that you might want or need to change:

--data-directory: Specify the directory into which you have downloaded the UCCS dataset

--result-file: The file to write the detection results into; this will be in the required format

--verbose: Increase the verbosity of the script; using --verbose --verbose or -vv is recommended; -vvv will write more information

--debug: Run only over the specified number of images; for debug purposes only

--display: Display the detected bounding boxes and the ground truth; for debug purposes only

--parallel: Run in the given number of parallel processes; can speed up the processing tremendously

On a machine with 32 cores, a good command line for the full baseline experiment would be:

$ ./bin/baseline_detector.py --data-directory YOUR-DATA-DIRECTORY -vv --parallel 32

To run a small-scale experiment, i.e., to display the detected faces on 20 images, a good command line might be:

$ ./bin/baseline_detector.py --data-directory YOUR-DATA-DIRECTORY -vvv --display --debug 20

Face Recognition

For face recognition, we simply adopt a PCA+LDA pipeline on top of LBPHS features. The PCA+LDA projection matrix is estimated from the faces in the training set. For each person, the images of the training set form one class. Open-set recognition is handled by grouping all training faces of unknown identities into a separate class.

First, the faces in the training images are re-detected to ensure that the bounding boxes of training and test images have similar content. Then, the faces are rescaled and cropped to a resolution of 64x80 pixels. Afterwards, LBPHS features are extracted from these images, and a PCA+LDA projection matrix is computed. All training features are projected into the PCA+LDA subspace. For each identity (including the unknown identity -1), the average of the projected features is stored as a template.

During testing, all faces in each image are detected and cropped, and LBPHS features are extracted. These probe features are projected into the same PCA+LDA subspace and compared to all templates using the Euclidean distance. For each detected face, the 10 identities with the smallest distances are obtained; if the unknown identity -1 is among them, all less similar identities are discarded. These scores are written into the score file in the required format. A rough sketch of this pipeline is given below.
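
To make the pipeline structure explicit, here is a self-contained sketch of the template building and probe scoring described above. It is not the baseline implementation: scikit-learn's PCA and LDA stand in for Bob's implementations, the LBPHS features are assumed to be pre-extracted into plain numpy arrays, and the function names and the amount of retained PCA variance are illustrative choices only:

    import numpy
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def train_templates(features, labels):
        """Estimates the PCA+LDA subspace and one average template per identity.

        features : 2D numpy array of LBPHS vectors of the (re-detected) training faces
        labels   : numpy array with one identity label per row; unknowns are labeled -1
        """
        pca = PCA(n_components=0.99).fit(features)   # retained variance is a free choice here
        lda = LinearDiscriminantAnalysis().fit(pca.transform(features), labels)
        projected = lda.transform(pca.transform(features))
        templates = {label: projected[labels == label].mean(axis=0)
                     for label in numpy.unique(labels)}
        return pca, lda, templates

    def score_probe(probe_feature, pca, lda, templates, top=10):
        """Returns up to ``top`` (identity, distance) pairs for one detected face,
        truncated as soon as the unknown identity -1 is reached."""
        probe = lda.transform(pca.transform(probe_feature.reshape(1, -1)))[0]
        ranked = sorted((numpy.linalg.norm(probe - template), label)
                        for label, template in templates.items())
        results = []
        for distance, label in ranked[:top]:
            results.append((label, distance))
            if label == -1:   # the unknown identity is reached: discard all less similar ones
                break
        return results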

You can call the face recognition baseline script using:

$ ./bin/baseline_recognizer.py

Please refer to $ ./bin/baseline_recognizer.py -h for all possible options. Here is a subset of options that you might want or need to change:

--data-directory: Specify the directory into which you have downloaded the UCCS dataset

--result-file: The file to write the recognition results into; this will be in the required format

--verbose: Increase the verbosity of the script; using --verbose --verbose or -vv is recommended; -vvv will write more information

--temp-dir: Specify the directory where temporary files are stored; these files will be computed only once and reloaded if present

--force: Ignore existing temporary files and always recompute everything

--debug: Run only over the specified number of identities; for debug purposes only; will modify the file names of the temporary files and the result file

--display: Display the detected bounding boxes and the ground truth; for debug purposes only

--parallel: Run in the given number of parallel processes; can speed up the processing tremendously

On a machine with 32 cores, a good command line would be:

$ ./bin/baseline_recognizer.py --data-directory YOUR-DATA-DIRECTORY -vv --parallel 32

Evaluation

The provided evaluation scripts can be used to evaluate the validation set only, not the test set. You can use the evaluation scripts for two purposes:

  1. To plot the baseline results in comparison to your results.

  2. To make sure that your score file is in the desired format.

If you are unable to run the baseline experiments on your machine, we provide the score files for the validation set on the competition website.
