
Examples of how to use the scalable implementation of PLDA and how to reproduce the experiments of the article

Project description

This package contains scripts that show how to use the implementation of the scalable formulation of Probabilistic Linear Discriminant Analysis (PLDA) integrated into Bob, as well as how to reproduce the experiments of the article mentioned below.

If you use this package and/or its results, please cite the following publications:

  1. The original paper, in which the scalable formulation of PLDA is explained in detail:

    @article{ElShafey_TPAMI_2013,
      author = {El Shafey, Laurent and McCool, Chris and Wallace, Roy and Marcel, S{\'{e}}bastien},
      title = {A Scalable Formulation of Probabilistic Linear Discriminant Analysis: Applied to Face Recognition},
      year = {2013},
      month = jul,
      journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
      volume = {35},
      number = {7},
      pages = {1788-1794},
      doi = {10.1109/TPAMI.2013.38},
      pdf = {http://publications.idiap.ch/downloads/papers/2013/ElShafey_TPAMI_2013.pdf}
    }
  2. Bob as the core framework used to run the experiments:

    @inproceedings{Anjos_ACMMM_2012,
      author = {A. Anjos and L. El Shafey and R. Wallace and M. G\"unther and C. McCool and S. Marcel},
      title = {Bob: a free signal processing and machine learning toolbox for researchers},
      year = {2012},
      month = oct,
      booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan},
      publisher = {ACM Press},
      url = {http://publications.idiap.ch/downloads/papers/2012/Anjos_Bob_ACMMM12.pdf},
    }
  3. If you decide to use the Multi-PIE database, you should also mention the following paper, where it is introduced:

    @article{Gross_IVC_2010,
     author = {Gross, Ralph and Matthews, Iain and Cohn, Jeffrey and Kanade, Takeo and Baker, Simon},
     title = {Multi-PIE},
     journal = {Image and Vision Computing},
     year = {2010},
     month = may,
     volume = {28},
     number = {5},
     issn = {0262-8856},
     pages = {807--813},
     numpages = {7},
     doi = {10.1016/j.imavis.2009.08.002},
     url = {http://dx.doi.org/10.1016/j.imavis.2009.08.002},
     acmid = {1747071},
    }
  4. If you only use the Multi-PIE annotations, you should cite the following paper, since the annotations were made for the experiments of this work:

    @article{ElShafey_TPAMI_2013,
      author = {El Shafey, Laurent and McCool, Chris and Wallace, Roy and Marcel, S{\'{e}}bastien},
      title = {A Scalable Formulation of Probabilistic Linear Discriminant Analysis: Applied to Face Recognition},
      year = {2013},
      month = jul,
      journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
      volume = {35},
      number = {7},
      pages = {1788-1794},
      doi = {10.1109/TPAMI.2013.38},
      pdf = {http://publications.idiap.ch/downloads/papers/2013/ElShafey_TPAMI_2013.pdf}
    }

Installation

There are two options to get this package installed and operational on your computer: you can use automatic installers like pip (or easy_install), or you can manually download, unpack and use zc.buildout to create a virtual work environment just for this package.

Using an automatic installer

Using pip is the easiest (shell commands are marked with a $ sign):

$ pip install xbob.paper.tpami2013

You can also do the same with easy_install:

$ easy_install xbob.paper.tpami2013

This will download and install this package plus any other required dependencies. It will also check that the version of Bob you have installed is compatible.

This scheme works well with virtual environments by virtualenv or if you have root access to your machine. Otherwise, we recommend you use the next option.

Using zc.buildout

Download the latest version of this package from PyPI and unpack it in your working area:

$ wget http://pypi.python.org/packages/source/x/xbob.paper.tpami2013/xbob.paper.tpami2013-0.1.0a0.zip
$ unzip xbob.paper.tpami2013-0.1.0a0.zip
$ cd xbob.paper.tpami2013

The installation of the toolkit itself uses buildout. You don’t need to understand its inner workings to use this package. Here is a recipe to get you started:

$ python bootstrap.py
$ ./bin/buildout

These two commands should download and install all missing dependencies and give you a fully operational test and development environment.

Please note that this package also requires that bob (>= 1.2.0) is installed.

PLDA tutorial

The following example consists of a simple script that applies PLDA modeling to Fisher's iris dataset. It performs the following tasks:

  1. Train a PLDA model using the first two classes of the dataset

  2. Enroll a class-specific PLDA model for the third class of the dataset

  3. Compute (verification) scores for both positive and negative samples

  4. Plot the distribution of the scores and save it into a file

To run this simple example, you just need to execute the following command:

$ ./bin/plda_example_iris.py --output-img plda_example_iris.png
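To give a feel for what the script does, here is a rough, self-contained sketch of the same four steps (minus the plot) using a simplified two-covariance PLDA model in plain numpy. This is an illustration only, not bob's scalable implementation: the synthetic data, the helper names and the crude covariance estimates are all ours.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # feature dimensionality (iris also has 4 features)

# Synthetic stand-in for three iris-like classes: class centres drawn from
# N(0, B_true), samples drawn around them with within-class covariance W_true.
B_true, W_true = 4.0 * np.eye(d), 1.0 * np.eye(d)
centres = rng.multivariate_normal(np.zeros(d), B_true, size=3)
data = {c: rng.multivariate_normal(centres[c], W_true, size=50) for c in range(3)}

# 1. Train on classes 0 and 1: estimate the global mean, a (crude)
#    between-class covariance B and a within-class covariance W.
train = np.vstack([data[0], data[1]])
mu = train.mean(axis=0)
class_means = np.array([data[c].mean(axis=0) for c in (0, 1)])
B = np.cov(class_means.T, bias=True) + 1e-2 * np.eye(d)  # regularized (rank 1 from 2 classes)
W = (np.cov(data[0].T) + np.cov(data[1].T)) / 2.0

def log_gauss(z, C):
    """Log-density of N(0, C) evaluated at z."""
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (len(z) * np.log(2 * np.pi) + logdet + z @ np.linalg.solve(C, z))

def llr(x1, x2):
    """PLDA verification score: log p(x1, x2 | same) - log p(x1, x2 | different)."""
    T, Z = B + W, np.zeros((d, d))
    z = np.concatenate([x1 - mu, x2 - mu])
    return (log_gauss(z, np.block([[T, B], [B, T]]))
            - log_gauss(z, np.block([[T, Z], [Z, T]])))

# 2. Enroll the third class from a few of its samples (here: their mean).
enroll = data[2][:5].mean(axis=0)

# 3. Score positive (remaining class-2) and negative (class-0/1) probes.
pos = np.array([llr(enroll, x) for x in data[2][5:]])
neg = np.array([llr(enroll, x) for x in np.vstack([data[0], data[1]])])
print(pos.mean() > neg.mean())  # positive scores should dominate
```

Under this model, a pair from the same class shares the latent class centre, which is what the off-diagonal B blocks of the "same" covariance encode; the actual script delegates all of this to bob's PLDA machinery.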

Reproducing experiments

It is currently possible to reproduce the experiments on Multi-PIE using the PLDA algorithm. In particular, Figure 2 can easily be reproduced by following the steps described below.

The experiments using the three baseline systems reported in Table 3 may be integrated into this package later on, as well as the experiments on the LFW database.

Note for Grid Users

At Idiap, we use the powerful Sun Grid Engine (SGE) to parallelize our job submissions as much as we can. At the Biometrics group, we have developed a small toolbox, gridtk <http://pypi.python.org/pypi/gridtk>, that can submit and manage jobs at the Idiap computing grid through SGE.

The following sections explain how to reproduce the paper results in single (non-gridified) jobs. If you are at Idiap, you can run the following commands on the SGE infrastructure by passing the --grid flag to any command. This may also work at other locations with an SGE infrastructure, but will likely require some configuration changes in the gridtk utility.

Multi-PIE dataset

Getting the data

You first need to buy and download the Multi-PIE database:

http://multipie.org/

and to download the annotations available here:

http://www.idiap.ch/resource/biometric/

Feature extraction

The following command will extract LBP histogram features. You should set the paths to the data according to your own environment:

$ ./bin/lbph_features.py --image-dir /PATH/TO/MULTIPIE/IMAGES --annotation-dir /PATH/TO/MULTIPIE/ANNOTATIONS --output-dir /PATH/TO/OUTPUT_DIR/
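For intuition, here is a minimal numpy sketch of a plain 8-neighbour LBP code image and its 256-bin histogram. This is an illustration only: bob's extractor and the script above differ in the LBP variant, block decomposition and normalization they use, and the random test image is ours.

```python
import numpy as np

def lbp_histogram(image):
    """Compute basic 8-neighbour LBP codes and a 256-bin histogram."""
    c = image[1:-1, 1:-1]  # interior pixels: each has all 8 neighbours
    # offsets of the 8 neighbours, clockwise from the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    h, w = image.shape
    for bit, (dy, dx) in enumerate(offs):
        nb = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbour view
        codes |= ((nb >= c).astype(np.uint8) << bit)  # set bit if neighbour >= centre
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist

img = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(np.int32)
h = lbp_histogram(img)
print(h.sum())  # one code per interior pixel: 62 * 62 = 3844
```

Each pixel is summarized by an 8-bit pattern of threshold comparisons against its neighbours, and the histogram of those patterns is what serves as the texture feature.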

Dimensionality reduction

Once the features have been extracted, they are projected into a lower-dimensional subspace using Principal Component Analysis (PCA):

$ ./bin/pca.py --output-dir /PATH/TO/OUTPUT_DIR/
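Conceptually, this step fits a linear projection on the training features and applies it to all of them. A minimal numpy sketch of that operation follows; the actual pca.py script relies on bob's trainers and stores the trained machine and projected features to the output directory, and the function name and sizes below are ours.

```python
import numpy as np

def pca_project(features, n_components):
    """Fit PCA on `features` (rows = samples) and project onto the top components."""
    mean = features.mean(axis=0)
    centred = features - mean
    # SVD of the centred data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    basis = Vt[:n_components].T  # (dim, n_components) projection matrix
    return centred @ basis, mean, basis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))      # 100 feature vectors of dimension 20
Y, mean, basis = pca_project(X, 5)  # reduce to 5 dimensions
print(Y.shape)  # (100, 5)
```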

PLDA modeling and scoring

PLDA is then applied on the dimensionality reduced features.

This involves three different steps:
  1. Training

  2. Model enrollment

  3. Scoring

The following command will perform all these steps:

$ ./bin/plda.py --output-dir /PATH/TO/OUTPUT_DIR/

Then, the HTER on the evaluation set can be obtained using the evaluation script from the bob library as follows:

$ ./bin/bob_compute_perf.py -d /PATH/TO/OUTPUT_DIR/U/plda/scores/scores-dev -t /PATH/TO/OUTPUT_DIR/U/plda/scores/scores-eval -x
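bob_compute_perf.py takes care of this computation; as a rough illustration of what the HTER is, assuming higher scores mean "same identity": a decision threshold is chosen on the development set (e.g. at the equal error rate), and the half total error rate on the evaluation set is the average of the false acceptance and false rejection rates at that threshold. A numpy sketch with synthetic scores (the data and helper names are ours, not the script's):

```python
import numpy as np

def far_frr(negatives, positives, threshold):
    """False acceptance and false rejection rates at a given threshold."""
    far = np.mean(negatives >= threshold)  # impostors wrongly accepted
    frr = np.mean(positives < threshold)   # genuine scores wrongly rejected
    return far, frr

def eer_threshold(negatives, positives):
    """Threshold on the dev set where FAR and FRR are (approximately) equal."""
    candidates = np.sort(np.concatenate([negatives, positives]))
    gaps = [abs(np.subtract(*far_frr(negatives, positives, t))) for t in candidates]
    return candidates[int(np.argmin(gaps))]

rng = np.random.default_rng(0)
# synthetic impostor and genuine score distributions for dev and eval sets
dev_neg, dev_pos = rng.normal(0, 1, 1000), rng.normal(3, 1, 1000)
eval_neg, eval_pos = rng.normal(0, 1, 1000), rng.normal(3, 1, 1000)

t = eer_threshold(dev_neg, dev_pos)          # threshold fixed a priori on dev
far, frr = far_frr(eval_neg, eval_pos, t)    # applied unchanged on eval
hter = (far + frr) / 2.0
print(round(hter, 3))
```

Fixing the threshold on the development set and only then applying it to the evaluation set is what makes the HTER an unbiased operating-point measure.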

If you want to reproduce the Figure 2 of the PLDA article mentioned above, you can run the following commands:

$ ./bin/plda_subworld.py --output-dir /PATH/TO/OUTPUT_DIR/
$ ./bin/plot_figure2.py --output-dir /PATH/TO/OUTPUT_DIR/

The previous commands will run the PLDA toolchain several times for a varying number of training samples. Please note that this will take a long time to complete (one to two days on a recent workstation, such as one with an Intel Core i7 CPU).

