
The 2nd competition on counter measures to 2D facial spoofing attacks

Following the first competition on counter measures to 2D spoofing attacks, which consisted of printed photographs, Tabula Rasa organizes a second competition to compare the performance of spoofing counter measures, this time on a more diverse set of spoofing attacks. The competition will be part of the 6th International Conference on Biometrics (ICB-2013):

The 2nd Competition on counter measures to 2D facial spoofing attacks

6th International Conference on Biometrics (ICB-2013)

June 4-7, 2013 Madrid, SPAIN

http://atvs.ii.uam.es/icb2013/

Motivation


Despite the wide use of face recognition systems in many environments where security is vital, they remain vulnerable to direct sensory attacks, i.e., spoofing attacks. In a spoofing attack, an invalid user may gain access to the system by presenting counterfeit biometric evidence of a valid user.
In the domain of face recognition, increasingly advanced spoofing attacks are emerging. Unfortunately, the number of anti-spoofing systems is still limited. Furthermore, the number of baseline anti-spoofing systems whose source code is publicly available for comparison and reproducible research is even more limited.
The goal of this competition is to compare and evaluate anti-spoofing algorithms against the following types of face spoofing attacks:
  • printed photographs
  • photographs displayed on a screen
  • video replay attacks
Additionally, the competition encourages publishing the source code of the developed spoofing counter measures as free software.

Database

The competition will be carried out on the Replay-Attack face spoofing database. The database consists of real accesses and all three aforementioned types of attacks for 50 clients. The following table shows the number of samples divided into protocols.

                  Real accesses   Printed attacks   Photo attacks   Video attacks
Training set           60               60               120             120
Development set        60               60               120             120
Test set               80               80               160             160

The samples in the Replay-Attack database are given in the form of video sequences in ".mov" file format. The video samples and the list of the files divided into protocols can be downloaded directly from the official Replay-Attack website. The database also provides face location files for the frames of the videos.
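Participants will typically need to read the face location files when cropping faces from the video frames. As an illustration only (the exact file layout is documented on the Replay-Attack website; the one-line-per-frame "frame x y width height" format below, with -1 entries for frames where detection failed, is an assumption), a minimal parser might look like this:

```python
# Sketch: parse a face-location file for one video sequence.
# ASSUMED format: one line per frame, "frame_idx x y width height",
# with negative values when no face was detected in that frame.
def parse_face_locations(text):
    locations = {}
    for line in text.strip().splitlines():
        parts = line.split()
        frame = int(parts[0])
        x, y, w, h = map(int, parts[1:5])
        if w > 0 and h > 0:  # skip frames where detection failed
            locations[frame] = (x, y, w, h)
    return locations

sample = """0 120 80 64 64
1 122 81 64 64
2 -1 -1 -1 -1"""
print(parse_face_locations(sample))  # frame 2 is dropped (no detection)
```

Note that the Bob satellite package mentioned under "Useful links" already provides programmatic access to these annotations, so hand-rolled parsing is only needed outside that toolchain.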

Submission of results

The participating teams can already download the training and development data and start developing their algorithms (End User Licence Agreement needs to be signed). To ensure that the sample labels are not used to infer the class of the video, a set of unlabeled test videos (and corresponding file lists and face detections) will be made available at evaluation time (March 1st, 2013). The anonymized test videos will be trimmed copies of the originals, with a fixed number of frames (100) and a random starting frame.
The participants need to submit two score files: one for the development and one for the anonymized test set by the specified deadline (March 15th, 2013). The files need to have two columns:

filename score

The first column corresponds to the file name of the video sequence, while the second one contains the score for the full video sequence. The scores must be normalized between 0 and 1 (0.0 <= score <= 1.0). Please note that we expect the real accesses to have larger scores than the attacks.

Here is an example of how the score file for the development set should look:

devel/real/client022_session01_webcam_authenticate_controlled_2 0.5394
devel/real/client029_session01_webcam_authenticate_adverse_1 0.7444
devel/attack/hand/attack_print_client113_session01_highdef_photo_adverse 0.017

The test set should accordingly look like this (example):

test_sequence_002 0.8921
test_sequence_012 0.0001
test_sequence_033 0.9932

We expect one line for each of the video sequences.
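The formatting rules above (one "filename score" line per sequence, scores in [0, 1], higher for real accesses) can be sketched as a small helper; the clamping behavior and four-decimal formatting here are our own choices, not a requirement of the competition:

```python
# Sketch: assemble a submission score file in the "filename score" format.
# Higher scores should indicate real accesses; values are clamped to [0, 1].
def format_score_lines(results):
    lines = []
    for stem, score in results:
        s = min(max(float(score), 0.0), 1.0)  # enforce 0.0 <= score <= 1.0
        lines.append(f"{stem} {s:.4f}")
    return "\n".join(lines)

results = [
    ("test_sequence_002", 0.8921),
    ("test_sequence_012", 0.0001),
    ("test_sequence_033", 1.2),  # out of range, will be clamped to 1.0
]
print(format_score_lines(results))
```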

Evaluation and ranking

The ranking will be based on the HTER (Half-Total Error Rate) measured on the anonymized test set data using an a priori threshold calculated on the development data, using the two score files that you are going to provide. The threshold to be used is the value that equalizes the false acceptance and false rejection rates, a.k.a. the equal-error rate (EER). The smaller the HTER on the test set, the higher your rank will be.

The logic we will use is the following:

1) Compute the threshold for the EER on the development set

2) Apply the threshold for discriminating the test data, calculate the HTER
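The two steps above can be sketched in a few lines; the score lists here are hypothetical illustrations, and the threshold search simply scans the observed scores for the point where FAR and FRR are closest (the organizers' exact implementation, e.g. via Bob, may differ in tie-breaking details):

```python
# Sketch of the announced evaluation protocol: EER threshold on the
# development set, then HTER on the test set at that fixed threshold.

def far_frr(real, attack, thr):
    # A real access is accepted when score >= thr; an attack is
    # falsely accepted when its score also reaches the threshold.
    far = sum(a >= thr for a in attack) / len(attack)  # false acceptance rate
    frr = sum(r < thr for r in real) / len(real)       # false rejection rate
    return far, frr

def eer_threshold(real, attack):
    # Scan candidate thresholds; keep the one where |FAR - FRR| is smallest.
    candidates = sorted(set(real + attack))
    return min(candidates, key=lambda t: abs(far_frr(real, attack, t)[0]
                                             - far_frr(real, attack, t)[1]))

def hter(real, attack, thr):
    far, frr = far_frr(real, attack, thr)
    return (far + frr) / 2.0

# Hypothetical development and test scores (real accesses vs. attacks):
dev_real, dev_attack = [0.8, 0.9, 0.7, 0.6], [0.1, 0.2, 0.3, 0.65]
thr = eer_threshold(dev_real, dev_attack)          # step 1
test_real, test_attack = [0.85, 0.75, 0.55], [0.05, 0.4, 0.6]
print(round(hter(test_real, test_attack, thr), 4))  # step 2
```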

The participants will be required to submit a short description of their method as well. Additional credit will be given to participants who provide the source code of their method as free software.
The results of all the submitted algorithms will be published in a conference paper at ICB-2013.

Useful links

For a Python API giving easy access to the Replay-Attack database, participating teams can use the free machine learning and signal processing toolbox Bob and its satellite package for the Replay-Attack database.
For baseline methods for comparison purposes, the participants can refer to the following publications:
  • "Counter-Measures to Photo Attacks in Face Recognition: a public database and a baseline" - A. Anjos, S. Marcel (pdf, source code)
  • "On the Effectiveness of Local Binary Patterns in Face Anti-spoofing" - I. Chingovska, A. Anjos, S. Marcel (pdf, source code)
  • "LBP-TOP based countermeasure against facial spoofing attacks" - T. Pereira, A. Anjos, J. Martino, S. Marcel (pdf, source code)
(Note: the free source code accompanying the publications is programmed in Python and depends on Bob.)

How to engage

Please register your team here by January 14th, 2013.

Important dates

Registration due                               January 14th, 2013
Availability of test set                       March 1st, 2013
Submission of results and method description   March 15th, 2013
Publication of the results at ICB-2013         April 8th, 2013

Organizers

Ivana Chingovska ( ivana DOT chingovska AT idiap DOT ch)

Dr. André Anjos ( andre DOT anjos AT idiap DOT ch)

Dr. Sébastien Marcel ( sebastien DOT marcel AT idiap DOT ch)
