
BTAS 2016 Speaker Anti-spoofing Competition

The Swiss Centre for Biometrics Research and Testing and the Biometrics group at the Idiap Research Institute are organizing the Speaker Anti-spoofing Competition. The competition will be part of the IEEE 8th International Conference on Biometrics: Theory, Applications, and Systems (BTAS 2016):

September 6-9, 2016 Niagara Falls, Buffalo, New York (USA)
http://ieee-biometrics.org/btas2016/

Objective

Despite the growing usage and increasing reliability of speaker verification systems, they have been shown to be vulnerable to spoofing attacks. In a spoofing attack, an invalid user attempts to gain access to the system by presenting counterfeit (fake) speech samples as evidence of a valid user. Counterfeit speech can be synthesized from text, converted from the speech of another person, or simply replayed using a playback device such as a mobile phone.

The participants in this anti-spoofing competition will propose countermeasures to protect an automatic speaker verification (ASV) system against spoofing attacks. Essentially, these countermeasures should effectively separate real (genuine) speech recordings from spoofed speech (attacks). The AVspoof database will be used in the competition. The participants will be provided with two non-overlapping sets (each containing real and spoofed data subsets) for training and calibration of their countermeasure techniques. The submitted techniques will be evaluated on a separate, independent testing set, which, besides the attacks present in the AVspoof database, will also include additional unknown attacks.

Database

The competition will be carried out using the AVspoof database. The database contains real (genuine) speech samples from 44 participants (31 males and 13 females) recorded over a period of two months in four sessions, each scheduled several days apart, in different setups and environmental conditions (e.g., different background noise). Ten types of attacks were also generated, using speech synthesis, voice conversion, and replay of samples with two different mobile phones and a laptop.

All samples of the database are split into three non-overlapping sets: training, development, and evaluation. Each set consists of two parts: (i) real (genuine) data and (ii) spoofed data (attacks). The samples are given in WAV format with a 16 kHz sampling rate.
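
For example, a minimal sketch of reading one sample with SciPy (the file path below is only illustrative):

import scipy.io.wavfile as wavfile

# Read one (hypothetical) sample from the development set.
rate, signal = wavfile.read("genuine/male/m005/sess2/phone2/pass/sentence01.wav")
assert rate == 16000  # all samples are expected at a 16 kHz sampling rate
print(signal.shape, signal.dtype)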

Submission of results

The participating teams can already download the database (the folder is titled 'BTAS2016') and use the provided training and development sets to start developing their algorithms (an End User Licence Agreement needs to be signed). To ensure that the class of a speech sample cannot be inferred from its label, a set of unlabeled test audio samples will be made available at evaluation time (April 1st, 2016).

The participants need to submit two score files: one for the development set (deadline: April 1st, 2016) and one for the anonymized test set (deadline: April 15th, 2016). Only participants who submit the scores for the development set will be given the test set and will be evaluated in the competition.

The score files need to have two columns:

filename score

The first column, 'filename', should contain the relative path and name of the file. The second column, 'score', represents the score of the decision. Please note that we expect real accesses to have larger scores than attacks.

Here is an example of what the score file for the development set looks like:

genuine/male/m005/sess2/phone2/pass/sentence01 3.0694
genuine/male/m005/sess2/phone2/pass/sentence02 10.6895
genuine/male/m005/sess2/phone2/pass/sentence03 1.3846
genuine/male/m005/sess2/phone2/pass/sentence04 19.1034
attacks/replay_phone2/male/m005/sess4/phone2/pass/sentence01 -1.7788
attacks/replay_phone2/male/m005/sess4/phone2/pass/sentence02 -9.3064
attacks/replay_phone2/male/m005/sess4/phone2/pass/sentence03 -13.3729

Since file names in the test set will be randomized (so that no information can be inferred from them), the score file for the test set should look similar to the following:

test_file_002 2.8921
test_file_012 -1.0001
test_file_033 10.9932
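
As an illustration, here is a minimal Python sketch of writing such a score file. The function score_sample is a hypothetical stand-in for a countermeasure; it is assumed to return larger values for genuine speech than for attacks:

# Minimal sketch: write a two-column 'filename score' file.
def write_score_file(out_path, sample_paths, score_sample):
    with open(out_path, "w") as f:
        for path in sample_paths:
            f.write("%s %.4f\n" % (path, score_sample(path)))

# Usage with a placeholder scorer, for illustration only.
write_score_file("dev_scores.txt",
                 ["genuine/male/m005/sess2/phone2/pass/sentence01"],
                 lambda path: 3.0694)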

Evaluation and ranking

The ranking will be based on the HTER (Half-Total Error Rate) measured on the score file for the anonymized test set (to be submitted by April 15th, 2016). The HTER will be obtained using a threshold pre-computed from the score file for the development set (to be submitted by April 1st, 2016). The threshold to be used is the value that equalizes the false-acceptance and false-rejection rates, i.e., the threshold at the equal error rate (EER). The smaller the HTER value computed on the test set, the higher the ranking of the countermeasure.

The steps of the evaluation process will be as follows (a sketch of these steps in Python is given after the list):
1) Compute the threshold at the EER on the development set.
2) Apply the threshold to discriminate the test data and compute the HTER.
3) List participants in order of increasing HTER (the smaller the HTER value, the higher the ranking).
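
A minimal NumPy sketch of steps 1 and 2, assuming genuine and attack scores have been loaded into 1-D arrays (the score values below are placeholders):

import numpy as np

def eer_threshold(genuine, attacks):
    # Scan candidate thresholds and keep the one where the
    # false-acceptance rate (FAR) and false-rejection rate (FRR)
    # are closest to each other.
    best_t, best_gap = None, np.inf
    for t in np.sort(np.concatenate([genuine, attacks])):
        far = np.mean(attacks >= t)   # attacks accepted as genuine
        frr = np.mean(genuine < t)    # genuine samples rejected
        if abs(far - frr) < best_gap:
            best_t, best_gap = t, abs(far - frr)
    return best_t

def hter(genuine, attacks, threshold):
    # Half-Total Error Rate: average of FAR and FRR at a fixed threshold.
    far = np.mean(attacks >= threshold)
    frr = np.mean(genuine < threshold)
    return 0.5 * (far + frr)

# Placeholder development scores, for illustration only.
dev_genuine = np.array([3.07, 10.69, 1.38, 19.10])
dev_attacks = np.array([-1.78, -9.31, -13.37])
t = eer_threshold(dev_genuine, dev_attacks)         # step 1
print(hter(dev_genuine, dev_attacks, threshold=t))  # step 2 (applied to test scores in practice)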

The participants will also be required to submit a short description of their method. Additional credit will be given to participants who provide the source code of their method as free software.
The results of all the submitted algorithms, as well as their short descriptions, will be submitted for publication at BTAS 2016.

Useful links

Participating teams can use the free machine learning and signal processing toolbox Bob, which is based on Python.
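
For instance, a hedged sketch of the EER/HTER computation using Bob's bob.measure module (function names as in Bob 2.x; please check the documentation of your installed version):

import numpy
import bob.measure

# bob.measure expects 1-D float64 arrays: negatives = attack scores,
# positives = genuine scores. The values below are placeholders.
dev_neg = numpy.array([-1.78, -9.31, -13.37])
dev_pos = numpy.array([3.07, 10.69, 1.38, 19.10])

threshold = bob.measure.eer_threshold(dev_neg, dev_pos)
far, frr = bob.measure.farfrr(dev_neg, dev_pos, threshold)
print("HTER = %.4f" % ((far + frr) / 2.0))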

How to participate

Please register your team here by January 29th, 2016 (final extended deadline; no further extensions will be available). On January 29th, you will receive an email with detailed instructions on how to obtain the data and the code of a baseline system.

Important dates

Please note that the results of the competition will be submitted for publication in the BTAS 2016 proceedings. The contributions submitted by the participants will be described (hence, a description of each method is required), clearly acknowledged, and cited.

Registration due (FINAL DEADLINE EXTENSION): January 29, 2016
Availability of the Development and Training sets: January 29, 2016
Deadline for the submission of the Development set results: April 1, 2016
* Availability of the Test set: April 1, 2016
Submission of the Test set results and method description: April 15, 2016

* Only participants who submit scores for the development set will be provided the test set and evaluated.

Organizers

Dr. Pavel Korshunov ( pavel DOT korshunov AT idiap DOT ch)
Dr. Sébastien Marcel ( sebastien DOT marcel AT idiap DOT ch)
