Facebook announces the results of its Deepfake Detection Challenge, winner hits 65% average precision
Back in September last year, Facebook announced the Deepfake Detection Challenge (DFDC) to combat the menace of disinformation stemming from deepfakes. The competition officially kicked off on December 11 with a collection of 115,000 videos curated especially for the challenge. With prizes totaling $1 million up for grabs, 2,114 participants submitted around 35,000 models trained on the dataset.
Now, six months later, Facebook has finally released the names of the winners. First place went to Selim Seferbekov, a machine learning engineer at Mapbox, whose deepfake detector achieved 65.18% average precision on the black-box test set of 10,000 previously unseen videos. That is a sharp drop from the 82.56% average precision the same model achieved on the public dataset, and a similar lack of generalization was observed in the other detectors as well.
This outcome reinforces how important it is for detectors to generalize to unforeseen examples when addressing the challenges of deepfake detection. The competition was hosted on Kaggle, and winners were selected by their log-loss score against the private test set.
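To make the scoring concrete, here is a minimal sketch of binary log loss (cross-entropy), the metric the article says was used to rank entries. The function name and the toy labels/predictions below are illustrative, not taken from the competition; it simply shows why the metric rewards confident correct predictions and punishes confident wrong ones.

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary cross-entropy (log loss); lower is better.
    Predictions are clipped away from 0 and 1 to avoid log(0)."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# Illustrative toy data: 1 = deepfake, 0 = real;
# predictions are the model's estimated probability of "deepfake".
labels    = [1, 0, 1, 0]
confident = [0.9, 0.1, 0.8, 0.2]  # mostly right and confident
hedged    = [0.6, 0.4, 0.6, 0.4]  # right but timid

print(round(log_loss(labels, confident), 4))  # ~0.1643
print(round(log_loss(labels, hedged), 4))     # ~0.5108
```

A model that always outputs 0.5 scores ln 2 ≈ 0.693, so anything below that reflects genuine signal; confident wrong answers, by contrast, blow the score up quickly.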
The precision exhibited by Seferbekov’s model is now “a new shared baseline as the AI community continues to work on this difficult and important task,” Facebook wrote. But the firm also noted that none of the participants reached 70% average precision on the test dataset, signaling that substantial progress is still needed in deepfake detection.
Details on the rest of the participants can be found on the Kaggle leaderboard. Facebook thanked the competitors as well as all the contributors who made the challenge possible, and emphasized that collaborative effort is the way forward for building a potent framework to detect deepfakes on the internet.