Deepfake detection¶
The deepfake detection module was trained solely on genuine images using a One-Class Classification (OCC) method. The idea is to learn what genuine images look like and tag anything "novel" as "deepfake". In short, we're detecting novelty. This is a powerful idea and makes us immune to new deepfake methods. There are many deepfake generative implementations (reve, MidJourney, Ideogram, Dall-E, Flux, Stable Diffusion, gpt4o, Recraft…) and that number will keep growing; it would be a nightmare to try to keep up with each one.
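To illustrate the one-class idea (this is a generic sketch, not the SDK's actual model or features), you can train a novelty detector such as scikit-learn's `OneClassSVM` on "genuine" feature vectors only; anything the model considers novel is flagged as a potential deepfake. The Gaussian blobs below are stand-ins for real image features:

```python
# One-class novelty-detection sketch (illustration only, not the SDK's model):
# train on "genuine" samples only, then flag anything novel as a deepfake.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
genuine = rng.normal(loc=0.0, scale=1.0, size=(500, 8))  # stand-in for genuine-image features
novel = rng.normal(loc=5.0, scale=1.0, size=(10, 8))     # stand-in for deepfake features

# nu bounds the fraction of training samples treated as outliers
occ = OneClassSVM(kernel="rbf", nu=0.05).fit(genuine)

# predict() returns +1 for inliers (genuine-like), -1 for novelties
print("novel flagged:", (occ.predict(novel) == -1).mean())
print("genuine kept:", (occ.predict(genuine) == 1).mean())
```

Because the model only ever sees genuine data, it needs no retraining when a new generator appears: a new method's artifacts are simply another kind of novelty.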
Some people test our deepfake detection implementation at https://www.doubango.org/webapps/face-liveness by doing some drag & drop; that is useless. In real life you cannot use drag & drop, and the only way to use deepfakes is via stream injection (virtual camera). Use the camera and try to inject deepfakes if you really want to check the accuracy.
The next video shows the SDK catching deepfake injection using OBS Virtual Camera and Face2Face, as explained here:
Known issues¶
Drag and Drop¶
The webapp demo at https://www.doubango.org/webapps/face-liveness allows drag & drop to make your life easier, but that doesn't reflect the real accuracy of the SDK. Using drag & drop you can feed non-standard image resolutions or edited images (e.g. contrast changes, crops…) to try to break the implementation, but that doesn't reflect real life. In real life we only accept 720p images from the camera, and you cannot edit the images unless you're using a virtual camera. Edited images passed through a virtual camera will be caught by the stream injection module, and non-standard sizes will be rejected as spoof.
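The "reject non-standard sizes" rule can be sketched as a simple gate (hypothetical helper for illustration; `accept_frame`, the expected resolution constant, and the gating logic are assumptions, not the SDK's actual API):

```python
# Hypothetical resolution gate (illustration only): accept standard 720p
# camera frames, reject anything else as a potential spoof.
EXPECTED_RESOLUTION = (1280, 720)  # assumed 720p landscape frame

def accept_frame(width: int, height: int) -> bool:
    """Return True only for frames matching the expected camera resolution."""
    return (width, height) == EXPECTED_RESOLUTION

print(accept_frame(1280, 720))  # True: a standard 720p camera frame
print(accept_frame(1024, 768))  # False: non-standard size, rejected as spoof
```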
People are reluctant to give any company authorization to access their camera; that's why we still offer drag & drop as a way to test our product.