Official PyTorch implementation of the paper "Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning", published in IEEE Transactions on Circuits and Systems for Video Technology.
Face anti-spoofing (FAS) plays a vital role in protecting face recognition systems from presentation attacks. Existing face anti-spoofing datasets lack diversity due to insufficient identities and insignificant variance, which limits the generalization ability of FAS models. In this paper, we propose the Dual Spoof Disentanglement Generation (DSDG) framework to tackle this challenge through "anti-spoofing via generation". Relying on the interpretable factorized latent disentanglement in a Variational Autoencoder (VAE), DSDG learns a joint distribution of the identity representation and the spoofing pattern representation in the latent space. Large-scale paired live and spoofing images can then be generated from random noise to boost the diversity of the training set. However, some generated face images are partially distorted due to the inherent defect of the VAE. It is hard to predict precise depth values for such noisy samples, which may obstruct the widely used depth-supervised optimization. To tackle this issue, we further introduce a lightweight Depth Uncertainty Module (DUM), which alleviates the adverse effects of noisy samples through depth uncertainty learning. DUM is developed without extra dependencies and can therefore be flexibly integrated with any depth-supervised network for face anti-spoofing. We evaluate the effectiveness of the proposed method on five popular benchmarks and achieve state-of-the-art results under both intra- and inter-dataset test settings.
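One common way to make the depth-uncertainty idea concrete is a heteroscedastic regression loss: the network predicts a per-pixel mean and log-variance, so the squared depth error of noisy pixels is down-weighted by their predicted variance. The sketch below illustrates that generic formulation only; it is not the repo's exact DUM loss, and all names and values are illustrative.

```python
import numpy as np

def depth_uncertainty_loss(mu, log_var, target):
    """Heteroscedastic regression loss: the squared depth error is
    down-weighted by the predicted variance, and the log-variance
    penalty keeps the network from inflating variance everywhere."""
    var = np.exp(log_var)
    return float(np.mean((target - mu) ** 2 / (2.0 * var) + 0.5 * log_var))

# Toy depth maps: identical residuals, different predicted confidence.
target = np.zeros((2, 4, 4))           # ground-truth depth
mu = np.full((2, 4, 4), 0.5)           # predicted mean depth
low_var = np.full((2, 4, 4), -2.0)     # confident prediction (log-variance)
high_var = np.full((2, 4, 4), 1.0)     # uncertain prediction (log-variance)

# With high predicted variance, the same residual costs far less in the
# data term, which is how noisy (distorted) samples are down-weighted.
data_term_confident = float(np.mean((target - mu) ** 2 / (2.0 * np.exp(low_var))))
data_term_uncertain = float(np.mean((target - mu) ** 2 / (2.0 * np.exp(high_var))))
```

In practice such a loss is applied per pixel of the predicted depth map, so a partially distorted generated face can raise its variance only in the distorted region.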
Our experiments are conducted under the following environments:
Before training, we need to extract frame images from some of the video datasets. We then use MTCNN for face detection and PRNet for face depth map prediction. An example on the OULU-NPU dataset:

1. Extract the frame images to `./oulu_images/` and use MTCNN to detect and crop the faces to `./oulu_images_crop/`.
2. With the checkpoint in `../ip_checkpoint`, generate the paired fake images to `../fake_images/`.
3. Utilize PRNet to get the depth maps of the images in `../fake_images/`, move them to `./oulu_images_crop/`, and update the protocol.

We provide a CDCN model with DUM trained on OULU-NPU Protocol-1; the following shows how to test it.
The trained model checkpoint is placed in `./DUM/checkpoint/`.
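For orientation, the decision step of a depth-supervised test typically thresholds the mean of the predicted depth map (live faces are supervised toward a meaningful depth map, spoof faces toward all zeros). This is a generic sketch under that assumption; the threshold and names are illustrative, not the repo's code.

```python
import numpy as np

def predict_live(depth_map, threshold=0.5):
    """Depth-supervised FAS decision: the mean of the predicted depth
    map serves as the liveness score, compared against a threshold."""
    score = float(np.mean(depth_map))
    return score, score >= threshold

# Toy predicted depth maps standing in for network outputs.
live_depth = np.random.default_rng(0).uniform(0.6, 1.0, (32, 32))
spoof_depth = np.zeros((32, 32))   # spoof target is an all-zero depth map

score_live, is_live = predict_live(live_depth)
score_spoof, is_spoof_live = predict_live(spoof_depth)
```

In a real evaluation the threshold is chosen on a development set rather than fixed at 0.5.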
Please consider citing our paper in your publications if the project helps your research.
This repo is based on the following projects; many thanks to their authors.
SST (Semi-Siamese Training) is a method for training face recognition models on shallow data. The trained model is a pair of semi-Siamese networks consisting of a main model and an auxiliary model. In each iteration, the network takes two face images of the same identity as input: an enrollment photo and a spot photo. The auxiliary model extracts face features from the enrollment photo to build a dynamic feature queue, which is updated synchronously as training proceeds. The loss is computed from the features the main model extracts from the spot photo together with the dynamic feature queue. Given the loss value, the main model is updated by stochastic gradient descent, while the auxiliary model is updated as a moving average of its current state and the main model. After training, the main model is used for face recognition testing.
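The moving-average update of the auxiliary model described above can be sketched as follows; the momentum value, the toy weights, and the name `ema_update` are illustrative assumptions, not code from the repo.

```python
import numpy as np

def ema_update(aux_weights, main_weights, momentum=0.9):
    """Moving-average update of the auxiliary model from the main
    model: each auxiliary weight blends its current state with the
    corresponding main-model weight after the main model's SGD step."""
    return [momentum * a + (1.0 - momentum) * m
            for a, m in zip(aux_weights, main_weights)]

aux = [np.zeros(3)]    # auxiliary model weights (toy single layer)
main = [np.ones(3)]    # main model weights after an SGD step
aux = ema_update(aux, main, momentum=0.9)   # each weight -> 0.1
```

A high momentum keeps the auxiliary model's features stable, so the dynamic feature queue stays consistent across iterations.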