
iadg's Introduction

Instance-Aware Domain Generalization for Face Anti-Spoofing

This is the PyTorch implementation of our paper:

[Paper] Instance-Aware Domain Generalization for Face Anti-Spoofing

Qianyu Zhou, Ke-Yue Zhang, Taiping Yao, Xuequan Lu, Ran Yi, Shouhong Ding, Lizhuang Ma.

The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2023

[arXiv] [Paper]

Updates

  • (October 2023) All checkpoints of pretrained models are released.
  • (October 2023) All code of IADG is released.

Installation

Requirements

  • Linux, CUDA>=11.7, GCC>=5.4

  • Python>=3.10

    We recommend using Anaconda to create a conda environment:

    conda create -n IADG python=3.10 pip

    Then, activate the environment:

    conda activate IADG
  • PyTorch>=1.13.0, torchvision>=0.14.0 (following the instructions here)

    For example, if your CUDA version is 11.7, you could install PyTorch and torchvision as follows:

    conda install pytorch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 pytorch-cuda=11.7 -c pytorch -c nvidia
  • Other requirements

    pip install -r requirements.txt

Usage

Checkpoints

Below, we provide all checkpoints, training logs, and inference logs of IADG for the different datasets.

Download Link of Google Drive

Download Link of Baidu Netdisk (password: 26xc)

Training

Training on single node

You can use the following training command.

CUDA_VISIBLE_DEVICES=0 python3 -u -m torch.distributed.launch --nproc_per_node=1 --master_port 17850 ./train.py -c ./configs/ICM2O.yaml
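
Note that torch.distributed.launch is deprecated in recent PyTorch releases in favor of torchrun. An equivalent invocation would be the following, though train.py may then need to read the rank from the LOCAL_RANK environment variable rather than a --local_rank argument:

CUDA_VISIBLE_DEVICES=0 torchrun --nproc_per_node=1 --master_port 17850 ./train.py -c ./configs/ICM2O.yaml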

Evaluation

Then run the following command to evaluate it on the test set:

CUDA_VISIBLE_DEVICES=0 python3 -u  ./test.py -c ./configs/ICM2O_test.yaml --ckpt checkpoint_file

Acknowledgements

This project is based on the following open-source projects. We thank their authors for making the source code publicly available.

Citing IADG

If you find IADG useful in your research, please consider citing:

@inproceedings{zhou2023instance,
  title={Instance-Aware Domain Generalization for Face Anti-Spoofing},
  author={Zhou, Qianyu and Zhang, Ke-Yue and Yao, Taiping and Lu, Xuequan and Yi, Ran and Ding, Shouhong and Ma, Lizhuang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={20453--20463},
  year={2023}
}

License

This project is released under the Apache License 2.0, while some specific features in this repository are covered by other licenses. Please refer to LICENSES.md and check carefully if you are using our code for commercial purposes.

iadg's People

Contributors

qianyuzqy


iadg's Issues

Regarding the issue of training parameters

Hi, I am currently unable to reproduce the paper's results. Could you release the relevant training parameters, including those for data frame acquisition? My training results on ICM2O are as follows:

[screenshot of training results on ICM2O]

I don't know what I am doing wrong. I followed the parameters given in the paper as defaults, and when fetching data frames I took one frame every 5 frames.
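
For reference, a minimal sketch of the one-frame-every-5-frames sampling described above, assuming OpenCV; the helper name and output layout are hypothetical, not part of this repository:

    import os
    import cv2

    def sample_frames(video_path, out_dir, step=5):
        # Save every `step`-th frame of the video as a JPEG (hypothetical helper).
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            idx += 1
        cap.release()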

doubts when reproducing the O&C&I to M test

Hi, I am trying to reproduce your results, especially the O&C&I to M evaluation with the pre-trained model you published, but the results I obtain are very different, so most probably I'm doing something wrong. I would appreciate any help, or any checks I can run during the test to verify that my execution is correct. I also want to clarify a few questions:

I saw that as input I have to provide two files (list_pos.txt, list_neg.txt), but I have some doubts about the format of these files. I write "path_image label" on each line; is that right? Also, I used label 0 for an attack and 1 for a real face. Is this correct, or is the expected labeling different? On the other hand, dataset M contains videos; how many frames are used to obtain the final result?
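
For illustration, under the format described in this question the list files would look like the following (paths are hypothetical, and the 0/1 convention is the asker's assumption, not confirmed by the repository):

list_pos.txt (real faces, assumed label 1):

    /data/MSU/real/client001_frame_000000.jpg 1
    /data/MSU/real/client001_frame_000005.jpg 1

list_neg.txt (attacks, assumed label 0):

    /data/MSU/attack/client001_print_frame_000000.jpg 0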

Thanks in advance; I will be grateful for any help.
Best regards.

code

Dear author,

Congratulations on your work being accepted to CVPR. I would like to inquire: when will the complete code be released?

Checkpoint download not working

Hello. I am trying to download the checkpoint from Baidu to reproduce your results, but it seems the password is not working. Could you please check, or upload the files to an alternative location such as Google Drive? Many thanks.

Questions about the IADG code

Hello, I have run into some problems while reproducing the code.
1. The error says the path LMDB_root: /path/to/lmdb_database cannot be found. Is this LMDB database something we create ourselves, and should it contain the four datasets?
2. Do files like Replay_Real_list/Replay_Fake_list need to be generated from the datasets ourselves, and what exactly is their structure?
I hope you can answer these questions, thanks!

Fail to reproduce

I strictly reimplemented the code according to the paper, but failed to reproduce the results; there is a huge gap from the results reported in the paper.

Test for single image

Hi, I am trying to create an inference script to test the model on a single image, but the testing code requires a label argument. What should I provide in its place? Is it possible for you to provide an inference script?
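
As a point of reference, a generic single-image inference sketch is below. It is not the repository's test.py API: load_iadg_model, the input size, the normalization, and the decision threshold are all hypothetical placeholders.

    import torch
    from PIL import Image
    from torchvision import transforms

    preprocess = transforms.Compose([
        transforms.Resize((256, 256)),               # input size is an assumption
        transforms.ToTensor(),
        transforms.Normalize([0.5] * 3, [0.5] * 3),  # normalization is an assumption
    ])

    model = load_iadg_model("checkpoint_file")       # hypothetical loader, not in this repo
    model.eval()

    img = preprocess(Image.open("face.jpg").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        score = torch.sigmoid(model(img)).item()     # assumes a single spoof logit
    print("live" if score > 0.5 else "spoof", score)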

Data preprocessing code

Hello, first of all, very interesting method, congrats! I want to adapt this method to a different problem using different datasets. However, I cannot find the code to preprocess the datasets.

  • How is this done? Is it by face cropping and alignment, then using PRNet for depth extraction into a similar folder structure? Is there code for this?
  • Also, how are you generating the list path files?

Thanks in advance!
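
On the second question above, one plausible way to generate the list files, assuming frames are already extracted into per-class folders (the folder layout and 0/1 label convention are assumptions, not the repository's documented format):

    import glob

    def write_list(pattern, label, out_txt):
        # Write one "path label" line per image matching the glob pattern.
        with open(out_txt, "w") as f:
            for path in sorted(glob.glob(pattern, recursive=True)):
                f.write(f"{path} {label}\n")

    write_list("/data/Replay/real/**/*.jpg", 1, "Replay_Real_list.txt")
    write_list("/data/Replay/attack/**/*.jpg", 0, "Replay_Fake_list.txt")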

When will you preprint your exciting paper?

I believe your work is exciting.
May I ask if your paper can be made public in advance?
This would be a major boost to the FAS community.
Best wishes.

The implementation is quite different from the paper?

Regarding AIAW in the paper: the paper says the feature first goes through an Instance Norm layer to calculate a covariance matrix, and then the std of the covariance matrix is used as a clue for finding the style-sensitive feature covariance.
However, in the actual implementation, it is just a BatchNorm and a torch.bmm for gram matrix computation.

I wonder why there is such a huge difference?

thank you!
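
For concreteness, a minimal sketch of the two computations being contrasted in this question, assuming a feature tensor of shape (B, C, H, W); this illustrates the question only and is not the repository's actual code:

    import torch
    import torch.nn.functional as F

    B, C, H, W = 4, 64, 32, 32
    feat = torch.randn(B, C, H, W)

    # Paper's description: instance-normalize, then a per-sample covariance matrix.
    x = F.instance_norm(feat).reshape(B, C, H * W)
    x = x - x.mean(dim=2, keepdim=True)
    cov = torch.bmm(x, x.transpose(1, 2)) / (H * W - 1)  # (B, C, C)

    # Reported implementation: batch-norm statistics, then a gram matrix via bmm.
    y = F.batch_norm(feat, None, None, training=True).reshape(B, C, H * W)
    gram = torch.bmm(y, y.transpose(1, 2)) / (H * W)     # (B, C, C)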
