This repo is a Python reimplementation of Conv-MLP.
Conv-MLP is a simple yet effective architecture that combines local patch convolution with a global MLP. It breaks the inductive-bias limitation of traditional full CNNs and can be expected to better exploit long-range dependencies.
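The idea can be sketched in a few lines of NumPy: a depthwise local convolution supplies the local inductive bias, while an MLP mixing all spatial positions gives every output a global receptive field. This is only a conceptual sketch, not the repository's actual implementation; the averaging kernel and the weight shapes are placeholders.

```python
import numpy as np

def local_conv(x, k=3):
    # depthwise local patch convolution with "same" padding;
    # an averaging kernel stands in for learned weights here
    C, H, W = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[:, i, j] = xp[:, i:i + k, j:j + k].mean(axis=(1, 2))
    return out

def global_mlp(x, W1, W2):
    # MLP over the flattened spatial positions of each channel,
    # so every output position sees the whole feature map
    C, H, W = x.shape
    tokens = x.reshape(C, H * W)
    hidden = np.maximum(tokens @ W1, 0)  # ReLU
    return (hidden @ W2).reshape(C, H, W)

def conv_mlp_block(x, W1, W2):
    # residual block: local convolution followed by a global MLP
    x = x + local_conv(x)
    x = x + global_mlp(x, W1, W2)
    return x
```

The split mirrors the description above: the convolution models local patch structure, the MLP models long-range dependencies.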
Here we give the implementation of multi-modal face anti-spoofing on the WMCA dataset. We also provide Conv-MLP RGB-mode on the OULU-NPU dataset.
We conduct experiments to evaluate the performance of Conv-MLP in terms of accuracy and computational efficiency on multi-modal and RGB benchmarks in comparison with existing state-of-the-art methods, including full CNN models and transformer-based ViT models.
Conv-MLP ranks first in mean ACER across the seven unseen protocols (7.16 ± 11.10%), which suggests that Conv-MLP extracts discriminative representations and generalizes well to unseen scenarios.
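ACER is the average of APCER and BPCER (the attack- and bona-fide-presentation classification error rates, ISO/IEC 30107-3), and the mean ± std above is taken over the per-protocol ACER values. A minimal sketch of that computation (the error rates below are made up purely for illustration):

```python
import statistics

def acer(apcer, bpcer):
    # ACER = (APCER + BPCER) / 2, per ISO/IEC 30107-3
    return (apcer + bpcer) / 2

# hypothetical per-protocol (APCER, BPCER) pairs, for illustration only
rates = [(0.05, 0.03), (0.20, 0.10), (0.02, 0.04)]
protocol_acers = [acer(a, b) for a, b in rates]
mean_acer = statistics.mean(protocol_acers)
std_acer = statistics.stdev(protocol_acers)
```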
- Python 3.7
- torch 1.6.0
- opencv-python, numpy, shutil, torchvision, tqdm
We train and test on the multi-modal datasets WMCA and CeFA, and the RGB datasets OULU-NPU and SiW. Under the usage license agreements, we are not permitted to redistribute the datasets. If you need them, please refer to the links and apply to the relevant scientific institutions for research use.
- Starting from scratch
python train.py
- Pretraining
python train.py --pretrained_model='model_name.pth'
Note that before every training run, you need to go to ./data/prepare_data.py and modify the corresponding data path. You can also adjust variables such as batch_size, patch_size, the learning rate, and the number of epochs in train.py.
In addition, if a pre-trained model is used for fine-tuning, the initial learning rate should be set smaller, e.g. 1e-4.
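Inside train.py this could amount to something like the following; the function name and default values are hypothetical, not the repository's actual code:

```python
def initial_lr(pretrained_model=None, base_lr=1e-3, finetune_lr=1e-4):
    # use a smaller initial learning rate when fine-tuning from a
    # checkpoint (both defaults are illustrative assumptions)
    return finetune_lr if pretrained_model else base_lr
```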
python train.py --mode='infer_val' --pretrained_model='model_name.pth'
Note that before every test run, you also need to go to ./data/prepare_data.py and modify the corresponding data path.
We provide pre-trained models on two datasets separately.
- WMCA
BaiduCloud: https://pan.baidu.com/s/1nwk5E8fCwNU-QbXvl0SqGQ (code:1xqu)
GoogleDrive: https://drive.google.com/drive/folders/1wqSUKGrAzk8EVpgE3a18L2OWLBds7RDU?usp=sharing
- CeFA
BaiduCloud: https://pan.baidu.com/s/12fZebAoM5w77neXd6f69QA (code:51jq)
GoogleDrive: https://drive.google.com/drive/folders/1dWAeIQm9WzbYQOIXtr7qlqk2F0COqLC8?usp=sharing
You need to place the pre-trained model under the path specified in train.py, for example ./models_wmca/Conv-MLP_48/checkpoint/pretrained_model.pth. If the path does not exist, please create it yourself.
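Creating the expected checkpoint directory can be done with a couple of standard-library lines; the path below is the example from above, so adjust it to match whatever train.py specifies in your setup:

```python
import os

# example directory from the README; change it to the path train.py expects
ckpt_dir = "./models_wmca/Conv-MLP_48/checkpoint"
os.makedirs(ckpt_dir, exist_ok=True)  # no-op if it already exists
```

Then copy the downloaded pre-trained model into that directory.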