kprokofi / light-weight-face-anti-spoofing

Towards solving the spoofing problem

License: MIT License

Python 99.08% Shell 0.92%
anti-spoofing casia celeba-spoof-dataset face-anti-spoofing lcc-fasd

light-weight-face-anti-spoofing's People

Contributors

kprokofi, sovrasov


light-weight-face-anti-spoofing's Issues

Image normalisation while predicting in demo.py

I checked the demo.py script and its related files, but I don't see any image normalization during prediction. However, in train.py I can see image normalization using albumentations.
Is there a specific reason why normalization has been removed from inference?
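
For reference, a minimal sketch of applying albumentations-style normalization at inference time; the mean/std values and input size below are placeholders, not the repo's actual training configuration, which should be copied from the config used by train.py.

import albumentations as A
import cv2
import numpy as np

# Placeholder values -- copy the real mean/std and input size from the training config.
preprocess = A.Compose([
    A.Resize(128, 128),
    A.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])

def prepare(face_bgr):
    # Convert the BGR crop to RGB, normalize, and move channels first for the network.
    rgb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2RGB)
    normed = preprocess(image=rgb)["image"]
    return np.transpose(normed, (2, 0, 1))[None].astype(np.float32)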

Question about your training details

May I know how many epochs you trained the final MN3-based model for (trained on CelebA-Spoof and tested on LCC-FASD)?

Is Max Epoch == 71 the optimal number of epochs you arrived at after your experiments?

Thank You

Model Classifies all brown or dark people as spoof

I tested your model in real life against real and spoofed photos and videos of me and some of my friends.

The number of false negatives for real faces (predicting spoof where it should predict real) was quite astonishing.

This might be because the CelebA-Spoof dataset is biased towards white people in general, or because the model is too weak to focus on the actual cues. A pattern-based model might fare better on people of other ethnicities.

openvino version

The results I get from the OpenVINO model are not good; most of the results are spoof. Can you tell me which version of OpenVINO works?

No such file or directory: './logs/MobileNet3.pth.tar'

Hello,

I'm trying to convert the model mobilenetv3-small-0.75-86c972c3.pth to ONNX and I get this issue.
Where can I get the file "MobileNet3.pth.tar"?

I tried downloading the file /driver/AntiSpoofing/spf_models/MN3antispoof.pth.tar from this link, but it doesn't seem right.

Thank you for helping me!

Low accuracy on lcc-fasd

Hi. I'm trying to evaluate your model, saved in .onnx format from your drive:
https://drive.google.com/drive/folders/1E1OovqRMEQD_uFIhTDU05efvq3KwwnPE
I use a custom face detector to cut faces from the images. It works well.
Here is my face preprocessing for anti-spoofing:

import cv2
import numpy as np

def preprocess_spoof(imgs,
                     normalize_mean=(0.53875615, 0.45938787, 0.45437699),
                     normalize_std=(0.28414759, 0.27720253, 0.27737352),
                     resize_shape=(128, 128),
                     flipbgr=True):
    normalize_mean = np.array(normalize_mean)
    normalize_std = np.array(normalize_std)
    res = []
    for img in imgs:
        # Resize the face crop to the network input size.
        img = cv2.resize(img.astype(np.float32), dsize=resize_shape,
                         interpolation=cv2.INTER_CUBIC)
        if flipbgr:
            # OpenCV reads images as BGR; convert to RGB.
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        # Scale to [0, 1] and normalize with the dataset mean/std.
        img = (img / 255 - normalize_mean) / normalize_std
        # HWC -> CHW for the network.
        img = np.transpose(img, (2, 0, 1))
        res.append(img)
    return res

I use the mean and std computed from the train split of the dataset.

Then I evaluate the model with this code:

import time
import torch

TP = 0
FP = 0
TN = 0
FN = 0

total_time = 0
P_probs = []
N_probs = []

detector_errors_P = 0
detector_errors_N = 0

with torch.no_grad():
    for idx,img_name in enumerate(listOfImageNamesUnspoof):
        img = cv2.imread(img_name)

        start = time.time()
        bbox = get_box(img)
        if len(bbox) == 0:
            detector_errors_P += 1
            continue
        face = cutout_bbox(img, bbox)
        prediction = sess.run(None, {'actual_input_1': preprocess_spoof([face])})
        speed = time.time()-start
        #print(prediction[0])
        
        
        #label = np.argmax(prediction[0][0])
        if prediction[0][0][0] > 0.4:
            label = 0
        else:
            label = 1
        value = prediction[0][0][label]

        if label == 0:
            TP += 1
            P_probs.append(value)
        else:
            FN += 1

        total_time += speed

    print("Real images analysis finished")

    P_average_prob = np.mean(P_probs)
    P_min_prob = np.min(P_probs)

    for idx,img_name in enumerate(listOfImageNamesSpoof):
        img = cv2.imread(img_name)

        start = time.time()
        bbox = get_box(img)
        if len(bbox) == 0:
            detector_errors_N += 1
            continue
        face = cutout_bbox(img, bbox)
        prediction = sess.run(None, {'actual_input_1': preprocess_spoof([face])})
        speed = time.time()-start
        #print(prediction[0])
        
        #label = np.argmax(prediction[0][0])
        if prediction[0][0][0] > 0.4:
            label = 0
        else:
            label = 1
        value = prediction[0][0][label]

        if label == 0:
            FP += 1
        else:
            TN += 1
            N_probs.append(value)

        total_time += speed


    N_average_prob = np.mean(N_probs)
    N_min_prob = np.min(N_probs)

    print(f"""
    Total real: {TP + FN}
    TP: {TP} - {TP/(TP + FN)}, FN: {FN} - {FN/(TP + FN)}
    average proba for real: {P_average_prob}, min proba for real: {P_min_prob}
    detector fails on real: {detector_errors_P}
    Total fake: {TN + FP}, TN: {TN} - {TN/(TN + FP)}, FP: {FP} - {FP/(TN + FP)}
    average proba for fake: {N_average_prob}, min proba for fake: {N_min_prob}
    detector fails on fake: {detector_errors_N}

    average_time: {total_time / (TP+TN+FP+FN)}
    """)

And I get:

    Total real: 323
    TP: 253 - 0.7832817337461301, FN: 70 - 0.21671826625386997
    average proba for real: 0.9243021607398987, min proba for real: 0.4475395083427429
    detector fails on real: 0
    Total fake: 7298, TN: 6408 - 0.8780487804878049, FP: 890 - 0.12195121951219512
    average proba for fake: 0.9745228290557861, min proba for fake: 0.6035735607147217
    detector fails on fake: 14

    average_time: 0.012271973548982263

So, it looks like even with a fairly low threshold, real images are counted as fake. Can you give me some advice, please: is it a problem with the pretrained model, or am I doing something wrong?
Thanks in advance.
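
One way to sanity-check the fixed 0.4 cut-off is to sweep the threshold over the collected scores and look at the EER operating point. A minimal sketch, assuming y_true holds 1 for real and 0 for spoof and y_score holds the model's "real" probability (the prediction[0][0][0] value above):

import numpy as np
from sklearn.metrics import roc_curve

def eer_threshold(y_true, y_score):
    # Sweep all thresholds; the EER is where the false-positive and false-negative rates cross.
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2, thresholds[idx]

If the EER itself stays high, the issue is more likely the model or a domain gap than the threshold choice.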

casia cefa train test labels

For the CASIA-CeFA dataset, what label corresponds to what? Is 0 live and 1 fake, or the other way around? Is this taken into account in the repo?

Thank you

It doesn't work

I tested your model by converting it with the Model Optimizer, but it shows every image as spoof. Not sure what the problem is. I'm using OpenVINO 2021.2 on Windows 10 with Python 3.7.

Why is "EER%" lower in MN3_large than AENET?

Hey, thanks for your work. I have some theoretical questions for you, if that's okay.

I noticed the Equal Error Rate is lower in the best of your models than in AENet, while you claimed that your model's generality is better than AENet's.

What could be the reason MN3's EER is lower than AENet's? How do you conclude that MN3 has better generality?

Also, did you train MN3 only to learn real/spoof, or did you use the attributes from CelebA-Spoof as well (like the way they trained AENet)?

transfer learning step?

Hi,
I have tested my images with the shared face detection+spoofing model and found a few images where it is failing.
So can I train on the failure images using your spoofing model as a pre-trained one?
Otherwise, the CelebA-Spoof dataset has 200k+ images, and if I add my failure images on top of that and start the training from scratch, it will take more resources and time.

So please share the steps/commands for custom training with the pre-trained spoofing model.
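
For orientation, the generic PyTorch fine-tuning pattern looks roughly like the sketch below. This is not the repo's train.py interface (which may expose its own options for resuming from a checkpoint); model and failure_loader are placeholders for the repo's model builder and a DataLoader over the failure cases.

import torch
import torch.nn as nn

checkpoint = torch.load("MN3_antispoof.pth.tar", map_location="cpu")
model.load_state_dict(checkpoint["state_dict"])          # start from the pre-trained weights

optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)  # small LR for fine-tuning
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in failure_loader:                    # hypothetical DataLoader of failure images
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()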

The results of multi task training are worse.

The accuracy of multi-task training on CelebA-Spoof is as follows:
accuracy on test data = 0.946 AUC = 0.998 EER = 2.45 apcer = 0.55 bpcer = 7.39 acer = 3.97
However, the accuracy of single-task training is better:
accuracy on test data = 0.954 AUC = 0.998 EER = 2.41 apcer = 0.83 bpcer = 6.22 acer = 3.53
This is not consistent with the conclusion of the CelebA-Spoof paper.
Why?

Hello, I am getting the error below

File "G:\Data Science\python\Cludstrats\OpenCV\real_vs_fake\light-weight-face-anti-spoofing-master\demo_tools\wrapers.py", line 43, in get_detectioons
_, _, h, w = self.net.get_input_shape().shape
AttributeError: 'list' object has no attribute 'shape'

Convert pytorch module to openvino

I am facing this error when I run convert_model.py:

File "convert_model.py", line 46, in main
    num_layers = args.num_layers
AttributeError: 'Namespace' object has no attribute 'num_layers'

Then I removed the num_layers line by commenting it out, and got the error below:

torch.nn.modules.module.ModuleAttributeError: 'MobileNetV3' object has no attribute 'scaling'

Is anyone else facing this issue while converting the PyTorch MobileNetV3 model to ONNX? I'd appreciate it if anyone could help solve this. Thank you.

training as multi-classification task, the result was poor.

Hi,

I prepared the data as described in https://github.com/Davidzhangyuanhan/CelebA-Spoof, which has 11 different spoof types. I crop the faces using the BBox in the JSON files, then train it simply as multi-class classification with 11 classes (NOT as a multi-label task as you do).

I tried different SOTA nets: EfficientNetV2, MobileNetV3, etc.

But strangely enough, after 300 epochs I got 92% top-1 accuracy. Is there anything wrong with this?

As binary classification (spoof or not), I got top-1 accuracy of 98+, probably because of unbalanced data? The "live" class has many more images.

Have you examined the accuracy of the different spoof types?
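
If class imbalance is the suspect, one common check is a class-weighted cross-entropy loss; a minimal sketch with purely illustrative per-class counts (not taken from CelebA-Spoof):

import torch
import torch.nn as nn

# Hypothetical per-class image counts; weight each class inversely to its frequency.
class_counts = torch.tensor([120000.0, 8000.0, 7000.0])
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(4, 3)              # fake batch: 4 samples, 3 classes
targets = torch.tensor([0, 1, 2, 0])
loss = criterion(logits, targets)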

The confidence score

Hi kprokofi, thanks for your great and very detailed training pipeline project.
I want to ask about the confidence score: it returns something like [array([-0.04811478, -0.20894697, 0.00486435, ..., 0.3441045 , 0.04338361, -0.12098037], dtype=float32)] when I test on an image, and the label prediction is confidence[i][1]; negative numbers are sometimes returned.

  • Is this confidence score output correct?
  • Do I have to sort the list?
    It doesn't seem the same as in your video demo. Looking forward to hearing from you. Thank you.
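
If those values are raw logits rather than probabilities, one way to turn them into per-class confidences is a softmax; a minimal sketch (whether the repo's demo already applies one, and how the output channels are ordered, are assumptions to verify against the code):

import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability, then normalize so the scores sum to 1.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

raw = np.array([-0.048, -0.209, 0.005])   # example raw scores
probs = softmax(raw)                       # probabilities in [0, 1]
label = int(np.argmax(probs))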

image net weights

There are no files in the ImageNet weights folder on the drive. Where can I find them?

how to solve this problem?

While training with python train.py --config configs/config.py, I run into this problem:

Traceback (most recent call last):
  File "train.py", line 150, in <module>
    main()
  File "train.py", line 55, in main
    train(config, device, args.save_checkpoint)
  File "train.py", line 101, in train
    train_dataset, val_dataset, test_dataset = make_dataset(config, train_transform, val_transform)
  File "/home/xiongzhexiao/light-weight-face-anti-spoofing-master/utils.py", line 177, in make_dataset
    datasets = get_datasets(config)
  File "/home/xiongzhexiao/light-weight-face-anti-spoofing-master/datasets/database.py", line 67, in get_datasets
    'external_train': partial(external_reader, **config.external.train_params),
  File "/home/xiongzhexiao/anaconda3/lib/python3.8/site-packages/attrdict/mixins.py", line 80, in __getattr__
    raise AttributeError(
AttributeError: 'AttrDict' instance has no attribute 'train_params'
I have changed the root dir of the three datasets.

problem with running train.py

Hi, I ran into this error message while running train.py.
The training runs smoothly at the start but somehow ends abruptly with the error message:

data = [self.dataset[idx] for idx in possibly_batched_index]

File "/home/students/acct1002_03/fyp/fyp/linuxEnv/light-weight-face-anti-spoofing/datasets/celeba_spoof.py", line 50, in __getitem__
    data_item = self.data[str(idx)]
KeyError: '4046'

This error surfaces while the training process is still in its first epoch: before the training progress gets to 100%, it abruptly raises this error.

How would you propose to solve this issue?

Thank you in advance!

Can't run bash init_venv.sh

Hi, I am new to this stuff and I get an error when trying to set up the venv, as shown below:

init_venv.sh: line 3: realpath: command not found
init_venv.sh: line 20: virtualenv: command not found
init_venv.sh: line 27: venv/bin/activate: No such file or directory
cat: requirements.txt: No such file or directory
[WARNING] Model optimizer requirements were not installed. Please install the OpenVino toolkit to use one.

Activate a virtual environment to start working:
$ . venv/bin/activate

Is there something I need to change inside the init_venv.sh file?

How to use MN3_antispoof.pth.tar

Hello,

I want to use the state_dict.

My code :
check_point = torch.load("MN3_antispoof.pth.tar", map_location=torch.device('cpu'))
weight = check_point['state_dict']

and I want to use it like this:
model = VectorCNN()
model_state_dict = torch.load(weight)
model.load_state_dict(model_state_dict)

"I currently have a class VectorCNN that takes in a .xml and looks for a .bin to load the model in IE format. However, I want to use it with PyTorch itself. I want to save the entire model. For example, 'torch.load(model, 'model.pt').' "

Thanks a lot.
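
A minimal sketch of the usual pattern: the object returned by torch.load is already a dict, so its 'state_dict' entry goes straight into load_state_dict rather than through torch.load again. build_model() here is a placeholder for constructing the same MobileNetV3 architecture the checkpoint was trained with (the repo's own model builder should be used).

import torch

model = build_model()  # placeholder: build the matching PyTorch architecture

checkpoint = torch.load("MN3_antispoof.pth.tar", map_location=torch.device("cpu"))
model.load_state_dict(checkpoint["state_dict"])
model.eval()

# To save/load the whole module object afterwards:
torch.save(model, "model.pt")
model = torch.load("model.pt")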

Can't find IEPlugin

Hi Kirill,

Thank you very much for sharing this project. I would just like to know if you could help me out.

I am attempting to run demo.py to test your spoof detection, but I am getting the error below:

ImportError: cannot import name 'IEPlugin' from 'openvino.inference_engine'


Am I missing something when importing OpenVINO for Python?

Any help would really be appreciated.
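
For context, IEPlugin was deprecated and later removed from the OpenVINO Python API; newer releases expose IECore instead. A rough sketch of the IECore-based loading path (2020/2021-era API; the model paths are placeholders, and this is a possible workaround rather than the repo's actual code):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")   # placeholder paths
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
# result = exec_net.infer({input_name: preprocessed_batch})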

Question about amsoftmax.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import Parameter

class ArcMarginProduct(nn.Module):
    def __init__(self, in_feature=128, out_feature=10575, s=32.0, m=0.50, easy_margin=False):
        super(ArcMarginProduct, self).__init__()
        self.in_feature = in_feature
        self.out_feature = out_feature
        self.s = s
        self.m = m
        self.weight = Parameter(torch.Tensor(out_feature, in_feature))
        nn.init.xavier_uniform_(self.weight)

        self.easy_margin = easy_margin
        self.cos_m = math.cos(m)
        self.sin_m = math.sin(m)

        # make the function cos(theta+m) monotonically decreasing while theta in [0°, 180°]
        self.th = math.cos(math.pi - m)
        self.mm = math.sin(math.pi - m) * m

    def forward(self, x, label):
        # cos(theta)
        cosine = F.linear(F.normalize(x), F.normalize(self.weight))
        # cos(theta + m)
        sine = torch.sqrt(1.0 - torch.pow(cosine, 2))
        phi = cosine * self.cos_m - sine * self.sin_m

        if self.easy_margin:
            phi = torch.where(cosine > 0, phi, cosine)
        else:
            phi = torch.where((cosine - self.th) > 0, phi, cosine - self.mm)

        # one_hot = torch.zeros(cosine.size(), device='cuda' if torch.cuda.is_available() else 'cpu')
        one_hot = torch.zeros_like(cosine)
        one_hot.scatter_(1, label.view(-1, 1), 1)
        output = (one_hot * phi) + ((1.0 - one_hot) * cosine)
        output = output * self.s

        return output

This is the implementation of AM-Softmax I am comparing against, but in your code cos_theta is equal to the output without normalization.
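
As an aside, the snippet above is actually the additive angular margin (ArcFace) formulation, cos(theta + m). AM-Softmax proper uses an additive cosine margin, cos(theta) - m, applied to L2-normalized features and weights; a minimal sketch (parameter values are illustrative, not the repo's):

import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxHead(nn.Module):
    def __init__(self, in_features=128, num_classes=2, s=30.0, m=0.35):
        super().__init__()
        self.s, self.m = s, m
        self.weight = nn.Parameter(torch.empty(num_classes, in_features))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x, label):
        # cos(theta) between L2-normalized embeddings and class weight vectors
        cos_theta = F.linear(F.normalize(x), F.normalize(self.weight))
        one_hot = torch.zeros_like(cos_theta).scatter_(1, label.view(-1, 1), 1)
        # Subtract the margin only for the target class, then scale by s.
        return self.s * (cos_theta - one_hot * self.m)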
