
Comments (23)

cavalleria commented on June 7, 2024

Thanks for your interest. I have not yet compared the effects of these data augmentation methods; I will run a series of comparison experiments soon.

from cavaface.

fuxuliu commented on June 7, 2024

@cavalleria thanks for your reply. Looking forward to the results of your experiments.



cavalleria commented on June 7, 2024

@Gary-Deeplearning the data augmentation results are updated in the model zoo.


xsacha commented on June 7, 2024

I think for the data augmentation, you need to train it longer.
You can also resume from the baseline and train from there instead (which is what I do). I always seem to get higher accuracy than the baseline.


cavalleria commented on June 7, 2024

Yes, data augmentation may need more training epochs. Can you provide your experiment results and training details?


cavalleria commented on June 7, 2024

@xsacha if it's possible, could you add the results of resumed training to the model zoo?


fuxuliu commented on June 7, 2024

@cavalleria I checked the results in the model zoo. It seems the data augmentation gets lower accuracy?


cavalleria commented on June 7, 2024

> @cavalleria I checked the results in the model zoo. It seems the data augmentation gets lower accuracy?

The models I trained from scratch with the same hyperparameters as the baseline do not seem better than the baseline.


xsacha commented on June 7, 2024

@cavalleria my results wouldn't be compatible as I use a different face detector and dataset(s).

I start with the fully trained baseline backbone and retrain with LR=0.01 + augmentations turned on

Some blur augmentations are good btw. I use a third-party package (kornia) to do Motion Blur (very effective!) and Gaussian Blur. I also have success improving accuracy with the augmentations in here.

I think the comparisons done in Model Zoo probably aren't very helpful though as augmented data is always going to perform worse than non-augmented with the same training time. It's more information for the model to learn.
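To make the blur idea concrete, here is a toy, dependency-free sketch of the motion-blur effect. This is only an illustration of the concept, not kornia's implementation; kornia's `RandomMotionBlur` additionally randomizes the blur angle, direction, and application probability:

```python
def horizontal_motion_blur(img, kernel_size=3):
    """Toy horizontal motion blur: each pixel becomes the mean of a
    horizontal window of up to `kernel_size` pixels (clamped at the
    image edges). `img` is a grayscale image as a list of rows."""
    k = kernel_size // 2
    out = []
    for row in img:
        width = len(row)
        blurred = []
        for x in range(width):
            lo, hi = max(0, x - k), min(width, x + k + 1)
            blurred.append(sum(row[lo:hi]) / (hi - lo))
        out.append(blurred)
    return out
```

In a real pipeline this kind of blur would be applied inside the data loader with some probability p rather than to every sample.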


cavalleria commented on June 7, 2024

> @cavalleria my results wouldn't be compatible as I use a different face detector and dataset(s).
>
> I start with the fully trained baseline backbone and retrain with LR=0.01 + augmentations turned on
>
> Some blur augmentations are good btw. I use a third-party package (kornia) to do Motion Blur (very effective!) and Gaussian Blur. I also have success improving accuracy with the augmentations in here.
>
> I think the comparisons done in Model Zoo probably aren't very helpful though as augmented data is always going to perform worse than non-augmented with the same training time. It's more information for the model to learn.

In my experience, blur augmentation is definitely effective. I think a comparison with the same hyperparameters is fairer, but if you want to improve accuracy, you can tune the hyperparameters and fine-tune the models.


ReverseSystem001 commented on June 7, 2024

The purpose of CutMix and MixUp is to increase local information; they also help classify objects when they are occluded, which is useful in object detection. There is little occlusion in face recognition data. If we changed CutMix to cut the training data through the middle of the face, the side-face performance might improve. If anyone has time, it's worth trying.


luameows commented on June 7, 2024

I've tried this kind of augmentation by erasing half of the aligned face to simulate a face with a mask. The loss decreased normally while accuracy was worse. My iteration count was the same as the baseline, which may be the reason for this.
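The half-face erasing described here can be sketched in a few lines. This is a minimal illustration, assuming images are lists of pixel rows; which half to erase and what fill value to use are arbitrary choices:

```python
def erase_lower_half(img, fill=0):
    """Simulate a worn mask by replacing the lower half of an aligned
    face with a constant value. `img` is a list of rows of pixels;
    a new image is returned and the original is left unmodified."""
    h = len(img)
    return [row[:] if r < h // 2 else [fill] * len(row)
            for r, row in enumerate(img)]
```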


ReverseSystem001 commented on June 7, 2024

> I've tried this kind of augmentation by erasing half of the aligned face to simulate a face with a mask

CutMix takes random crops (the crop size is random, not fixed) from the images in one batch and generates a new picture and target. I mean: divide the face into left and right halves, and if you use CutMix, change the code so that different halves are combined to generate a new picture. This can increase the training data, but I am not sure whether it works. I think the iteration count should be increased, because this kind of augmentation indirectly increases the training data.
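A minimal sketch of this left/right variant, assuming images are lists of pixel rows and each image pairs with a random partner in the batch (the pairing and label handling here are illustrative, not a drop-in CutMix replacement):

```python
import random

def half_face_mix(batch, rng=random):
    """Left/right CutMix variant: each image keeps its own left half and
    takes the right half from a randomly chosen image in the same batch.
    Returns the mixed images and the partner indices (the labels would
    have to be mixed with the same pairing). `batch` is a list of
    images, each a list of rows of pixels."""
    half = len(batch[0][0]) // 2
    mixed, partners = [], []
    for img in batch:
        j = rng.randrange(len(batch))
        partners.append(j)
        mixed.append([row[:half] + batch[j][r][half:]
                      for r, row in enumerate(img)])
    return mixed, partners
```

Unlike standard CutMix, the mixing ratio here is fixed at one half, so the label mix would be a 50/50 blend of the two identities.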


cavalleria commented on June 7, 2024

> I've tried this kind of augmentation by erasing half of the aligned face to simulate a face with a mask. The loss decreased normally while accuracy was worse. My iteration count was the same as the baseline, which may be the reason for this.

What's your test data? I think the test data should include such occluded faces, which would verify the effectiveness of your data augmentations.


luameows commented on June 7, 2024

> What's your test data? I think the test data should include such occluded faces, which would verify the effectiveness of your data augmentations.

Just LFW as a fast validation set. Actually, the experiment I did above may provide little information for you.
Recently I found my alignment method was too old (it just rotates the eyes to the horizontal position and crops the face using the inter-eye distance) rather than using Procrustes analysis as most papers do. This method cuts away too many pixels above the eyebrows, which may leave little information for masked face recognition.
Also, I've trained a network using a CBAM block for masked face recognition, and the result on my masked-face dataset is OK (compared to my baseline without CBAM). But the spatial part of CBAM cannot be used in Caffe (the tensor.max() op cannot be expressed in a Caffe prototxt), so maybe I need to make some modifications.
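The eye-based alignment mentioned above boils down to a single rotation. A sketch of computing that rotation angle (Procrustes-style alignment would instead solve for the best similarity transform over a full set of landmarks, e.g. the common five-point set):

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Angle in degrees to rotate the image so the eyes become
    horizontal. Eyes are (x, y) points in image coordinates."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```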


xsacha commented on June 7, 2024

LFW should reach a near-perfect score (99.83+%) very early on, like after the first LR drop.
Edit: You mean on masks? You should be getting around 99.5% after the first LR drop, I think.


luameows commented on June 7, 2024

Here I come again~
I used MaskTheFace to put masks on my own face datasets. After fine-tuning on this dataset, I found the masked-vs-unmasked accuracy got better while the normal-case accuracy dropped quickly. That means masked faces may have a far different distribution from normal faces.


Hzzone commented on June 7, 2024

> @cavalleria my results wouldn't be compatible as I use a different face detector and dataset(s).
>
> I start with the fully trained baseline backbone and retrain with LR=0.01 + augmentations turned on
>
> Some blur augmentations are good btw. I use a third-party package (kornia) to do Motion Blur (very effective!) and Gaussian Blur. I also have success improving accuracy with the augmentations in here.
>
> I think the comparisons done in Model Zoo probably aren't very helpful though as augmented data is always going to perform worse than non-augmented with the same training time. It's more information for the model to learn.

Could you tell us what hyper-parameters you used in RandomMotionBlur if convenient? Thanks a lot!


Hzzone commented on June 7, 2024

I have tried to fine-tune the models with motion blur or Gaussian blur, but I failed to get better accuracy. I will try them from scratch with p=0.1; any updates will be posted here.


Hzzone commented on June 7, 2024

Unfortunately, it has not improved my performance :(


John1231983 commented on June 7, 2024

Very good discussion on data augmentation. I have some questions after reading the thread:

  1. Do motion blur and hflip improve performance? Other augmentations may decrease accuracy on normal faces (LFW, AgeDB-30, ...). Am I right?

  2. For the MaskTheFace augmentation, @luameows mentioned that performance on masked faces increases but on normal faces decreases. That means the model is overfitting to the masked-face distribution. Do you agree?

In conclusion, data augmentation may not be useful for the face recognition task.


xsacha commented on June 7, 2024

How well the augmentation scores depends on the distribution of the test data.
Ultimately, if you have perfect frontal, high-quality photos for both train and test, you'll have over-fitted to that scenario. Adding augmentations like masks and blur will likely improve your model overall but do worse on those tests. There is only so much your model can encode until you go deeper.

