
fahadshamshad / clip2protect

[CVPR 2023] Official repository of paper titled "CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search".

Home Page: https://fahadshamshad.github.io/Clip2Protect/

Languages: Python 93.15%, Cuda 6.85%
Topics: text-guidance, text-guided-image-manipulation, face-manipulation, face-recognition, privacy-protection, dodging, impersonation, makeup-transfer, stylegan, vision-language

clip2protect's Introduction

Hi there 👋

  • 🕘 I have held technical roles as a Senior Machine Learning Engineer at OMNO.AI and as a Research Associate at Information Technology University, Lahore.
  • 🥇 I did my Master's in Electrical Engineering at NUST Islamabad, and my Bachelor's at the Institute of Space Technology, Islamabad.
  • 🌱 My research interests include Image Reconstruction, Medical Image Analysis, and Computational Imaging.
  • 📫 How to reach me: https://fahadshamshad.github.io | [email protected]

clip2protect's People

Contributors

fahadshamshad, yuuma002


clip2protect's Issues

experiment problem

I appreciate you sharing your work. However, I'm facing some challenges with the generated results when I directly run the command "python main.py --data_dir input_images --latent_path latents.pt --protected_face_dir results". The generated images do not have satisfactory visual quality. I am specifically interested in performing makeup transfer. Could you please provide guidance on how to resolve this issue?

AttributeError: 'dict' object has no attribute 'eval'

Configurations:
I have CUDA 12.0 on my local system, so I had to install the latest PyTorch version to enable GPU support.
I was able to get the latent code (latents.pt), obtain the FR model, and place everything in the appropriate directories.

When I tried running

python main.py --data_dir input_images --latent_path latents.pt --protected_face_dir results

I got

AttributeError: 'dict' object has no attribute 'eval'

The error is raised at line 209 of adversarial_optimization.py, i.e.

g_ema = torch.load(self.generators[ff]).eval()  # loading fine-tuned generator

Generally, we call model.load_state_dict(torch.load('weights.pt')) after creating the model instance with model = Model(). Is the above error caused by skipping this pattern, or does it arise from the difference in PyTorch versions?
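The traceback suggests the checkpoint file contains a plain state_dict (a Python dict of tensors) rather than a pickled module object, so calling .eval() on it fails. A minimal sketch of the usual fix, using a stand-in Generator class (the real class and checkpoint layout in the repository may differ):

```python
import torch
import torch.nn as nn

# Stand-in for the fine-tuned StyleGAN generator; the actual class and its
# constructor arguments come from the repository's model code.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 8)

    def forward(self, x):
        return self.layer(x)

# Fine-tuning scripts often save only the weights, not the whole module.
g = Generator()
torch.save(g.state_dict(), "finetuned_generator.pt")

# torch.load() then returns a dict, which has no .eval() method.
ckpt = torch.load("finetuned_generator.pt")
assert isinstance(ckpt, dict)

# Fix: instantiate the model first, then load the weights into it.
g_ema = Generator()
g_ema.load_state_dict(ckpt)
g_ema.eval()  # valid now: g_ema is a module, not a dict
```

If the saved checkpoint wraps the weights in a key (e.g. a hypothetical `ckpt["g_ema"]`), that sub-dict is what should be passed to load_state_dict.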

Paired Data for Dodging Attack

Thanks for your excellent work! Could you provide the paired data for CelebA-HQ and LFW for the face verification and face identification tasks?

Maybe a bug in your code

I found a possible bug in your code.
In pivot_tuning.py, line 79, it is supposed to be self.noise_save.append(noise_) instead of self.noise_save.append(noises), which leads to continuously increasing CUDA memory usage.
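One way such an append can grow GPU memory: storing tensors that are still attached to the autograd graph keeps the whole graph (and its intermediate buffers) alive across iterations. A minimal CPU sketch of the pattern, assuming the saved noises are only needed as values and not for further backpropagation:

```python
import torch

x = torch.randn(4, requires_grad=True)
saved_attached, saved_detached = [], []

for _ in range(3):
    noise = x * 2                            # intermediate tensor, attached to the graph
    saved_attached.append(noise)             # retains the autograd graph every iteration
    saved_detached.append(noise.detach())    # stores the values only, graph can be freed

# Attached copies still carry autograd state; detached ones do not.
assert saved_attached[0].requires_grad is True
assert saved_detached[0].requires_grad is False
```

In a training loop on the GPU, the attached variant accumulates graph buffers in CUDA memory each iteration, which matches the continuously increasing usage described above.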

Some questions about the lfw experiment

Sorry to bother you. I looked at your public code and it is very well written. Following your process, I selected 6 images from the LFW dataset and obtained their individual latent codes as well as the concatenated latent code (obtaining the latent code for each image requires modifying e4e's code). I then fed these into your code for fine-tuning and generation, but the protected image that is ultimately generated, with the red lips, looks very strange and differs greatly from the original image; also, the inverted image generated in the first stage is not very clear. Could I ask you where this problem might lie?

LADN split

Hi,

Thanks for the great work.

Do you have the split file of the LADN dataset into 4 groups, and the target IDs? I tried to look at the AMT-GAN paper and its GitHub repository, but I am still unable to find it.

Thank you,

Some questions about the calculation of the protection success rate (PSR) and the dodging attack.

Your work is excellent, but I have a few questions:

  1. In the adversarial loss for dodging attacks, why is the cosine distance between the generated identity and the target identity used as the first term of the adversarial loss? As far as I know, dodging attacks should not require a target identity as guidance; that is, it is not necessary to minimize the cosine distance between the generated identity and the target identity (the first term of the adversarial loss). It is only necessary for the distance between the generated identity and the original identity to be large enough (the second term of the adversarial loss).
  2. When calculating the protection success rate (PSR), you compute the cosine similarity between the generated portrait (i.e., the protected portrait) and the target portrait identity in the function "black_box". Then, in the function "quan", you count a cosine similarity greater than the system threshold τ as a successful attack. My understanding is that the cosine similarity should be greater than (1-τ) for a successful protection, or equivalently the cosine distance should be less than τ.
  3. If the first point is correct, that is, the dodging attack involves two adversarial loss terms, then in order to ensure that the protected image is identified as the target identity, the cosine distance between the protected image and the target identity should be less than the cosine distance between the protected image and the original identity. That is, the optimization should terminate when the adversarial loss drops below 0. However, you terminate the optimization after only 50 iterations; I think this is too early, and the protection effect seems not very good.
    I would be very grateful if you could reply to me.
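For concreteness, the two readings of the PSR criterion debated above can be sketched numerically. The embeddings, threshold value, and helper function below are illustrative only, not taken from the repository:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative face embeddings (real ones would come from an FR model).
protected = np.array([0.9, 0.1, 0.2])
target = np.array([1.0, 0.0, 0.3])

tau = 0.6  # hypothetical system threshold; real systems tune this per FR model

sim = cosine_similarity(protected, target)
dist = 1.0 - sim  # cosine distance

# The two criteria under discussion:
success_by_similarity = sim > tau   # "similarity exceeds the threshold" reading
success_by_distance = dist < tau    # "distance below the threshold" reading
```

Note the two criteria only coincide when τ = 0.5, which is why the choice of convention (similarity vs. distance, τ vs. 1-τ) changes the reported PSR.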
