tengfei-wang / HFGI
CVPR 2022 HFGI: High-Fidelity GAN Inversion for Image Attribute Editing
Home Page: https://tengfei-wang.github.io/HFGI/
May I ask how the .pt files for the editing attributes were obtained?
When I try to use a checkpoint trained on my own dataset, I meet this problem:
RuntimeError: Error(s) in loading state_dict for Encoder4Editing: Unexpected key(s) in state_dict: "styles.14.convs.0.weight", "styles.14.convs.0.bias", "styles.14.convs.2.weight", "styles.14.convs.2.bias", "styles.14.convs.4.weight", "styles.14.convs.4.bias", "styles.14.convs.6.weight", "styles.14.convs.6.bias", "styles.14.convs.8.weight", "styles.14.convs.8.bias", "styles.14.convs.10.weight", "styles.14.convs.10.bias", "styles.14.linear.weight", "styles.14.linear.bias", "styles.15.convs.0.weight", "styles.15.convs.0.bias", "styles.15.convs.2.weight", "styles.15.convs.2.bias", "styles.15.convs.4.weight", "styles.15.convs.4.bias", "styles.15.convs.6.weight", "styles.15.convs.6.bias", "styles.15.convs.8.weight", "styles.15.convs.8.bias", "styles.15.convs.10.weight", "styles.15.convs.10.bias", "styles.15.linear.weight", "styles.15.linear.bias", "styles.16.convs.0.weight", "styles.16.convs.0.bias", "styles.16.convs.2.weight", "styles.16.convs.2.bias", "styles.16.convs.4.weight", "styles.16.convs.4.bias", "styles.16.convs.6.weight", "styles.16.convs.6.bias", "styles.16.convs.8.weight", "styles.16.convs.8.bias", "styles.16.convs.10.weight", "styles.16.convs.10.bias", "styles.16.linear.weight", "styles.16.linear.bias", "styles.17.convs.0.weight", "styles.17.convs.0.bias", "styles.17.convs.2.weight", "styles.17.convs.2.bias", "styles.17.convs.4.weight", "styles.17.convs.4.bias", "styles.17.convs.6.weight", "styles.17.convs.6.bias", "styles.17.convs.8.weight", "styles.17.convs.8.bias", "styles.17.convs.10.weight", "styles.17.convs.10.bias", "styles.17.linear.weight", "styles.17.linear.bias".
But if I use the checkpoint provided by you, there is no problem.
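For context, the unexpected `styles.14`–`styles.17` keys suggest a resolution mismatch: an e4e-style encoder for 1024px output has 18 map2style blocks, while a 256px one has only 14, so a checkpoint saved at one size cannot be loaded strictly into a model built for the other. A minimal sketch of a workaround (assuming the mismatch is only these extra blocks; the cleaner fix is to construct the encoder with the same output size it was trained for):

```python
# Sketch: drop checkpoint entries the model does not have before loading.
# Assumption: the extra "styles.14"-"styles.17" keys come from a checkpoint
# saved for a larger output resolution than the model being constructed.
def filter_state_dict(checkpoint, model_keys):
    """Keep only checkpoint entries whose keys the model actually has."""
    return {k: v for k, v in checkpoint.items() if k in model_keys}

# Typical PyTorch use (hypothetical names):
#   ckpt = torch.load("encoder.pt", map_location="cpu")["state_dict"]
#   net.load_state_dict(filter_state_dict(ckpt, set(net.state_dict())),
#                       strict=False)
```

Note that silently dropping weights only makes sense when the remaining layers are architecturally identical; otherwise retraining at the intended resolution is the safer path.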
Thank you very much for sharing!
I am very interested in your ADA module and want to use it in my work.
Could you open source the training code for this model?
Thank you very much.
What if I want to use this model to put a mask on a person instead of modifying age and smile? How can I generate attribute-editing codes for masked faces?
Thanks!
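For background, the released age/smile edits are latent-space directions applied to the inverted code; a new attribute such as "wearing a mask" would need its own direction learned from labeled latents. A minimal InterFaceGAN-style sketch (not HFGI-specific; `direction` and `scale` are assumptions for illustration):

```python
import numpy as np

# Generic latent edit: add a learned attribute direction `direction` to the
# inverted latent `w`; `scale` controls edit strength and sign controls
# direction (e.g. add vs. remove the attribute). A mask direction would be
# trained from mask/no-mask labeled latent codes.
def apply_edit(w, direction, scale):
    direction = direction / np.linalg.norm(direction)  # unit-length direction
    return w + scale * direction
```

In HFGI the edited code is then fed through the consultation branch for high-fidelity reconstruction, so the same mechanism should apply once a suitable direction exists.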
May I ask why the image generated by changing the encoder is not very high-definition?
Hello, I'm struggling to train a model on the FFHQ 256x256 dataset. I trained an Encoder4Editing model on the entire FFHQ dataset (66k images for training, 4k for validation) and the results look comparable to the ones in the Encoder4Editing paper. Then I trained an HFGI model based on that e4e checkpoint, with good results as well. But when I try to project an image, the inversion looks noticeably different from the input image. This problem doesn't appear when I use your pretrained FFHQ 1024x1024 model. I'm assuming that it should be possible to train a 256x256 model with comparable quality.
Could you share a FFHQ 256x256 checkpoint so that I can validate my results? Thank you!
Many thanks for your great work! However, when I apply it to StyleCLIP for hair editing, after the step of adding conditions to the generator, it not only fine-tunes the face but also adds the original hair back onto it. Could you give me some suggestions on that? Thanks again!
Thanks for sharing your code and your excellent work!
I have a question about how the ADA works on X_edit. I notice that when training the ADA module, the low-fidelity X_o is taken as the target image I for alignment, but there is no X_edit. Thanks for your reply.
Excuse me, why does inversion work correctly only on cuda:0, while images inverted on other devices all come out as solid colors?
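One common cause of this symptom (hedged guess, not confirmed by the source) is a device hard-coded as `cuda:0` somewhere in the model or checkpoint loading, so tensors end up split across GPUs. A sketch of a workaround that sidesteps both without touching the repo's code, by making the chosen GPU appear as cuda:0:

```python
import os

# Restrict CUDA visibility so GPU `index` shows up as cuda:0.
# Must run before torch initializes CUDA (i.e. before the first CUDA call).
def select_gpu(index):
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index)
    return os.environ["CUDA_VISIBLE_DEVICES"]

# Checkpoints can also be remapped explicitly in PyTorch:
#   ckpt = torch.load("ckpt.pt", map_location="cuda:0")
```

Equivalently, launching with `CUDA_VISIBLE_DEVICES=1 python ...` from the shell achieves the same remapping.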
In your code, you do not use a discriminator and an additional adversarial loss for better reconstruction.
This is different from what is written in the paper.
Is there another version of the code that leverages a well-trained discriminator, or are the checkpoint results based on the official code without a discriminator?
Hi, thanks for sharing the code!
I have a question about the resolution of the consultation branch. The default resolution is 64x64 at layer 7. Have you tested higher resolutions, such as layer 11 for 256 or layer 9 for 128, as shown below:
HFGI/models/stylegan2/model.py
Line 530 in e30f33c
hi there,
thanks for your great work!
couldn't find any demo on editing eyes (open/close); quite interested in it.
waiting for your response.
Thank you for the great work! I have tried the inference code with the pretrained checkpoint for pose editing, but there are obvious artifacts in the edited images. Could you please double check that the checkpoint is correct?
By the way, why is pose editing not included in the inference code or the playground notebook?
Excuse me, why is the inverted image a solid color when I run the playground.ipynb file?