face-vid2vid's People

Contributors

zhengkw18

face-vid2vid's Issues

Keypoint prior loss function

Thank you for your work. May I ask why your keypoint prior loss function is slightly different from the one in the original paper?

In the paper (A.2), the keypoint prior loss function is:

[screenshot of Eq. (A.2) from the paper: the keypoint prior loss sums max(0, D_t − d(x_i, x_j)) over keypoint pairs, plus a term keeping the mean keypoint depth close to z_t]

However, yours in losses.py is:

loss = (
    torch.max(0 * dist_mat, self.Dt - dist_mat).sum((1, 2)).mean()
    + torch.abs(kp_d[:, :, 2].mean(1) - self.zt).mean()
    - kp_d.shape[1] * self.Dt
)

I was wondering why you subtract kp_d.shape[1] * self.Dt at the end.
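For what it's worth, since self.Dt and the number of keypoints are both constants, the subtracted term only shifts the reported loss value and cannot change the gradients. A stdlib-only toy sketch (all values hypothetical, not real keypoint distances):

```python
# Toy illustration: subtracting K * Dt is a constant offset, so the
# minimizer and gradients are unchanged; only the loss value shifts.
Dt, K = 0.1, 4
dist_mat = [[0.05, 0.2], [0.12, 0.03]]  # stand-in pairwise distances

base = sum(max(0.0, Dt - d) for row in dist_mat for d in row)
shifted = base - K * Dt

print(round(base - shifted, 2))  # the gap is always K * Dt = 0.4
```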

load_videos.py - Can not load video UW1c9E8nfxQ, broken link

Hi! Thank you for your great contribution! I would love to use your model! I'm trying to build the default dataset folder but:

python load_videos.py --workers=8

Number of videos: 3442
0it [00:00, ?it/s]Can not load video UW1c9E8nfxQ, broken link
1it [00:08, 8.73s/it]Can not load video pbm-5KhWXlc, broken link
2it [00:10, 4.82s/it]Can not load video tMP5U3jYNkg, broken link
Can not load video LZ_Hw9J62KE, broken link
4it [00:10, 1.84s/it]Can not load video u3odsIbYouc, broken link
Can not load video yLA2n3coUgk, broken link
6it [00:11, 1.01s/it]Can not load video ULBH3A8DjPM, broken link
Can not load video B5jqlhXWkOo, broken link
8it [00:11, 1.59it/s]Can not load video LNlufCgIx_E, broken link
Can not load video shR-y9jzeHg, broken link
10it [00:22, 2.44s/it]Can not load video 8xomuTM5Jm8, broken link
Can not load video zWig265SViA, broken link
Can not load video q1mNeW_BrSw, broken link
13it [00:22, 1.42s/it]Can not load video vN5K8HEgafI, broken link
Can not load video ldAbe81ePpE, broken link
15it [00:22, 1.02s/it]Can not load video daZUIa8FA_M, broken link
Can not load video dwnIdViJS0U, broken link
17it [00:26, 1.23s/it]Can not load video QdBQTHX55yI, broken link
18it [00:33, 2.25s/it]Can not load video 1fpTDuFfoB0, broken link
19it [00:33, 1.83s/it]Can not load video DE089Obo6L4, broken link
20it [00:33, 1.46s/it]Can not load video sh6J3wEmceA, broken link
21it [00:33, 1.14s/it]Can not load video Hyzl8482nfY, broken link
22it [00:33, 1.14it/s]Can not load video vuVdwmx_1yQ, broken link

Can you help me?!
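As a workaround while waiting for an answer, the failed IDs can be scraped from the log and kept as a skip list for retries. A minimal sketch, assuming the exact message format shown above:

```python
import re

# Two sample lines copied from the log above (the tqdm progress
# prefixes are left in on purpose, since the error messages are
# interleaved with them on the same lines).
log = (
    "0it [00:00, ?it/s]Can not load video UW1c9E8nfxQ, broken link\n"
    "1it [00:08, 8.73s/it]Can not load video pbm-5KhWXlc, broken link\n"
)

# Capture the video ID between the fixed message prefix and suffix.
failed = re.findall(r"Can not load video (\S+), broken link", log)
print(failed)  # ['UW1c9E8nfxQ', 'pbm-5KhWXlc']
```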

Is this still the SOTA for this task?

Hi, I wanted to ask: is this still the state of the art in this area? I recently saw the SadTalker work, and its results don't seem clearly better than this one's. Do you know of any projects or papers that do better?

training data and command

Hello! Thank you for your great contribution!
I want to know how to train this project. What format should the data be in (a series of folders containing video frames, a series of videos, or something else), and what directory layout is expected during training? What are the corresponding Python training commands?
Thank you!

Continuing training on your shared model

Thank you for your shared model! I'm now continuing training from it on the VoxCeleb2 sub-datasets (part_b, part_c, and part_d, about 380k videos; the paper reports using 280k videos).
After every epoch I evaluate the model, but its performance seems to get gradually worse.
Although the training losses are decreasing, the PSNR of the generated videos is decreasing too, and the visual quality is worse as well. It's strange.

Do you have any thoughts about this? Could you share any training details you think are important? Thank you a lot.

Top: shared model.
Bottom: the model after continued training.
You can see the background moving with my model.

[attached videos: now_output, now1_output]
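One sanity check worth doing here is pinning down exactly how PSNR is computed in the evaluation, since the assumed peak value and averaging order change the number. A minimal stdlib-only sketch of per-frame PSNR (toy pixel values, not from the model):

```python
import math

def psnr(ref, out, peak=255.0):
    # Peak signal-to-noise ratio between two equal-length pixel lists.
    mse = sum((r - o) ** 2 for r, o in zip(ref, out)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak**2 / mse)

# A constant error of 10 gray levels gives MSE = 100.
print(round(psnr([0] * 16, [10] * 16), 2))  # 10*log10(65025/100) ≈ 28.13
```

Averaging PSNR over frames versus computing it from the video-wide MSE gives different results, which can matter when comparing against the paper's numbers.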

about the network

Hello, zhengkw18, thank you for your contribution!

The output “delta” of the HPE_EDE model should be the expression of the person, not the head pose, right?
But when I freeze the yaw, pitch, and roll matrices and extract only the delta feature from the driving person's HPE model, the source person still shows head motion. What am I doing wrong?

I want to transfer one person's expression from another with no head motion. How should I do this?
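In case it helps frame the question: the paper composes driving keypoints as x = s·(R·x_c) + t + δ, where s, R, t carry the head pose and δ the expression deformation. Expression-only transfer would mean keeping the source's s, R, t and borrowing only δ from the driving frame. A 2D toy sketch of that composition (all numbers hypothetical, not real model outputs):

```python
# face-vid2vid composes keypoints as x = s * (R @ x_c) + t + delta,
# with pose in (s, R, t) and expression in delta. This 2D toy shows
# expression-only transfer: source pose, driving delta.

def compose(xc, s, R, t, delta):
    # 2D stand-in for the 3D transform in the paper.
    rotated = [R[0][0] * xc[0] + R[0][1] * xc[1],
               R[1][0] * xc[0] + R[1][1] * xc[1]]
    return [s * rotated[0] + t[0] + delta[0],
            s * rotated[1] + t[1] + delta[1]]

xc = [1.0, 0.0]                               # canonical keypoint
src_pose = (1.0, [[1, 0], [0, 1]], [0.0, 0.0])  # source s, R, t (identity pose)
drv_delta = [0.05, -0.02]                     # driving expression deformation

x = compose(xc, *src_pose, drv_delta)
print(x)  # [1.05, -0.02]
```

If the head still moves with this recombination, the translation t or scale s may also be coming from the driving frame somewhere in the pipeline.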

about the ckp epoch

Thanks a lot for your code and pre-trained model.

Now I want to continue training from your pretrained model. After loading it, the epoch counter starts from 12400, but the checkpoint name is 00000100-ckp.pth.tar, which would mean the checkpoint was saved after 100 epochs. Do you have any idea what causes this? Thank you!
