progressive-gan-pytorch's Introduction

Progressive GAN in PyTorch

Implementation of Progressive Growing of GANs (https://arxiv.org/abs/1710.10196) in PyTorch

Currently implemented and tested up to 128x128 images.

Usage:

python train.py -d {celeba, lsun} PATH

Currently the CelebA and LSUN datasets are supported. (Warning: using the LSUN dataset requires a large amount of time to build the index cache.)

Sample

  • Sample from the model trained on CelebA

Sample of the model trained on CelebA

  • Sample from the model trained on LSUN (dog)

Sample of the model trained using LSUN (dog)

progressive-gan-pytorch's People

Contributors: rosinality


progressive-gan-pytorch's Issues

Discriminator Loss Formula

Thanks for your nice and clean work!

When I tried to understand your code, I found that

    b_size = real_image.size(0)
    real_image = real_image.to(device)
    label = label.to(device)
    real_predict = encoder(
        real_image, step=step, alpha=alpha)
    real_predict = real_predict.mean() \
                   - 0.001 * (real_predict ** 2).mean()

I don't quite understand why the variable real_predict needs to be modified to real_predict.mean() - 0.001 * (real_predict ** 2).mean(). Why don't we just have the discriminator output a single value? And how did you arrive at this formula?

Again, many thanks for your excellent work. I am new to deep learning and GANs, so sorry for any inconvenience caused.
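For context, the discriminator here does output a single score per image; the mean is over the batch, and the small squared term matches the drift penalty described in the ProGAN paper (Appendix A.1, with epsilon_drift = 0.001). A minimal sketch of the two terms, using random scores in place of real discriminator outputs:

```python
import torch

torch.manual_seed(0)

# Hypothetical critic scores for a batch of 8 real images.
real_predict = torch.randn(8)

# Plain WGAN critic term: the critic tries to maximize the mean score
# on real images.
wgan_term = real_predict.mean()

# The extra 0.001 * E[D(x)^2] is the drift penalty from the ProGAN
# paper: it gently pulls the critic's raw output toward zero so its
# magnitude cannot drift arbitrarily far during training, without
# changing which images score higher relative to each other.
drift_penalty = 0.001 * (real_predict ** 2).mean()

loss_term = wgan_term - drift_penalty
```

The penalty is always non-negative, so subtracting it slightly discourages large raw outputs while leaving the ranking of scores intact.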

SpectralNorm

First of all, thanks for sharing! Very interesting.

I see that you experimented with SpectralNorm as well. Can you share your insights on the effect it had on training?

Discriminator Loss

First of all, thanks for your very intuitive implementation.
I was following your code and saw

    fake_image = generator(
        Variable(torch.randn(b_size, code_size)).cuda(),
        label, step, alpha)
    fake_predict, fake_class_predict = discriminator(
        fake_image, step, alpha)
    fake_predict = fake_predict.mean()
    fake_predict.backward(one)
    real_predict, real_class_predict = discriminator(
        real_image, step, alpha)
    real_predict = real_predict.mean() \
                   - 0.001 * (real_predict ** 2).mean()
    real_predict.backward(mone)

But I think we should be passing one to real_predict and mone to fake_predict, because the discriminator must learn that fake images get the mone label and real images get the one label?
Please explain.
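The signs in the snippet do implement the standard WGAN critic objective: calling backward(one) on fake_predict pushes the fake score down under a minimizing optimizer, and backward(mone) on real_predict pushes the real score up. An equivalent, perhaps clearer, single-scalar formulation (sketched with a toy linear critic standing in for the repository's discriminator):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy critic standing in for the repository's discriminator (assumption).
critic = nn.Linear(16, 1)
real_image = torch.randn(4, 16)
fake_image = torch.randn(4, 16)

# WGAN critic objective: maximize D(real) - D(fake),
# i.e. minimize D(fake) - D(real), written as one scalar loss.
d_loss = critic(fake_image).mean() - critic(real_image).mean()
d_loss.backward()

# fake_predict.backward(one) followed by real_predict.backward(mone)
# accumulates exactly these gradients: +1 * d(fake)/dw and -1 * d(real)/dw.
```

So no labels are involved at all: the critic outputs an unbounded score, and the signs simply say "raise the real score, lower the fake score."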

trained-model

Thank you for the great work!
I would appreciate it if you could release a pre-trained model.

Generating new samples using a trained model

Hi @rosinality,

Thank you for writing this code, and sorry to open an issue on such an old repository. I was hoping to ask about generating new samples once a model has been trained, as I can't see any code for this in the repository. I'm currently training a model on the CelebA dataset using your code and am not sure how to generate a new image once training completes.

Specifically, what values for alpha and step should I pass to my generator call in this code? Can I just pass step=5 and alpha=1?
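Once a resolution stage is fully trained, alpha=1 is the natural choice (the newest block is fully faded in), and step selects the resolution stage. A sketch of the sampling call, using a stand-in module with the same (z, step, alpha) signature as the repository's generator (the stand-in class, its output size, and code_size=512 are assumptions for illustration only):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in with the same call signature as the repository's generator;
# the real class lives in the repo's model.py.
class DummyGenerator(nn.Module):
    def __init__(self, code_size):
        super().__init__()
        self.fc = nn.Linear(code_size, 3 * 8 * 8)

    def forward(self, z, step=0, alpha=1):
        return self.fc(z).view(-1, 3, 8, 8)

code_size = 512  # assumption: must match the value used at training time
generator = DummyGenerator(code_size)
# With a real checkpoint you would restore the weights first, e.g.:
# generator.load_state_dict(torch.load(..., map_location='cpu'))
generator.eval()

with torch.no_grad():
    z = torch.randn(16, code_size)
    # alpha=1: newest resolution block fully faded in;
    # step=5 would select 128x128 if step=0 corresponds to 4x4.
    images = generator(z, step=5, alpha=1)
```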

Why use two generators ?

Hello,

I don't understand why you have two generators, with one being the running average of the other. I guess you are using the average for evaluation, but why not just evaluate the generator that you are training?

Best,
Ridha.
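For context, keeping a second copy whose weights are an exponential moving average of the training generator is a standard ProGAN trick: the averaged weights smooth out the noise of individual optimizer steps and usually give visibly better samples than the raw training weights. A minimal sketch of the accumulation step (the decay value 0.999 is an assumption, not necessarily the repository's value):

```python
import torch
import torch.nn as nn

def accumulate(g_running, g_train, decay=0.999):
    """EMA update: running = decay * running + (1 - decay) * train."""
    params_running = dict(g_running.named_parameters())
    params_train = dict(g_train.named_parameters())
    with torch.no_grad():
        for name, p in params_running.items():
            p.mul_(decay).add_(params_train[name], alpha=1 - decay)

# Toy modules standing in for the two generators.
g_train = nn.Linear(4, 4)
g_running = nn.Linear(4, 4)
g_running.load_state_dict(g_train.state_dict())

accumulate(g_running, g_train)  # called once per training step
```

Only g_train receives gradients; g_running is updated by this averaging alone and is the copy used for sampling.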

adapt the model to 256 resolution

Hi, thanks for your efforts! I was wondering what I would need to change to adapt the current model to generate at 256x256 resolution. I appreciate your reply in advance.
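In a typical progressive-GAN implementation, going from 128 to 256 means appending one more up/down-sampling block (128 -> 256) to both the generator and discriminator and allowing one more progression step. The sketch below is purely illustrative of the bookkeeping involved; the names and channel values are assumptions, not the repository's actual code:

```python
# Typical ProGAN-style channel schedule; each entry is one resolution stage.
# To reach 256x256 you append one more stage and raise the maximum step.
# (Names and values are illustrative, not the repository's code.)
channels_128 = [512, 512, 512, 512, 256, 128]  # 4x4 -> 128x128, steps 0..5
channels_256 = channels_128 + [64]             # 4x4 -> 256x256, steps 0..6

max_step_128 = len(channels_128) - 1  # 5
max_step_256 = len(channels_256) - 1  # 6

def resolution(step, base=4):
    # Each step doubles the spatial resolution, starting from base x base.
    return base * 2 ** step
```

The training loop's stopping condition (the maximum step it grows to) and the dataset resizing would need the same one-stage extension.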

how to load the model?

Hello, and first of all, thank you for sharing the code.

I am trying to load a model from a .model file saved during training.
I would be grateful if you could let me know how to load it.
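Assuming the saved .model file holds a plain state_dict (if torch.save was called on the whole module instead, torch.load alone returns the module), loading could look like the sketch below. A small stand-in module is used here; with the repository you would construct the generator class from model.py and load into that:

```python
import os
import tempfile
import torch
import torch.nn as nn

# Stand-in module; with the repository you would build the generator
# from model.py instead (assumption about class name and location).
net = nn.Linear(8, 8)

# Save a state_dict the way .model checkpoints are typically written
# (assumption: the file holds a plain state_dict).
path = os.path.join(tempfile.gettempdir(), 'example.model')
torch.save(net.state_dict(), path)

# Reload into a freshly constructed module with the same architecture.
net2 = nn.Linear(8, 8)
net2.load_state_dict(torch.load(path, map_location='cpu'))
net2.eval()  # switch to inference mode before sampling
```

map_location='cpu' lets a checkpoint trained on GPU load on a CPU-only machine.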

Question about loss backward

Hello @rosinality,

Thanks to the code you have shared, I have been getting a lot of help with my studies.

While reviewing the code, I ran into a part I don't quite understand, so I am asking here.

At lines 144 and 152 of train.py:

    real_predict.backward(Tensor(-1.0))
    fake_predict.backward(Tensor(1.0))

I am curious why the gradient passed to backward is -1.0 for real_predict and 1.0 for fake_predict.

Thank you.
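For context, the tensor passed to backward is the gradient of the final objective with respect to the tensor being backpropagated. Passing -1.0 therefore accumulates the negative gradient, so a minimizing optimizer step actually increases that score: the critic's real score is pushed up and its fake score is pushed down, which is exactly the WGAN critic objective. A toy demonstration:

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = 3.0 * x  # dy/dx = 3

# backward(gradient) accumulates gradient * dy/dx into x.grad.
y.backward(torch.tensor(-1.0))

# x.grad is -3: a minimizing optimizer step on this gradient would
# move x in the direction that *increases* y, which is the effect of
# backward(mone) on real_predict in train.py.
assert x.grad.item() == -3.0
```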
