alexiajm / relativisticgan
Code for replication of the paper "The relativistic discriminator: a key element missing from standard GAN"
Hi @AlexiaJM,
I ran an experiment with my proposed fagan (full attention GAN) architecture using the relativistic version of the hinge loss proposed in this work of yours.
The details of the experiment are here: https://github.com/akanimax/fagan#relativistic-hinge-gan-loss-experiment. Could you share your insights about the results? Are they as expected? Any help would be great.
Thank you!
Best regards,
@akanimax
Hi,
I noticed the disabled TensorBoard logging code, with a comment stating it was incompatible with TensorFlow. However, since the code doesn't seem to use TensorFlow, why not turn it back on? It seems to work fine when I try.
Hi,
Very nice work. Thanks for sharing!
I have some questions:
1. When making the 256x256 images, you used PACGAN2. In your paper, you concatenate x1 and x2. Are x1 and x2 the same image, or different images?
2. In RelativisticGAN, don't you see mode collapse?
3. What is the performance of RLSGAN? I only see RaLSGAN in your paper.
4. Is batch_size not important for RelativisticGAN?
Thank you
Everything seems great with RSGAN and RaSGAN, but when I test SGAN on CAT 64x64, it cannot generate anything with lr=0.0002. Should I change it?
Hi,
I see a comment in the Relativistic average Standard GAN code that says "You may want to resample again from real and fake data".
If I don't resample the data again, what will happen?
I get very poor results, and the loss equals 0 after training for some iterations. What causes this?
Hi,
Thanks for this interesting work. Just one question regarding the following comments in the code snippet you provided on the README page:
No sigmoid activation in last layer of generator because BCEWithLogitsLoss() already adds it
No activation in generator
Is this true? Or do you mean the discriminator is what should include/exclude the sigmoid activation?
Thanks
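For what it's worth: BCEWithLogitsLoss folds the sigmoid into the loss for numerical stability, so it is the network whose output feeds that loss, i.e. the discriminator, that should skip the final sigmoid; the generator's output activation (e.g. tanh) is a separate matter. A minimal sketch in plain Python, mirroring the PyTorch formula rather than calling the library, showing that sigmoid followed by plain BCE matches BCE-with-logits:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bce(p, y):
    # plain binary cross-entropy on a probability p with target y
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def bce_with_logits(x, y):
    # numerically stable form operating directly on the logit x,
    # following the torch.nn.BCEWithLogitsLoss formula
    return max(x, 0) - x * y + math.log1p(math.exp(-abs(x)))

for x in [-2.0, 0.3, 1.5]:
    for y in [0.0, 1.0]:
        # applying sigmoid then BCE gives the same value
        assert abs(bce(sigmoid(x), y) - bce_with_logits(x, y)) < 1e-9
```

So a discriminator trained with BCEWithLogitsLoss should output raw logits with no sigmoid on its last layer.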
I read your paper. It's really good work. But the thing I don't understand is the gradient of the generator. Why is there
If you can help me, I would be very grateful.
Take RaGAN for example: according to Algorithm 2 of the paper, the loss of RaGAN should be:
errD = (BCE_stable(y_pred - torch.mean(y_pred_fake), y) + BCE_stable(y_pred_fake - torch.mean(y_pred), y2))/2
errG = (BCE_stable(y_pred - torch.mean(y_pred_fake), y2) + BCE_stable(y_pred_fake - torch.mean(y_pred), y))/2
while yours is:
errD = (BCE_stable(y_pred - torch.mean(y_pred_fake), y) + BCE_stable(torch.mean(y_pred_fake) - y_pred, y2))/2
errG = (BCE_stable(y_pred - torch.mean(y_pred_fake), y2) + BCE_stable(torch.mean(y_pred_fake) - y_pred, y))/2
why?
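One detail that may matter here: with the usual label convention (y all ones, y2 all zeros), BCEWithLogitsLoss satisfies BCE(x, 1) == BCE(-x, 0), so negating a logit difference while swapping the target label leaves the loss value unchanged. A quick check in plain Python, using the same stable formula as BCEWithLogitsLoss (this only illustrates the identity, not which form the paper intends):

```python
import math

def bce_with_logits(x, y):
    # numerically stable binary cross-entropy on a single logit x with
    # target y, following the torch.nn.BCEWithLogitsLoss formula
    return max(x, 0) - x * y + math.log1p(math.exp(-abs(x)))

for x in [-4.0, -0.7, 0.0, 1.3, 5.0]:
    # flipping the sign of the logit and the label gives the same loss
    assert abs(bce_with_logits(x, 1.0) - bce_with_logits(-x, 0.0)) < 1e-12
```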
Could you tell me the versions of TensorFlow and PyTorch? Thank you very much!
For some reason, model quality completely falls apart after a resume. G/D_err stays stable, so I don't know what's happening. Something is still going wrong. :(
Hi @AlexiaJM; firstly, great job with your work. I am currently still reading your paper, but upon scrolling to the FID comparison results, I didn't notice ProGAN (Progressive Growing of GANs) in your experiments. The reason I suggest ProGAN is that I have recently worked with it and found it quite good and stable. Perhaps by augmenting RaLSGAN with progressive growing, you could achieve an even better FID. This is just a suggestion, though. Again, great job!
Best regards,
akanimax
Hey @AlexiaJM ,
Good job on your work. I am trying to add relativism to CycleGAN, but I'm a little confused about how to do it, since CycleGAN has 2 generators and 2 discriminators.
The generator loss in CycleGAN is calculated as follows:
# GAN loss
fake_B = G_AB(real_A)
loss_GAN_AB = criterion_GAN(D_B(fake_B), valid)
fake_A = G_BA(real_B)
loss_GAN_BA = criterion_GAN(D_A(fake_A), valid)
loss_GAN = (loss_GAN_AB + loss_GAN_BA) / 2
Discriminator loss:
# Train Discriminator A
optimizer_D_A.zero_grad()
# Real loss
loss_real = criterion_GAN(D_A(real_A), valid)
# Fake loss (on batch of previously generated samples)
fake_A_ = fake_A_buffer.push_and_pop(fake_A)
loss_fake = criterion_GAN(D_A(fake_A_.detach()), fake)
# Total loss
loss_D_A = (loss_real + loss_fake) / 2
loss_D_A.backward()
optimizer_D_A.step()
# Train Discriminator B
optimizer_D_B.zero_grad()
# Real loss
loss_real = criterion_GAN(D_B(real_B), valid)
# Fake loss (on batch of previously generated samples)
fake_B_ = fake_B_buffer.push_and_pop(fake_B)
loss_fake = criterion_GAN(D_B(fake_B_.detach()), fake)
# Total loss
loss_D_B = (loss_real + loss_fake) / 2
loss_D_B.backward()
optimizer_D_B.step()
loss_D = (loss_D_A + loss_D_B) / 2
and criterion_GAN is MSELoss().
I modified the code and added relativism, but I am not sure I did it correctly.
Generator loss:
loss_GAN_AB = (torch.mean((D_B(real_A) - torch.mean(D_B(fake_B)) + valid) ** 2) +
torch.mean((D_B(fake_B) - torch.mean(D_B(real_A)) - valid) ** 2)) / 2
loss_GAN_BA = (torch.mean((D_A(real_B) - torch.mean(D_A(fake_A)) + valid) ** 2) +
torch.mean((D_A(fake_A) - torch.mean(D_A(real_B)) - valid) ** 2)) / 2
loss_GAN = (loss_GAN_AB + loss_GAN_BA) / 2
Discriminator losses:
optimizer_D_A.zero_grad()
fake_A_ = fake_A_buffer.push_and_pop(fake_A)
errD_A = (torch.mean((D_A(real_A) - torch.mean(D_A(fake_A_.detach())) - valid) ** 2) +
torch.mean((D_A(fake_A_.detach()) - torch.mean(D_A(real_A)) + valid) **2)) / 2
errD_A.backward()
optimizer_D_A.step()
# Train Second Discriminator (B)
optimizer_D_B.zero_grad()
fake_B_ = fake_B_buffer.push_and_pop(fake_B)
errD_B =(torch.mean((D_B(real_B) - torch.mean(D_B(fake_B_.detach())) - valid) ** 2) +
torch.mean((D_B(fake_B_.detach()) - torch.mean(D_B(real_B)) + valid) **2)) / 2
errD_B.backward()
optimizer_D_B.step()
loss_D = (errD_A + errD_B) / 2
I would appreciate it if you could help me figure out how to add relativism to CycleGAN.
Thanks in advance!
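For comparison, the single-pair RaLSGAN losses can be sketched in plain Python (scores are raw discriminator outputs; function names are mine, and whether to divide by 2 is a convention). In a CycleGAN setting each of D_A and D_B would get its own copy, and it may be worth double-checking whether D_B should compare fake_B against real_B rather than real_A. A sketch of the loss shape, not a drop-in replacement:

```python
from statistics import mean

def ra_lsgan_d_loss(real_scores, fake_scores):
    # discriminator wants real scores ~1 above the average fake score
    # and fake scores ~1 below the average real score
    m_fake, m_real = mean(fake_scores), mean(real_scores)
    return (mean((r - m_fake - 1) ** 2 for r in real_scores)
            + mean((f - m_real + 1) ** 2 for f in fake_scores)) / 2

def ra_lsgan_g_loss(real_scores, fake_scores):
    # the generator loss is the discriminator loss with roles reversed
    return ra_lsgan_d_loss(fake_scores, real_scores)

# discriminator loss is 0 when real sits exactly 1 above mean fake
# and fake exactly 1 below mean real
assert abs(ra_lsgan_d_loss([0.0], [-1.0])) < 1e-12
```

One property that falls out of this shape: the generator loss is just the discriminator loss with the real and fake roles swapped, which is the symmetry relativism introduces.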
If I want to compute the real/fake accuracy of the discriminator for a batch using this method, do I need to do:
real_accuracy = sigmoid(real_logits - mean(fake_logits)) >= 0.5
fake_accuracy = sigmoid(fake_logits - mean(real_logits)) < 0.5
i.e. the same input that I would give to the BCE-with-logits loss goes into the sigmoid.
Or do I do the standard GAN thing and compute
real_accuracy = sigmoid(real_logits) >= 0.5
fake_accuracy = sigmoid(fake_logits) < 0.5
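If the goal is to measure what the relativistic loss is actually optimizing, the first option mirrors the inputs the loss sees. Since sigmoid(z) >= 0.5 exactly when z >= 0, it reduces to comparing each logit against the opposite class's batch mean. A small sketch in plain Python (function names are mine):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relativistic_accuracies(real_logits, fake_logits):
    # feed the sigmoid the same relativistic differences the
    # BCE-with-logits loss sees, then threshold at 0.5
    mean_fake = sum(fake_logits) / len(fake_logits)
    mean_real = sum(real_logits) / len(real_logits)
    real_acc = sum(sigmoid(r - mean_fake) >= 0.5 for r in real_logits) / len(real_logits)
    fake_acc = sum(sigmoid(f - mean_real) < 0.5 for f in fake_logits) / len(fake_logits)
    return real_acc, fake_acc
```

Equivalently, one can skip the sigmoid entirely and test `real_logit - mean_fake >= 0` and `fake_logit - mean_real < 0`.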
Hi, I have read your paper. It was a really interesting idea!
I've been trying to implement your paper in TensorFlow, and I wonder if my implementation is right. I'm familiar with WGAN-GP, so I tried RSGAN-GP first. Looking at the training curve of the discriminator loss, I found it fluctuating around 0.5. I wonder if this is a normal phenomenon?
Also, I wonder if the idea of RGAN extends to hybrid models, e.g. VAE-GAN, or combines with other MSE-like loss functions?
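One sanity check worth noting (not from the repo): the RSGAN discriminator loss is -E[log sigmoid(C(x_r) - C(x_f))], so when the discriminator can no longer separate paired real and fake samples (logit differences near 0), the loss sits near -log sigmoid(0) = log 2 ≈ 0.693 rather than 0.5, assuming any gradient-penalty term is logged separately. A plain-Python sketch:

```python
import math

def rsgan_d_loss(real_logits, fake_logits):
    # -E[log sigmoid(r - f)] over paired logits; log1p(exp(-z)) is the
    # stable form of -log sigmoid(z) for moderate z
    pairs = list(zip(real_logits, fake_logits))
    return sum(math.log1p(math.exp(-(r - f))) for r, f in pairs) / len(pairs)

# with indistinguishable real/fake logits the loss is exactly log(2)
print(rsgan_d_loss([0.3, -1.0], [0.3, -1.0]))  # ~0.6931
```

So a curve hovering near ~0.69 would be the "can't tell them apart" regime for this loss.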
Hi!
First, thank you for sharing the code. I appreciate it.
I have a question.
When you update G, is the gradient of D not computed?
Is there a specific reason you set requires_grad=False for D before updating G?
This is applied to all types of GAN, including the original GAN, where the original implementation does not do this.
Could you tell me how to change the dataset? Is the CelebA dataset OK? Thank you