
universome / inr-gan

[CVPR 2021] Adversarial Generation of Continuous Images

Home Page: https://universome.github.io/inr-gan

Python 69.15% C++ 1.51% Cuda 5.45% Jupyter Notebook 23.90%
cvpr2021 gan generative-model inr positional-encoding siren

inr-gan's People

Contributors: universome

inr-gan's Issues

Periodic Activation Functions

Dear Ivan,

Thank you for your great work. I really like it.

Have you tried using the periodic activation functions from SIREN? You mention SIREN's Fourier features in the paper.

Thank you for your help.

Best Wishes,

Alex
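For context, SIREN's periodic activations replace ReLU-style nonlinearities with scaled sines. Below is a minimal sine layer following SIREN's published initialization scheme; this is an illustrative sketch, not part of the inr-gan codebase, and the `w0` frequency scale and layer sizes are assumptions:

```python
import math

import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a scaled sine activation, as in SIREN."""
    def __init__(self, in_features: int, out_features: int,
                 w0: float = 30.0, is_first: bool = False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_features, out_features)
        # SIREN init: first layer uses U(-1/n, 1/n), later layers
        # use U(-sqrt(6/n)/w0, sqrt(6/n)/w0).
        with torch.no_grad():
            bound = 1.0 / in_features if is_first else math.sqrt(6.0 / in_features) / w0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.w0 * self.linear(x))

# Query a tiny sine MLP at 2D coordinates.
inr = nn.Sequential(SineLayer(2, 64, is_first=True), SineLayer(64, 64), nn.Linear(64, 3))
coords = torch.rand(16, 2) * 2 - 1  # coordinates in [-1, 1]
rgb = inr(coords)
print(rgb.shape)  # torch.Size([16, 3])
```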

Question about multi-scaling

Hi Ivan,

Your work is fantastic and I like the idea very much!

I am applying your idea in my own project but ran into some issues. Sometimes I get pixelation in parts of the images (256x256) when running your code on my dataset. Here is just an example (not my actual output) of what I mean by pixelation:

[image: pixelation-02]

I suspect this issue is caused by multi-scaling. I use the default settings for multi-scaling (starting from 32x32 and gradually increasing to 256x256). I notice that Table 1 of the paper compares results with and without multi-scaling, and that multi-scaling produces better quality. Here are my questions:

  1. Have you ever come across these pixelation issues in your experiments?
  2. How good are the output images without multi-scaling? Do you think it is worth trying?
  3. If I want to turn off multi-scaling, is there a quick way to do so based on your code?

Thank you very much for your time! I appreciate your help!
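For background, multi-scaling boils down to querying the INR on coordinate grids of increasing resolution. The repo's actual config flags are not shown here; the helper below is a hypothetical illustration of such grids:

```python
import torch

def make_coord_grid(resolution: int) -> torch.Tensor:
    """Return a (resolution, resolution, 2) grid of (x, y) coords in [-1, 1]."""
    axis = torch.linspace(-1.0, 1.0, resolution)
    ys, xs = torch.meshgrid(axis, axis, indexing="ij")
    return torch.stack([xs, ys], dim=-1)

# Progressive resolutions matching the default schedule described above (32 -> 256).
grids = {res: make_coord_grid(res) for res in (32, 64, 128, 256)}
for res, grid in grids.items():
    print(res, tuple(grid.shape))
```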

Use of PatchConcatAndResize

Hi, thanks for sharing your awesome work!

I was just wondering about the PatchConcatAndResize transform in your code. Did you use this in any of the experiments reported in the paper? In particular, I'm curious if this kind of data augmentation was needed to get the extrapolation results to work well.

Thanks!
Allan

model save and load module

I searched the codebase and found that there are no torch.save and torch.load calls.
I would like to add checkpointing and create inference scripts; could you give me some hints?
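Until dedicated inference scripts exist, the generic PyTorch checkpointing pattern looks like this. This is a sketch: the `G` module here is a small stand-in, not the repo's actual generator class:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the generator; the real one comes from the training loop.
G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))

# Save only the weights (the usual PyTorch pattern).
torch.save(G.state_dict(), "generator.pt")

# Later: rebuild the same architecture and load the weights for inference.
G_restored = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))
G_restored.load_state_dict(torch.load("generator.pt"))
G_restored.eval()
with torch.no_grad():
    out = G_restored(torch.rand(4, 2))
print(out.shape)  # torch.Size([4, 3])
```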

`ModuleNotFoundError` when importing modules in `src`.

Hi, thank you for your great work.

As intended, src/infra/launch_local.py changes the directory to experiments/my_experiment and trains the model. However, I encountered a ModuleNotFoundError at lines 24 and 25 of src/training/training_loop.py. After I changed those lines, everything works fine. The problem kept occurring even when I added experiments/my_experiment to sys.path. I wonder why this problem didn't occur in your case.
Thanks again and stay safe.

Best Wishes,
Lee
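A common stdlib-only workaround for this class of import error is to pin the project root onto sys.path before any `src.*` import, resolving it relative to the script file rather than the (changed) working directory. This is a generic sketch, not a patch verified against the repo:

```python
import os
import sys

# Resolve the project root relative to this file rather than the current
# working directory, so imports survive an os.chdir() into experiments/.
PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)
```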

Question about the super-resolution

Hi Ivan,

Thank you for the wonderful research!
Really enjoyed reading your paper (also, the recent ICCV paper was great!)

While reading the paper, I noticed that the details of the super-resolution (SR) experiments are somewhat missing.
To me, SR is not trivial in INR-GAN since it uses the MultiScale-INR.
If possible, could you share the details of the SR experiments? E.g., increasing the resolution of the first input grid...?
If you could also share the SR implementation, that would be even better!

Hope you have a nice day,
Jihoon
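For background, the basic SR mechanism available to any image INR is to query the trained network on a denser coordinate grid than it was trained on. A toy illustration (the MLP below is a placeholder, not the MultiScale-INR, and the exact SR procedure in the paper may differ):

```python
import torch
import torch.nn as nn

# Placeholder INR: maps (x, y) in [-1, 1]^2 to RGB.
inr = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))

def render(inr: nn.Module, resolution: int) -> torch.Tensor:
    """Evaluate the INR on a resolution x resolution coordinate grid."""
    axis = torch.linspace(-1.0, 1.0, resolution)
    ys, xs = torch.meshgrid(axis, axis, indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    with torch.no_grad():
        rgb = inr(coords)
    return rgb.reshape(resolution, resolution, 3)

low = render(inr, 256)   # nominal training resolution
high = render(inr, 512)  # "free" super-resolution: same weights, denser grid
print(low.shape, high.shape)
```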

multi-GPU training killed by `SIGSEGV`

This is another issue besides my last one.

When I run multi-GPU training, the processes are killed with torch.multiprocessing.spawn.ProcessExitedException: process 0 terminated with signal SIGSEGV while evaluating metrics (lines 369-380 in src/training/training_loop.py). When I comment out all of these lines, training runs without any error. Here is my brief system info.

OS: Ubuntu 20.04.2
GPUs: RTX 3090 X 2

python==3.8.10
cudatoolkit==11.1
torch==1.9.0

Here are the error messages.

Traceback (most recent call last):
  File "src/train.py", line 563, in <module>
    main() # pylint: disable=no-value-for-parameter
  File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "src/train.py", line 558, in main
    torch.multiprocessing.spawn(fn=subprocess_fn, args=(args, temp_dir), nprocs=args.num_gpus)
  File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
    while not context.join():
  File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 130, in join
    raise ProcessExitedException(
torch.multiprocessing.spawn.ProcessExitedException: process 1 terminated with signal SIGSEGV

I searched on Google but couldn't find any clue about this problem. Do you have any idea what might cause it? Thanks.

Best Wishes,
Lee

Awesome idea!

Hello fellow Generative researchers!

This is not an issue 😄. Just wanted to express my awe 😛...
Really cool idea, and thanks for the code! I'll explore your work in more detail!

Cheers 🍻!
@akanimax

p.s. This is not an issue :). Please close it at your disposal.

Does the batch size influence the size of the snapshot (.pkl)?

Hello, thanks for your great work.
I tried the experiment and found that different batch sizes change the size of the checkpoint. Does the _fourier_embs_cache item affect the snapshot size? And if so, should training and testing on the same snapshot use the same batch size?
Thanks.
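If a batch-size-dependent cache really is what inflates the pickle, one generic fix is to clear such cached tensors before snapshotting. The attribute name below is taken from the question above; whether this matches the repo's internals is unverified:

```python
import torch
import torch.nn as nn

def strip_caches(module: nn.Module, attr: str = "_fourier_embs_cache") -> None:
    """Drop batch-size-dependent cached tensors from every submodule."""
    for m in module.modules():
        if hasattr(m, attr):
            setattr(m, attr, None)

# Toy module with a cache whose size scales with batch size.
class WithCache(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(2, 8)
        self._fourier_embs_cache = torch.zeros(64, 128, 128)  # large and batch-dependent

model = WithCache()
strip_caches(model)
print(model._fourier_embs_cache)  # None
```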

Pretrained model

First of all, thank you for such interesting work! Could you share the weights of the FFHQ model so we can try it?

Fourier feature sampling?

Hello, thank you for the nice work!

It seems the Fourier feature matrix is "fixed" in the current implementation (with the default config) rather than "sampled" for each image, as mentioned in the paper.
Do I misunderstand some details regarding your paper or implementation?

Sincerely,
Sihyun
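For reference, the distinction as I understand it: a "fixed" matrix B is drawn once at initialization and reused for every image, whereas "sampled" features draw a fresh B per image. A generic Fourier-feature sketch (not the repo's implementation; the scale 10.0 and feature count are arbitrary):

```python
import math

import torch

def fourier_features(coords: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Map coords (N, 2) through random frequencies B (2, F) -> features (N, 2F)."""
    proj = 2.0 * math.pi * coords @ B
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

coords = torch.rand(16, 2)  # pixel coordinates in [0, 1]

# "Fixed": B is drawn once (e.g. at init) and reused for every image.
B_fixed = torch.randn(2, 32) * 10.0

# "Sampled": a fresh B would instead be drawn here for each new image batch.
B_sampled = torch.randn(2, 32) * 10.0

feats = fourier_features(coords, B_fixed)
print(feats.shape)  # torch.Size([16, 64])
```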

Question about the paper

Thank you for the awesome work, and I really enjoyed reading it!

I have some questions while reading your paper.

In Section 4.2, you have explored the properties of the implicit neural representation (INR).

Does this property hold for any INR (assuming the INRs share the same architecture), or only for INRs generated by your framework?
In particular, the meaningful interpolation (Figure 5) was a truly surprising observation; does it also hold for naively trained INRs?

Best,
Jihoon
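One way to probe this question on naively trained INRs is weight-space interpolation: for two INRs with identical architectures, linearly blend their state dicts and render the result. A sketch (assuming plain MLP INRs; this may differ from how Figure 5 was produced):

```python
import torch
import torch.nn as nn

def lerp_state_dicts(sd_a: dict, sd_b: dict, alpha: float) -> dict:
    """Linearly interpolate two state dicts with identical keys and shapes."""
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}

def make_inr() -> nn.Module:
    # Toy INR: (x, y) -> RGB.
    return nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 3))

inr_a, inr_b = make_inr(), make_inr()
inr_mid = make_inr()
inr_mid.load_state_dict(lerp_state_dicts(inr_a.state_dict(), inr_b.state_dict(), 0.5))

coords = torch.rand(8, 2)
print(inr_mid(coords).shape)  # torch.Size([8, 3])
```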
