universome / inr-gan
[CVPR 2021] Adversarial Generation of Continuous Images
Home Page: https://universome.github.io/inr-gan
Dear Ivan,
Thank you for your great work. I really like it.
Have you tried using periodic activation functions from SIREN? You mention the Fourier features from SIREN in the paper.
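For context, a SIREN-style layer with the initialization from its paper would look roughly like this (my own sketch, not code from this repo):

```python
import numpy as np
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """SIREN-style layer: a linear map followed by sin(w0 * x)."""
    def __init__(self, in_features: int, out_features: int,
                 w0: float = 30.0, is_first: bool = False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_features, out_features)
        # SIREN's initialization keeps activations well-distributed across depth.
        bound = 1.0 / in_features if is_first else np.sqrt(6.0 / in_features) / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.w0 * self.linear(coords))

layer = SineLayer(2, 64, is_first=True)
out = layer(torch.rand(1024, 2))  # 1024 (x, y) coordinates -> (1024, 64) features
```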
Thank you for your help.
Best Wishes,
Alex
Hi Ivan,
Your work is fantastic and I like the idea very much!
I am applying your idea in my project but ran into some issues. Sometimes I get pixelation in parts of the images (256x256) when I run your code on my dataset; here is just an example (not my actual output) of what I mean by pixelation:
I suspect this issue is caused by the multi-scale setup. I use the default multi-scale settings (starting from 32x32 and gradually increasing to 256x256). I notice that in Table 1 of the paper you compare results with and without the multi-scaling part, and multi-scaling produces better quality. Here are my questions:
Thank you very much for your time! I appreciate your help!
Hi, thanks for sharing your awesome work!
I was just wondering about the PatchConcatAndResize transform in your code. Did you use this in any of the experiments reported in the paper? In particular, I'm curious if this kind of data augmentation was needed to get the extrapolation results to work well.
Thanks!
Allan
I searched all the code and found there are no torch.save and torch.load calls.
I would like to add checkpoint saving/loading and create an inference script; could you give me some hints?
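To be concrete, here is the kind of sketch I have in mind; `G_ema` (the moving-average generator) and `z_dim` are my assumptions about the training objects, and the `(z, class labels)` call signature follows the StyleGAN2-ada convention:

```python
import torch

def save_checkpoint(G_ema, path: str):
    """Save just the generator weights (hypothetical helper)."""
    torch.save({'G_ema': G_ema.state_dict()}, path)

def load_and_sample(G_ema, path: str, num_samples: int = 4, z_dim: int = 512):
    """Restore weights into a freshly-built generator and sample images."""
    ckpt = torch.load(path, map_location='cpu')
    G_ema.load_state_dict(ckpt['G_ema'])
    G_ema.eval()
    with torch.no_grad():
        z = torch.randn(num_samples, z_dim)
        imgs = G_ema(z, None)  # StyleGAN2-ada generators take (z, class labels)
    return imgs
```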
Hi. thank you for your great work.
As you intended, src/infra/launch_local.py changes the directory to experiments/my_experiment and trains the model. However, I encountered a ModuleNotFoundError at lines 24 and 25 of src/training/training_loop.py. After I changed those lines, everything works fine. The problem kept occurring even when I added experiments/my_experiment to sys.path. I wonder why this problem didn't occur in your case.
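For reference, this is the workaround I tried (a sketch; the directory name matches my experiment, so adjust it to yours):

```python
import os
import sys

# Prepend the copied experiment directory so that imports like
# `import src.training...` resolve inside it rather than elsewhere.
project_dir = os.path.join('experiments', 'my_experiment')
if project_dir not in sys.path:
    sys.path.insert(0, project_dir)  # insert(0, ...) wins over conflicting installs
```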
Thanks again and stay safe.
Best Wishes,
Lee
Hi Ivan,
Thank you for the wonderful research!
Really enjoyed reading your paper (also, the recent ICCV paper was great!)
While reading the paper, I noticed that the details of the super-resolution (SR) experiments are somewhat missing.
To me, SR is not trivial in INR-GAN since it utilizes the multi-scale INR.
If possible, could you share the details of the SR experiments? E.g., do you increase the resolution of the first input coordinate grid...?
If you could also share the SR implementation, that would be even better!
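To illustrate my naive reading: I would simply feed the same INR a denser coordinate grid, along these lines (my own sketch, not code from the repo):

```python
import torch

def make_coord_grid(h: int, w: int) -> torch.Tensor:
    """Dense (x, y) coordinates covering [0, 1]^2, shape (h * w, 2)."""
    ys = torch.linspace(0.0, 1.0, h)
    xs = torch.linspace(0.0, 1.0, w)
    gy, gx = torch.meshgrid(ys, xs, indexing='ij')
    return torch.stack([gx, gy], dim=-1).reshape(-1, 2)

# Train-resolution grid vs. a 4x denser grid fed to the same INR for SR.
grid_256 = make_coord_grid(256, 256)     # (65536, 2)
grid_1024 = make_coord_grid(1024, 1024)  # (1048576, 2)
```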
Hope you have a nice day,
Jihoon
This is another issue besides my last one.
When I run multi-GPU training, the processes are killed with torch.multiprocessing.spawn.ProcessExitedException: process 0 terminated with signal SIGSEGV while evaluating metrics (lines 369 to 380 of src/training/training_loop.py). When I comment out all these lines, training runs fine without any error. Here is my brief system info.
OS: Ubuntu 20.04.2
GPUs: RTX 3090 X 2
python==3.8.10
cudatoolkit==11.1
torch==1.9.0
Here are the error messages.
Traceback (most recent call last):
File "src/train.py", line 563, in <module>
main() # pylint: disable=no-value-for-parameter
File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "src/train.py", line 558, in main
torch.multiprocessing.spawn(fn=subprocess_fn, args=(args, temp_dir), nprocs=args.num_gpus)
File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/home/user/miniconda3/envs/inr-gan/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 130, in join
raise ProcessExitedException(
torch.multiprocessing.spawn.ProcessExitedException: process 1 terminated with signal SIGSEGV
I searched Google for a solution but couldn't find any clue. Do you have any idea what causes this? Thanks.
Best Wishes,
Lee
How can I use the offered pretrained models (like ffhq.pkl) and run inference with the architectures described in the paper?
Hello fellow Generative researchers!
This is not an issue. Just wanted to express my awe...
Really cool idea, and thanks for the code! I'll explore your work in more details!
Cheers!
@akanimax
p.s. This is not an issue :). Please close it at your disposal.
Hello, thanks for your great work.
I tried the experiment and found that different batch sizes change the size of the checkpoint. Does the _fourier_embs_cache item affect the snapshot size? And if so, do training and testing on the same snapshot need to use the same batch size?
Thanks.
First of all, thank you for such an interesting work! Can you share the weights for FFHQ model to try it?
Following https://vision-cair.s3.amazonaws.com/inr-gan/checkpoints/ffhq.pkl leads to an access denied error.
Currently the code cannot be used because of a module rename; could you please fix this?
Hello, thank you for the nice work!
It seems the Fourier feature matrix is "fixed" in the current implementation (with the default config) rather than "sampled" for each image, as described in the paper.
Do I misunderstand some details regarding your paper or implementation?
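To clarify what I mean, my reading of "sampled per image" would be something like this sketch, where the matrix B is freshly drawn on every call (my own code, not the repo's implementation):

```python
import math
import torch

def sample_fourier_features(coords: torch.Tensor,
                            num_feats: int = 64,
                            scale: float = 10.0) -> torch.Tensor:
    """Random Fourier embedding with a freshly sampled projection matrix."""
    B = torch.randn(coords.shape[-1], num_feats) * scale  # new matrix each call
    proj = 2 * math.pi * coords @ B
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

coords = torch.rand(1024, 2)
feats = sample_fourier_features(coords)  # (1024, 128)
```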
Sincerely,
Sihyun
Thank you for the awesome work, and I really enjoyed reading it!
I have some questions while reading your paper.
In Section 4.2, you explore the properties of implicit neural representations (INRs).
Does this property hold for any INR (assuming it shares the same architecture), or only for INRs generated by your framework?
In particular, the meaningful interpolation (Figure 5) was a truly surprising observation; does it also hold for a naively trained INR?
Best,
Jihoon