nerdyrodent / clip-guided-diffusion
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use colab.
License: Other
Awesome repo. Thank you. Ran in Google Colab with no issue. Figured I would try running locally on my NVIDIA Jetson Xavier SBC. Got all dependencies to work except guided diffusion. Any ideas? TIA.
Obtaining file:///media/dennis/64GB_SD_EXT4/AI_Art/guided-diffusion
Preparing metadata (setup.py) ... done
ERROR: Could not find a version that satisfies the requirement blobfile>=1.0.5 (from guided-diffusion) (from versions: 0.1, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.5, 0.5.0, 0.6.1, 0.7.0, 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0)
ERROR: No matching distribution found for blobfile>=1.0.5
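(The version list capping out at 0.11.0 suggests the Jetson's system Python is too old for the blobfile 1.x wheels. One possible, untested workaround is to relax the pin in guided-diffusion's setup.py, assuming the older blobfile releases still provide the BlobFile API that guided-diffusion calls. A minimal sketch of the edited file:)

```python
# guided-diffusion setup.py with the blobfile pin relaxed - a hedged sketch,
# assuming blobfile 0.x still exposes the API guided-diffusion uses;
# untested on Jetson hardware.
from setuptools import setup

setup(
    name="guided-diffusion",
    py_modules=["guided_diffusion"],
    install_requires=[
        "blobfile>=0.11.0",  # relaxed from >=1.0.5, which has no wheel here
        "torch",
        "tqdm",
    ],
)
```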
I get the following error both in Katherine Crowson's code and when running your code as specified in the README.
I started with the following after editing device = "mps" (rather than cuda).
% python generate_diffuse.py -p "A painting of an apple"
Device: mps
Size: 256
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
...and then, further down at the end:
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
Do you know of a solution? Note that gaussian_diffusion.py and resample.py contain 'float64', but when I tried editing KC's original notebook, changing these to float32 did not solve the dtype problem.
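(For reference, a hedged sketch of one workaround: in guided-diffusion, _extract_into_tensor in gaussian_diffusion.py moves a float64 numpy schedule onto the device before downcasting, which is exactly the step MPS rejects. Casting to float32 on the CPU side first, rather than only replacing the 'float64' strings, may avoid the error:)

```python
# gaussian_diffusion.py - hedged MPS workaround. The beta/alpha schedules
# are float64 numpy arrays, so th.from_numpy(arr) yields a float64 tensor,
# and moving that tensor to the MPS device is what fails. Cast first.
import numpy as np
import torch as th

def _extract_into_tensor(arr, timesteps, broadcast_shape):
    # float32 on the CPU first, then move to the (possibly MPS) device
    res = th.from_numpy(arr.astype(np.float32)).to(device=timesteps.device)[timesteps]
    while len(res.shape) < len(broadcast_shape):
        res = res[..., None]
    return res.expand(broadcast_shape)
```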
Thanks for your repo! I want to know what the following equation means: x_in = out['pred_xstart'] * fac + x * (1 - fac)
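(For what it's worth: in Katherine Crowson's notebooks, which this repo follows, fac = diffusion.sqrt_one_minus_alphas_cumprod[cur_t], so the line is a plain linear interpolation between the current noisy sample and the model's one-step estimate of the clean image. A hedged reading as a small sketch, with a function name of my own:)

```python
# Hedged reading of the line: a lerp between x (the current, still-noisy
# sample) and out['pred_xstart'] (the model's one-step guess at the final
# clean image), weighted by how much noise remains at this timestep.
def blend_for_clip(x, pred_xstart, fac):
    # fac ~ sqrt(1 - alpha_bar_t): near 1 early in sampling (heavy noise),
    # so CLIP mostly sees the denoised guess; near 0 late, so it sees x.
    return pred_xstart * fac + x * (1 - fac)
```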
I'm getting this error after entering this command: python generate_diffuse.py -p "A painting of an apple"
(cgd) C:\Users\Computer\CLIP-Guided-Diffusion>python generate_diffuse.py -p "A painting of an apple"
Traceback (most recent call last):
File "C:\Users\Computer\CLIP-Guided-Diffusion\generate_diffuse.py", line 40, in
from IPython import display
ModuleNotFoundError: No module named 'IPython'
What should I do to solve this error?
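(One possible fix, a sketch assuming the import is only used for inline previews in notebooks: either install IPython into the cgd environment with pip, or guard the import so the script runs without it:)

```python
# generate_diffuse.py - hedged workaround. IPython is only needed for
# inline image display inside a notebook; locally it can be installed
# (pip install ipython) or made optional as below.
try:
    from IPython import display
except ImportError:
    display = None  # not in a notebook; skip inline previews
```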
Looks like the S3 bucket has been taken down
Hello... I was wondering, how is this project different from Big Sleep?
Does it improve on it in some way?
I am curious because I was trying to find projects similar to Big Sleep; unfortunately my GPU's VRAM is not enough to run it, so I was looking for a different, kind of more "modest" implementation... :)
When I say that I'm low on VRAM I mean desperately low
(2 GB!) BUT! Because I successfully managed to run Deep Daze, which works similarly using CLIP with a SIREN in place of a BigGAN, I haven't lost all hope yet...! :)
So yeah, if you have any pointers about your implementation and how it differs from Big Sleep, and most importantly whether there could be a way to tune it to work with such a low amount of VRAM (even at ridiculously low resolutions, it doesn't matter), that would be hugely appreciated!
Also, yes, I know I could use Google Colab! But witnessing the ML magic happening right within your machine makes for a totally different experience... ;)
And yeah, I want to get a new, decent GPU as soon as possible! But I'd feel so stupid paying 3 times its actual cost just because the IT hardware market is fucked up (and keeps staying that way...)
So...
Now you know all :)
Thank you for your open source code : ) but I failed to download 512x512_diffusion_uncond_finetune_008100.pt. I noticed that in the corresponding Colab the model's download command has been updated to:
curl -OL https://v-diffusion.s3.us-west-2.amazonaws.com/512x512_diffusion_uncond_finetune_008100.pt
Do you need to update the README or setup.sh?
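(As a stopgap, the checkpoint can be fetched directly from the URL quoted above; a minimal Python sketch, standing in for the curl line in setup.sh:)

```python
# Hedged sketch: download the checkpoint from the updated S3 URL quoted in
# the issue above, equivalent to the curl -OL command.
import urllib.request

MODEL_URL = ("https://v-diffusion.s3.us-west-2.amazonaws.com/"
             "512x512_diffusion_uncond_finetune_008100.pt")
urllib.request.urlretrieve(MODEL_URL, "512x512_diffusion_uncond_finetune_008100.pt")
```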
Hello,
I have finished installing CLIP-Guided-Diffusion, but when I run it, this error happens:
Device: cpu
Size: 256
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
Loading model from: C:\Users\XXXXXX\Anaconda3\envs\cgd\lib\site-packages\lpips\weights\v0.1\vgg.pth
Seed: 1221123546082200
Text prompt: A rope tied in a figure-eight knot
0%| | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\XXXXXX\CLIP-Guided-Diffusion\generate_diffuse.py", line 460, in
do_run()
File "C:\Users\XXXXXX\CLIP-Guided-Diffusion\generate_diffuse.py", line 359, in do_run
for j, sample in enumerate(samples):
File "c:\users\XXXXXX\guided-diffusion\guided_diffusion\gaussian_diffusion.py", line 637, in p_sample_loop_progressive
out = sample_fn(
File "c:\users\XXXXXXX\guided-diffusion\guided_diffusion\gaussian_diffusion.py", line 461, in p_sample
out = self.p_mean_variance(
File "c:\users\XXXXXXX\guided-diffusion\guided_diffusion\respace.py", line 91, in p_mean_variance
return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
File "c:\users\XXXXXX\guided-diffusion\guided_diffusion\gaussian_diffusion.py", line 260, in p_mean_variance
model_output = model(x, self._scale_timesteps(t), **model_kwargs)
File "c:\users\XXXXXX\guided-diffusion\guided_diffusion\respace.py", line 128, in call
return self.model(x, new_ts, **kwargs)
File "C:\Users\XXXXXX\Anaconda3\envs\cgd\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "c:\users\XXXXX\guided-diffusion\guided_diffusion\unet.py", line 656, in forward
h = module(h, emb)
File "C:\Users\XXXXXX\Anaconda3\envs\cgd\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "c:\users\XXXXXX\guided-diffusion\guided_diffusion\unet.py", line 77, in forward
x = layer(x)
File "C:\Users\XXXXXX\Anaconda3\envs\cgd\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\XXXXXX\Anaconda3\envs\cgd\lib\site-packages\torch\nn\modules\conv.py", line 443, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\XXXXXX\Anaconda3\envs\cgd\lib\site-packages\torch\nn\modules\conv.py", line 439, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: "unfolded2d_copy" not implemented for 'Half'
Any help would be appreciated.
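(A hedged guess at a fix: the trace shows half-precision convolutions running on the CPU, and PyTorch's CPU kernels don't implement float16 convolution, which is what "unfolded2d_copy not implemented for 'Half'" means. Building the model in full precision may get past this. The config keys below follow Katherine Crowson's notebook, which generate_diffuse.py is based on; the checkpoint filename is the 256px OpenAI model and is an assumption:)

```python
# Hedged CPU workaround: force full precision when sampling on the CPU,
# since fp16 convolutions are CUDA-only. Config keys follow Katherine
# Crowson's CLIP Guided Diffusion notebook; exact names in this repo's
# generate_diffuse.py may vary.
import torch
from guided_diffusion.script_util import (
    create_model_and_diffusion,
    model_and_diffusion_defaults,
)

device = torch.device('cpu')
model_config = model_and_diffusion_defaults()
model_config.update({
    'attention_resolutions': '32, 16, 8',
    'class_cond': False,
    'diffusion_steps': 1000,
    'image_size': 256,
    'learn_sigma': True,
    'noise_schedule': 'linear',
    'num_channels': 256,
    'num_head_channels': 64,
    'num_res_blocks': 2,
    'resblock_updown': True,
    'rescale_timesteps': True,
    'timestep_respacing': '1000',
    'use_scale_shift_norm': True,
    'use_fp16': False,  # the crucial change: fp16 convolutions are CUDA-only
})
model, diffusion = create_model_and_diffusion(**model_config)
model.load_state_dict(torch.load('256x256_diffusion_uncond.pt', map_location='cpu'))
model.requires_grad_(False).eval().to(device)
if model_config['use_fp16']:
    model.convert_to_fp16()  # skipped on CPU, avoiding the 'Half' error
```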
Hi:
Thanks for your excellent work,
Would you please provide the training script or code for me?
My email is: [email protected]
Thank you very much!
Hello! Just noticed that the first link doesn't work. The model is, however, available on Hugging Face.