karlo's Issues

Can anybody help add a text prompt to the I2I function?

I noticed that the function in i2i.py can receive a prompt, but the gradio UI only has an image input.
I tried to add an argument for it, but I'm not good with gradio and failed. Can anybody help with that?

Also, batch_size is not a good option to increase: even a single batch needs a lot of CUDA memory, up to 23 GB on an NVIDIA 3090.
If there were a batch count option instead, the model would be perfect.
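
Not the demo's actual code, but a minimal gradio sketch of both requests (sample_i2i is a hypothetical stand-in for whatever i2i.py exposes): a text prompt input plus a batch-count loop that runs batches one at a time, so only a single batch occupies CUDA memory at once.

    import gradio as gr

    def generate(image, prompt, batch_count):
        # sample_i2i is a hypothetical stand-in for the sampler in i2i.py.
        outputs = []
        for _ in range(int(batch_count)):
            # Sequential batches: peak VRAM stays at batch_size=1.
            outputs.extend(sample_i2i(image=image, prompt=prompt, batch_size=1))
        return outputs

    demo = gr.Interface(
        fn=generate,
        inputs=[
            gr.Image(type="pil"),
            gr.Textbox(label="prompt"),
            gr.Slider(1, 8, value=1, step=1, label="batch count"),
        ],
        outputs=gr.Gallery(),
    )
    demo.launch()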

NaN outputs

Hi!

Thanks a lot for your work on this, it's great.

However, I'm having trouble running the example.py script. All I get are tensors full of NaNs returned on line 66 of example.py. Have you seen such an error before, and do you have an idea how I might fix this?

Thanks!
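
Not a fix, but a quick way to narrow this down: all-NaN outputs frequently come from half-precision inference on GPUs whose fp16 matmuls overflow, so checking for NaNs and retrying in float32 is a reasonable first step. A sketch, where images stands in for the tensor returned on line 66 and model for the module producing it (both names hypothetical):

    import torch

    # `images` stands in for the tensor returned on line 66 of example.py.
    print("contains NaN:", torch.isnan(images).any().item())

    # If NaNs appear only under half precision, retrying in float32
    # (at roughly double the memory cost) helps isolate the cause.
    model = model.float()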

Dynamic Height / Width

Fantastic library - really appreciate all the work you all have done to provide such an amazing tool.

I have been poking around a bit with image interpolation and was curious if there was a path to using this model as a means of generating images of various sizes (instead of just 256x256).

I thought I would be able to just hard-code a few parameters (e.g. decoder_latents, super_res_latents), but when I do this, I get something along the lines of:

Internal server error with unclip_images: Unexpected latents shape, got torch.Size([12, 3, 512, 512]), expected (12, 3, 256, 256)

This is because whatever you pass in is expected to match the UNet2DModels passed into super_res_first and super_res_last. This leads me to believe I must be misunderstanding something, since it's unclear why these parameters would even be included if they are just going to be checked against the related models anyway.

Any insight here is greatly appreciated.
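
For what it's worth, a sketch of why the check fires, assuming the diffusers UnCLIPPipeline API and that I read the configs right: the expected latent sizes are taken from the UNet configs at call time, so passing differently shaped latents cannot change the output resolution; only resizing after generation avoids swapping in different UNets.

    from diffusers import UnCLIPPipeline

    pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha")

    # The expected latent sizes come from the UNet configs, not the caller,
    # which is why hard-coded 512x512 latents are rejected.
    print(pipe.decoder.config.sample_size)          # presumably 64
    print(pipe.super_res_first.config.sample_size)  # presumably 256

    # Other resolutions therefore have to come from resizing afterwards:
    image = pipe("a photo of a cat").images[0].resize((512, 512))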

RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

I set os.environ["CUDA_VISIBLE_DEVICES"]="1" in components.py to use my second GPU, and when I try to create an image via the gradio interface I get:
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

Running
set CUDA_VISIBLE_DEVICES=1
in the Anaconda prompt before launching the script doesn't fix the error either.

And calling torch.cuda.set_device(1) right after the torch import causes:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument index in method wrapper__index_select)

I've changed "cpu" to "cuda" and .cpu() to .cuda() wherever they appear in the repo files, but the error is still thrown when I try to run the model
(gradio started via python demo/product_demo.py --host 0.0.0.0 --port 8085 --root-dir .)

Any suggestions as to what I should change to get everything on the specified GPU?
Ideally solutions that would fit under 24 GB of VRAM.


edit:
I tried

set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

with

# Build the timestep map as a tensor and keep it on the GPU so that
# indexing it with CUDA index tensors no longer mixes devices
# (th is the repo's torch alias).
timestep_map_tensor = th.tensor(timestep_map)
cuda_device = th.device("cuda")
timestep_map_tensor = timestep_map_tensor.to(cuda_device)
self.register_buffer("timestep_map", timestep_map_tensor, persistent=False)

in respace.py, and was able to load things, and the image started generating :)
but then it ran out of VRAM.


Full output:

Exception in thread Thread-2 (_sample):
Traceback (most recent call last):
  File "C:\Users\Jason\.conda\envs\karlo\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\Jason\.conda\envs\karlo\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Jason\Documents\machine_learning\image_ML\karlo\demo\components.py", line 174, in _sample
    for k, out in enumerate(output_generator):
  File "C:\Users\Jason\Documents\machine_learning\image_ML\karlo\karlo\sampler\t2i.py", line 116, in __call__
    img_feat = self._prior(
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jason\Documents\machine_learning\image_ML\karlo\karlo\models\prior_model.py", line 127, in forward
    sample = sample_fn(
  File "C:\Users\Jason\Documents\machine_learning\image_ML\karlo\karlo\modules\diffusion\gaussian_diffusion.py", line 533, in p_sample_loop
    for sample in self.p_sample_loop_progressive(
  File "C:\Users\Jason\Documents\machine_learning\image_ML\karlo\karlo\modules\diffusion\gaussian_diffusion.py", line 584, in p_sample_loop_progressive
    out = self.p_sample(
  File "C:\Users\Jason\Documents\machine_learning\image_ML\karlo\karlo\modules\diffusion\gaussian_diffusion.py", line 483, in p_sample
    out = self.p_mean_variance(
  File "C:\Users\Jason\Documents\machine_learning\image_ML\karlo\karlo\modules\diffusion\respace.py", line 97, in p_mean_variance
    return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
  File "C:\Users\Jason\Documents\machine_learning\image_ML\karlo\karlo\modules\diffusion\gaussian_diffusion.py", line 338, in p_mean_variance
    model_output = model(x, t, **model_kwargs)
  File "C:\Users\Jason\Documents\machine_learning\image_ML\karlo\karlo\modules\diffusion\respace.py", line 108, in wrapped
    x, self.timestep_map[ts].to(device=ts.device, dtype=ts.dtype), **kwargs
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
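
For anyone hitting the same error: the failing line indexes the CPU buffer self.timestep_map with a CUDA tensor ts. A minimal sketch of the usual fix in respace.py's wrapped call (the same line as in the traceback), moving the map to the index's device before indexing:

    # In respace.py's wrapped model call: put the map on ts's device first,
    # then index, instead of indexing a CPU buffer with CUDA indices.
    new_ts = self.timestep_map.to(ts.device)[ts].to(dtype=ts.dtype)
    return model(x, new_ts, **kwargs)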

(Error) CUDA out of memory. Asking for advice.

I am a beginner, so please understand if my question lacks detail.
While running the example, a "CUDA out of memory." error occurred. I would appreciate any advice on how to resolve it.

(Environment)

-ubuntu 18.04
-GPU (NVIDIA GeForce RTX 3090, 24 GB memory)
-python 3.8
-black 22.6.0
-pytorch 1.10.0 / 1.12.1 (tried both versions; same result)
-torchvision 0.11.0 / 0.13.1 (tried both versions; same result)
-einops 0.6.0
-omegaconf 2.2.3
-matplotlib 3.3.4
-gradio 3.12.0

(Running the example)

> python demo/product_demo.py --host 127.0.0.1 --port 6021 --root-dir /home/jyseo/project/kakaobrain_karlo/karlo

(Example output)
/home/jyseo/miniconda2/envs/mapnet_py38/lib/python3.8/site-packages/torch/serialization.py:707: UserWarning: 'torch.load' received a zip file that looks like a TorchScript archive dispatching to 'torch.jit.load' (call 'torch.jit.load' directly to silence this warning)
warnings.warn("'torch.load' received a zip file that looks like a TorchScript archive"
INFO:root:Loading prior: prior-ckpt-step=01000000-of-01000000.ckpt
INFO:root:done.
INFO:root:Loading decoder: decoder-ckpt-step=01000000-of-01000000.ckpt
INFO:root:done.
INFO:root:Loading SR(64->256): improved-sr-ckpt-step=1.2M.ckpt
INFO:root:done.
Running on local URL: http://127.0.0.1:6021

To create a public link, set share=True in launch().

(Error)

text_input: a black porcelain in the shape of pikachu
prior_sm: 25
prior_cf_scale: 4
decoder_sm: 25
decoder_cf_scale: 8
sr_sm: 7
seed: 0
max_bsz: 4
Exception in thread Thread-2:
Traceback (most recent call last):
File "/home/jyseo/miniconda2/envs/mapnet_py38/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/home/jyseo/miniconda2/envs/mapnet_py38/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/jyseo/project/kakaobrain_karlo/karlo/demo/components.py", line 171, in _sample
for k, out in enumerate(output_generator):
File "/home/jyseo/project/kakaobrain_karlo/karlo/karlo/sampler/t2i.py", line 150, in call
for k, out in enumerate(images_256_outputs):
File "/home/jyseo/project/kakaobrain_karlo/karlo/karlo/models/sr_64_256.py", line 86, in forward
for x in sample_outputs:
File "/home/jyseo/project/kakaobrain_karlo/karlo/karlo/modules/diffusion/gaussian_diffusion.py", line 631, in p_sample_loop_progressive_for_improved_sr
out = self.p_sample(
File "/home/jyseo/project/kakaobrain_karlo/karlo/karlo/modules/diffusion/gaussian_diffusion.py", line 483, in p_sample
out = self.p_mean_variance(
File "/home/jyseo/project/kakaobrain_karlo/karlo/karlo/modules/diffusion/respace.py", line 97, in p_mean_variance
return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
File "/home/jyseo/project/kakaobrain_karlo/karlo/karlo/modules/diffusion/gaussian_diffusion.py", line 338, in p_mean_variance
model_output = model(x, t, **model_kwargs)
File "/home/jyseo/project/kakaobrain_karlo/karlo/karlo/modules/diffusion/respace.py", line 107, in wrapped
return model(
File "/home/jyseo/miniconda2/envs/mapnet_py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jyseo/project/kakaobrain_karlo/karlo/karlo/modules/unet.py", line 691, in forward
return super().forward(x, timesteps, **kwargs)
File "/home/jyseo/project/kakaobrain_karlo/karlo/karlo/modules/unet.py", line 665, in forward
h = module(h, emb)
File "/home/jyseo/miniconda2/envs/mapnet_py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jyseo/project/kakaobrain_karlo/karlo/karlo/modules/unet.py", line 44, in forward
x = layer(x, emb)
File "/home/jyseo/miniconda2/envs/mapnet_py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jyseo/project/kakaobrain_karlo/karlo/karlo/modules/unet.py", line 223, in forward
h = self.out_layers(h)
File "/home/jyseo/miniconda2/envs/mapnet_py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jyseo/miniconda2/envs/mapnet_py38/lib/python3.8/site-packages/torch/nn/modules/container.py", line 139, in forward
input = module(input)
File "/home/jyseo/miniconda2/envs/mapnet_py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jyseo/project/kakaobrain_karlo/karlo/karlo/modules/nn.py", line 18, in forward
y = super().forward(x.float()).to(x.dtype)
RuntimeError: CUDA out of memory. Tried to allocate 640.00 MiB (GPU 0; 23.70 GiB total capacity; 20.26 GiB already allocated; 635.88 MiB free; 21.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

(Error message)

RuntimeError: CUDA out of memory. Tried to allocate 640.00 MiB (GPU 0; 23.70 GiB total capacity; 20.26 GiB already allocated; 635.88 MiB free; 21.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

(Workaround) --> modified karlo/sampler/t2i.py: (before) [256, 256] --> (after) [128, 128]

        """ Upsample 64x64 to 256x256 """
        images_256 = TVF.resize(
            images_64,
            #(기존)[256, 256],  
            #(변경)
            [128, 128],
            interpolation=InterpolationMode.BICUBIC,
            antialias=True,
        )

(Result of the workaround) --> no error, but the output is incomplete

python demo/product_demo.py --host 127.0.0.1 --port 6023 --root-dir /home/jyseo/project/kakaobrain_karlo/karlo
/home/jyseo/miniconda2/envs/mapnet_py38/lib/python3.8/site-packages/torch/serialization.py:707: UserWarning: 'torch.load' received a zip file that looks like a TorchScript archive dispatching to 'torch.jit.load' (call 'torch.jit.load' directly to silence this warning)
warnings.warn("'torch.load' received a zip file that looks like a TorchScript archive"
INFO:root:Loading prior: prior-ckpt-step=01000000-of-01000000.ckpt
INFO:root:done.
INFO:root:Loading decoder: decoder-ckpt-step=01000000-of-01000000.ckpt
INFO:root:done.
INFO:root:Loading SR(64->256): improved-sr-ckpt-step=1.2M.ckpt
INFO:root:done.
Running on local URL: http://127.0.0.1:6023

To create a public link, set share=True in launch().


text_input: a black porcelain in the shape of pikachu
prior_sm: 25
prior_cf_scale: 4
decoder_sm: 25
decoder_cf_scale: 8
sr_sm: 7
seed: 0
max_bsz: 4
INFO:root:Generation done. a black porcelain in the shape of pikachu -- 7.740963secs


The output quality is low.
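
A note on the workaround above: resizing to [128, 128] feeds the SR stage half-size inputs it was never trained on, which is why the output looks unfinished. A less destructive path is to cut peak memory instead; here is a sketch using only the allocator hint from the error message and the demo's max_bsz input:

    import os

    # Apply the allocator hint from the error message before torch is imported.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

    import torch

    # Lowering max_bsz in the demo UI (e.g. 4 -> 1) reduces peak usage the most;
    # releasing cached blocks between runs can also help on a 24 GB card.
    torch.cuda.empty_cache()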

RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu). And does anybody know how to generate on multiple GPUs?

When I run
CUDA_VISIBLE_DEVICES=0 python demo/product_demo.py --host 0.0.0.0 --port 9870 --root-dir ./models/
(env: python 3.10.8, torch==1.13.0+cu116)
I can open the gradio UI, but when I generate, the error below happens.

But when I switch to another env (python 3.8.13, torch==1.12.1+cu113), it works but runs out of memory. It seems 24 GB is not enough for this model.
I have four NVIDIA 3090s, so is there any way to generate on multiple GPUs? (See the sketch after the traceback below.)

/root/anaconda3/envs/py310/lib/python3.10/site-packages/torch/serialization.py:779: UserWarning: 'torch.load' received a zip file that looks like a TorchScript archive dispatching to 'torch.jit.load' (call 'torch.jit.load' directly to silence this warning)
warnings.warn("'torch.load' received a zip file that looks like a TorchScript archive"
INFO:root:Loading prior: prior-ckpt-step=01000000-of-01000000.ckpt
INFO:root:done.
INFO:root:Loading decoder: decoder-ckpt-step=01000000-of-01000000.ckpt
INFO:root:done.
INFO:root:Loading SR(64->256): improved-sr-ckpt-step=1.2M.ckpt
INFO:root:done.
Running on local URL: http://0.0.0.0:9870

To create a public link, set share=True in launch().

text_input: a dog
prior_sm: 25
prior_cf_scale: 4
decoder_sm: 25
decoder_cf_scale: 8
sr_sm: 7
seed: 0
max_bsz: 4
Exception in thread Thread-2 (_sample):
Traceback (most recent call last):
File "/root/anaconda3/envs/py310/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/root/anaconda3/envs/py310/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/tongange/karlo-main/demo/components.py", line 171, in _sample
for k, out in enumerate(output_generator):
File "/home/tongange/karlo-main/karlo/sampler/t2i.py", line 109, in call
img_feat = self._prior(
File "/root/anaconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tongange/karlo-main/karlo/models/prior_model.py", line 120, in forward
sample = sample_fn(
File "/home/tongange/karlo-main/karlo/modules/diffusion/gaussian_diffusion.py", line 533, in p_sample_loop
for sample in self.p_sample_loop_progressive(
File "/home/tongange/karlo-main/karlo/modules/diffusion/gaussian_diffusion.py", line 584, in p_sample_loop_progressive
out = self.p_sample(
File "/home/tongange/karlo-main/karlo/modules/diffusion/gaussian_diffusion.py", line 483, in p_sample
out = self.p_mean_variance(
File "/home/tongange/karlo-main/karlo/modules/diffusion/respace.py", line 97, in p_mean_variance
return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
File "/home/tongange/karlo-main/karlo/modules/diffusion/gaussian_diffusion.py", line 338, in p_mean_variance
model_output = model(x, t, **model_kwargs)
File "/home/tongange/karlo-main/karlo/modules/diffusion/respace.py", line 108, in wrapped
x, self.timestep_map[ts].to(device=ts.device, dtype=ts.dtype), **kwargs
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
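
On the multi-GPU question: the stages run sequentially (prior, then decoder, then SR), so one rough option is to place each stage on its own device and move the intermediate tensors between them. A sketch only; the attribute names below are guesses based on the traceback (t2i.py calls self._prior), not a documented API, and the hand-offs inside t2i.py would need matching .to(...) calls:

    # Hypothetical: sampler is the T2I sampler object; _prior and _decoder
    # are inferred from the traceback, not from a documented interface.
    sampler._prior.to("cuda:0")
    sampler._decoder.to("cuda:1")

    # Each intermediate tensor must then follow the next stage's device,
    # e.g. inside t2i.py: img_feat = img_feat.to("cuda:1") before decoding.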

When I use the pipeline example, I get this error:

RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling cublasGemmStridedBatchedExFix( handle, opa, opb, m, n, k, (void*)(&falpha), a, CUDA_R_16F, lda, stridea, b, CUDA_R_16F, ldb, strideb, (void*)(&fbeta), c, CUDA_R_16F, ldc, stridec, num_batches, CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP)
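
Assuming "the pipeline example" means the diffusers UnCLIP pipeline for this model: CUBLAS_STATUS_INVALID_VALUE in a CUDA_R_16F GEMM usually points at the fp16 path, and loading the pipeline in float32 is a common workaround. A sketch, not a confirmed fix:

    import torch
    from diffusers import UnCLIPPipeline

    # Loading in float32 avoids the half-precision GEMM path that often
    # triggers CUBLAS_STATUS_INVALID_VALUE on some GPU/driver combinations.
    pipe = UnCLIPPipeline.from_pretrained(
        "kakaobrain/karlo-v1-alpha", torch_dtype=torch.float32
    ).to("cuda")
    image = pipe("a photo of a cat").images[0]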

Licence update request

Could this clause "You shall undertake reasonable efforts to use the latest version of the Model." be removed from the license?
Ideally people could use the version of the model that they find most useful/the best for their purposes.

Perhaps the newer CreativeML Open RAIL++-M license, which does not contain that clause, could be used:
https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL

And perhaps MIT for the surrounding code, or at least explicit permission to modify it.
