
kpthedev / stable-karlo

62 stars · 3 watching · 6 forks · 78 KB

Upscaling Karlo text-to-image generation using Stable Diffusion v2.

License: GNU General Public License v3.0

Python 100.00%
ai-art artificial-intelligence generative-art karlo latent-diffusion stable-diffusion txt2img unclip pytorch

stable-karlo's People

Contributors

kpthedev


stable-karlo's Issues

Model downloads not persisting after crash

I have tried at least a couple of times to run an image generation, and each time it seems to restart the large file downloads from scratch and then errors out with

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB (GPU 0; 23.93 GiB total capacity; 8.93 GiB already allocated; 14.04 GiB free; 9.23 GiB reserved in total by PyTorch)

I've run

set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
set CUDA_VISIBLE_DEVICES=1

before launching streamlit.

Any suggestions?

Are the models automatically downloaded by some library behind the scenes?
I'm not seeing any download URLs in the code.
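The weights are fetched by diffusers' `from_pretrained()` via the Hugging Face Hub, which caches them on disk rather than embedding URLs in the repo. A quick way to check where that cache lives (the default path below assumes a stock Hub configuration):

```python
import os
from pathlib import Path

# Hugging Face libraries cache downloaded weights under HF_HOME, or
# ~/.cache/huggingface by default. If downloads restart on every run,
# check that this directory survives crashes and has enough disk space.
hf_home = os.environ.get("HF_HOME", str(Path.home() / ".cache" / "huggingface"))
print(hf_home)
```

If an earlier run crashed mid-download, partially downloaded files in that directory can also trigger a re-download on the next run.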

Full traceback:

2023-01-03 01:16:50.104 Uncaught app exception
Traceback (most recent call last):
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\Jason\Documents\machine_learning\image_ML\stable-karlo\app.py", line 143, in <module>
    main()
  File "C:\Users\Jason\Documents\machine_learning\image_ML\stable-karlo\app.py", line 120, in main
    images_up = upscale(
  File "C:\Users\Jason\Documents\machine_learning\image_ML\stable-karlo\models\generate.py", line 107, in upscale
    images = pipe(
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_upscale.py", line 499, in __call__
    image = self.decode_latents(latents.float())
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_upscale.py", line 266, in decode_latents
    image = self.vae.decode(latents).sample
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\diffusers\models\vae.py", line 605, in decode
    decoded = self._decode(z).sample
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\diffusers\models\vae.py", line 577, in _decode
    dec = self.decoder(z)
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\diffusers\models\vae.py", line 213, in forward
    sample = self.mid_block(sample)
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 393, in forward
    hidden_states = attn(hidden_states)
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jason\.conda\envs\karlo\lib\site-packages\diffusers\models\attention.py", line 354, in forward
    torch.empty(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB (GPU 0; 23.93 GiB total capacity; 8.93 GiB already allocated; 14.04 GiB free; 9.23 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
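The 16.00 GiB allocation is consistent with a single fp32 self-attention matrix in the VAE decoder. As a back-of-envelope check (the 256×256 feature-grid size is an assumption, not taken from the code):

```python
# One fp32 attention matrix over an assumed 256x256 spatial grid in the
# upscaler's VAE decoder is exactly 16 GiB, matching the failed allocation.
seq_len = 256 * 256            # 65,536 spatial tokens
attn_bytes = seq_len ** 2 * 4  # fp32 = 4 bytes per element
print(attn_bytes / 2**30)      # 16.0
```

Slicing the attention computation (for example, diffusers' `enable_attention_slicing()`) avoids materializing this matrix in one piece, which is why it helps even on a 24 GB card.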

Feature suggestion: VRAM optimization

I would like to know if it is possible to optimize Karlo to be compatible with a 6 GB VRAM GPU. I installed the repository when it was released, but ran into VRAM problems on my 6 GB card. I now realize that a 7 GB GPU may be compatible. Could the project be optimized further to improve compatibility with 6 GB GPUs? Thanks in advance for your response.
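For rough intuition on why precision matters here: weight memory scales linearly with bytes per parameter, so fp16 halves the footprint before activations are even counted. The 1.5 B parameter count below is purely illustrative, not the actual size of Karlo or the upscaler:

```python
# Illustrative arithmetic only: weight footprint for an assumed
# 1.5B-parameter model at different precisions.
def weight_gib(n_params: float, bytes_per_param: int) -> float:
    """Model weight footprint in GiB."""
    return n_params * bytes_per_param / 2**30

print(round(weight_gib(1.5e9, 4), 1))  # fp32
print(round(weight_gib(1.5e9, 2), 1))  # fp16: half the footprint
```

Beyond precision, diffusers offers `enable_attention_slicing()` and CPU offloading, which trade speed for peak VRAM; whether those get a 6 GB card over the line for this pipeline is untested here.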

AssertionError: Torch not compiled with CUDA enabled

First, I get this error message when running the WebUI:

text_proj\diffusion_pytorch_model.safetensors not found

And when trying to generate an image:

AssertionError: Torch not compiled with CUDA enabled

But I have already installed:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

and the CMD tells me "Requirement already satisfied".

Running on Windows 11, RTX 3090, Intel i9-12900K.

I'm getting the following CUDA error:

Traceback:

  File "D:\stable-karlo\.env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "D:\stable-karlo\app.py", line 105, in <module>
    main()
  File "D:\stable-karlo\app.py", line 69, in main
    images = generate(
  File "D:\stable-karlo\model\generate.py", line 68, in generate
    pipe = make_pipe()
  File "D:\stable-karlo\.env\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 625, in wrapped_func
    return get_or_create_cached_value()
  File "D:\stable-karlo\.env\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 609, in get_or_create_cached_value
    return_value = non_optional_func(*args, **kwargs)
  File "D:\stable-karlo\model\generate.py", line 41, in make_pipe
    return pipe.to("cuda")
  File "D:\stable-karlo\.env\lib\site-packages\diffusers\pipeline_utils.py", line 270, in to
    module.to(torch_device)
  File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 989, in to
    return self._apply(convert)
  File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 664, in _apply
    param_applied = fn(param)
  File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 987, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "D:\stable-karlo\.env\lib\site-packages\torch\cuda\__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
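"Requirement already satisfied" usually means a CPU-only torch wheel was already installed, so pip never replaced it with the cu118 build. A quick diagnostic:

```python
import torch

# A CPU-only wheel reports a "+cpu" version suffix and no CUDA support,
# even on a machine with an RTX 3090.
print(torch.__version__)          # e.g. a "+cpu" suffix indicates the CPU build
print(torch.cuda.is_available())  # False means .to("cuda") will fail
```

If this prints a `+cpu` version and `False`, uninstall first (`pip uninstall torch torchvision torchaudio`) and then rerun the `--index-url https://download.pytorch.org/whl/cu118` install so pip actually fetches the CUDA wheel.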

Colab - Stable Diffusion images are black

The Colab version of stable-karlo seemingly cannot upscale the Karlo images at all. While it does take more time for the images to actually show up, the output is a completely black image. This happens repeatedly, regardless of the connected GPU or the parameters.
[screenshot of the black output image]

M1 macOS install - CUDA error when trying to run Karlo

Hi there!

I'm getting the following error after following the macOS/Linux install instructions in the README:

Traceback (most recent call last):
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/app.py", line 143, in <module>
    main()
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/app.py", line 104, in main
    images = generate(
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/models/generate.py", line 83, in generate
    pipe = make_pipeline_generator(cpu=cpu)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/streamlit/runtime/legacy_caching/caching.py", line 629, in wrapped_func
    return get_or_create_cached_value()
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/streamlit/runtime/legacy_caching/caching.py", line 611, in get_or_create_cached_value
    return_value = non_optional_func(*args, **kwargs)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/models/generate.py", line 42, in make_pipeline_generator
    pipe = pipe.to("cuda")
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/diffusers/pipeline_utils.py", line 270, in to
    module.to(torch_device)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 989, in to
    return self._apply(convert)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 664, in _apply
    param_applied = fn(param)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 987, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/torch/cuda/__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Systems specs:

Output of `system_profiler SPHardwareDataType`:
Software:

System Software Overview:

  System Version: macOS 13.0.1 (22A400)
  Kernel Version: Darwin 22.1.0
  Boot Volume: Macintosh HD
  Boot Mode: Normal
  Computer Name: Jack’s MacBook Pro
  User Name: Jack Wooldridge (jackwooldridge)
  Secure Virtual Memory: Enabled
  System Integrity Protection: Enabled

Hardware:

Hardware Overview:

  Model Name: MacBook Pro
  Model Identifier: MacBookPro18,3
  Model Number: MKGP3LL/A
  Chip: Apple M1 Pro
  Total Number of Cores: 8 (6 performance and 2 efficiency)
  Memory: 16 GB
  System Firmware Version: 8419.41.10
  OS Loader Version: 8419.41.10


I tried updating the generator file to run on MPS, but this causes fatal errors and crashes the application.

It's entirely possible I'm missing something glaringly obvious, but I can't think of anything at the moment.
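Since `make_pipeline_generator()` in models/generate.py hard-codes `pipe.to("cuda")`, any Mac build will hit this assertion. A hedged sketch of a device fallback (`pick_device` is a hypothetical helper, not part of the repo):

```python
import torch

def pick_device() -> str:
    """Choose the best available torch device (hypothetical helper)."""
    if torch.cuda.is_available():
        return "cuda"
    # Apple Silicon: Metal Performance Shaders backend, where supported.
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

print(pick_device())  # then: pipe = pipe.to(pick_device())
```

Note the report above says MPS itself caused fatal crashes, so on this machine the practical workaround may be the pipeline's existing `cpu=True` path, slow as it is.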
