
Comments (12)

JonathanFly commented on May 31, 2024

Try https://github.com/JonathanFly/bark with --use_smaller_models should fit even in 6GB.

from bark.

gkucsko commented on May 31, 2024

Added another simple option: the env var SUNO_USE_SMALL_MODELS=True loads smaller models that will probably fit on an 8 GB card. We haven't implemented quantization yet. As for requirements, I'd love it if people who have the relevant cards could confirm (since it also depends on e.g. bf16 support), but I believe the small models work on an 8 GB card and the large models work on a 12 GB card.
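As a sketch, the same flag can also be set from Python, as long as that happens before bark loads its models (the variable is read at load time; the commented import is the one used elsewhere in this thread):

```python
import os

# Set the flag BEFORE bark loads its models; it is read at load time.
os.environ["SUNO_USE_SMALL_MODELS"] = "True"

# Only import and preload after the variable is set:
# from bark import preload_models
# preload_models()
```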


mtx2d commented on May 31, 2024

> Added another simple option: the env var SUNO_USE_SMALL_MODELS=True loads smaller models that will probably fit on an 8 GB card. We haven't implemented quantization yet. As for requirements, I'd love it if people who have the relevant cards could confirm (since it also depends on e.g. bf16 support), but I believe the small models work on an 8 GB card and the large models work on a 12 GB card.

Thanks, setting this environment variable worked for me!

Steps I took:
On Windows:

set SUNO_USE_SMALL_MODELS=True
jupyter lab


EwoutH commented on May 31, 2024

What are the memory / VRAM requirements? And is quantization possible?

It would be great if a table with memory requirements could be added to the Readme and/or Docs.


Crimsonfart commented on May 31, 2024

> Added another simple option: the env var SUNO_USE_SMALL_MODELS=True loads smaller models that will probably fit on an 8 GB card. We haven't implemented quantization yet. As for requirements, I'd love it if people who have the relevant cards could confirm (since it also depends on e.g. bf16 support), but I believe the small models work on an 8 GB card and the large models work on a 12 GB card.

Where and how do I add SUNO_USE_SMALL_MODELS=True?


SeanDohertyPhotos commented on May 31, 2024

Still getting the error:

from bark import SAMPLE_RATE, generate_audio, preload_models
from IPython.display import Audio
import os
preload_models(use_gpu=False)
os.environ['SUNO_USE_SMALL_MODELS'] = 'True'



text_prompt = """
    Hello.
"""
audio_array = generate_audio(text_prompt)
Audio(audio_array, rate=SAMPLE_RATE)

CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 4.00 GiB total capacity; 3.46 GiB already allocated; 0 bytes free; 3.47 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
  File "C:\Users\smast\OneDrive\Desktop\Code Projects\Johnny Five\audio test.py", line 12, in <module>
    audio_array = generate_audio(text_prompt)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 4.00 GiB total capacity; 3.46 GiB already allocated; 0 bytes free; 3.47 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF


gkucsko commented on May 31, 2024

You have to set the environment variable before the model load. But you can now also specify the model size more easily in the preload function; see #51.
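Why the order matters can be sketched with a stand-in loader (illustrative only, not bark's actual code) that reads the variable at load time:

```python
import os

def load_models():
    # Stand-in for bark's model loading: the env var is read here, at load time.
    return "small" if os.environ.get("SUNO_USE_SMALL_MODELS") == "True" else "large"

# Wrong order: setting the variable after loading has no effect.
os.environ.pop("SUNO_USE_SMALL_MODELS", None)
size_too_late = load_models()                  # large models are chosen
os.environ["SUNO_USE_SMALL_MODELS"] = "True"   # too late for the load above

# Right order: set the variable first, then load.
size_in_time = load_models()                   # small models are chosen
```

In the snippet reported above, `preload_models(...)` runs before `os.environ['SUNO_USE_SMALL_MODELS'] = 'True'`, so the large models are already in memory by the time the flag is set.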


CarlKenner commented on May 31, 2024

> But you can now also specify the model size more easily in the preload function; see #51.

No, you can't. It's bugged. The model size you specify in the preload function isn't respected; generate_audio reloads the large models when you call it. I couldn't work out why I was getting CUDA out-of-memory errors when I had specified small models and CPU for everything, so CUDA usage should have been zero. lol.


gkucsko commented on May 31, 2024

oh yikes sorry, lemme check. feel free to also PR if you find the bug


gkucsko commented on May 31, 2024

Works fine for me on a quick test, can anyone else confirm it's borked?


CarlKenner commented on May 31, 2024

> Works fine for me on a quick test, can anyone else confirm it's borked?

The bug was in this line:

model_key = str(device) + f"__{model_type}"

It has since been fixed.
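A toy reproduction of that kind of bug (illustrative only, not bark's actual code): because the cache key only encodes the device and the model type, a request for a small model can get back a previously cached large one:

```python
models = {}

def load_model(device, model_type, use_small):
    model_key = str(device) + f"__{model_type}"   # bug: size is not part of the key
    if model_key not in models:
        models[model_key] = "small" if use_small else "large"
    return models[model_key]

first = load_model("cuda", "text", use_small=False)  # caches the large model
second = load_model("cuda", "text", use_small=True)  # bug: cached large model returned
```

A fixed key would also have to encode the size flag (and anything else that distinguishes the loaded weights), so that small and large variants cache separately.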


gkucsko commented on May 31, 2024

Ah ok great ya just made some fixes there

