Comments (5)
Right now the output is limited by the model's context window (1024 tokens), which equates to about 14 seconds of audio, so each text should be around that duration (roughly 2-3 sentences). For longer texts you can either generate them one chunk at a time (reusing the same history prompt to continue the same voice) or feed the first generation in as the history for the second. I know that's still a bit inconvenient; I'll try to add better support for that in the next couple of days.
from bark.
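To make the chunking workflow described above concrete, here is a minimal sketch. `generate_audio`, `preload_models`, and the `"v2/en_speaker_6"` preset are part of bark's documented API, but the `split_into_chunks` helper and the two-sentence chunk size are my own illustration, not an official recipe:

```python
import re


def split_into_chunks(text, max_sentences=2):
    """Split text into chunks of a few sentences each, since bark's
    1024-token context window yields only ~14 seconds of audio."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [
        " ".join(sentences[i:i + max_sentences])
        for i in range(0, len(sentences), max_sentences)
    ]


def synthesize_long_text(text, history_prompt="v2/en_speaker_6"):
    """Generate audio chunk by chunk, passing the same history prompt
    every time so each chunk keeps the same voice.

    Requires bark to be installed (imported lazily so the text helper
    above stays usable without it).
    """
    import numpy as np
    from bark import generate_audio, preload_models

    preload_models()
    pieces = [
        generate_audio(chunk, history_prompt=history_prompt)
        for chunk in split_into_chunks(text)
    ]
    return np.concatenate(pieces)
```

The alternative mentioned above (feeding one generation in as the history for the next) uses `generate_audio(..., output_full=True)`, which returns the full generation alongside the audio so it can be passed as the next `history_prompt`.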
Thanks, @gkucsko! One thing: could you please share a code example of using the history prompt? I'm writing some logic on top of your model to generate a long conversation between two people.
from bark.
I added a parameter for it. https://github.com/JonathanFly/bark
from bark.
Hey @JonathanFly @gkucsko, I have a problem processing a large MAN/WOMAN conversation text. I split the large text into smaller chunks, but the generated voice is not clear, and I get different voices for the same history_prompt.
from bark.
Sometimes the history prompt is not respected, since a GPT-style model can technically just come up with a new speaker, so you might need another attempt or two. Also, some prompts work better than others. See also: #21
from bark.
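Putting the replies above together, a two-speaker conversation could be sketched like this. The speaker presets `"v2/en_speaker_6"` and `"v2/en_speaker_9"` are real built-in bark prompts, but the speaker-to-preset mapping, the dialog structure, and the silence padding between turns are my own assumptions for illustration:

```python
# Hypothetical mapping from conversation roles to bark voice presets;
# the labels and preset choices here are illustrative, not from bark.
SPEAKER_PRESETS = {
    "MAN": "v2/en_speaker_6",
    "WOMAN": "v2/en_speaker_9",
}


def preset_for(speaker):
    """Map a speaker label to a fixed history prompt. Reusing the same
    preset for every line a speaker says keeps that voice stable."""
    return SPEAKER_PRESETS[speaker.upper()]


def synthesize_dialog(lines):
    """lines: list of (speaker, text) pairs, e.g. [("MAN", "Hi there.")].

    Requires bark to be installed (imported lazily so preset_for stays
    usable without it).
    """
    import numpy as np
    from bark import SAMPLE_RATE, generate_audio, preload_models

    preload_models()
    silence = np.zeros(int(0.25 * SAMPLE_RATE))  # short pause between turns
    pieces = []
    for speaker, text in lines:
        pieces.append(generate_audio(text, history_prompt=preset_for(speaker)))
        pieces.append(silence.copy())
    return np.concatenate(pieces)
```

Since the model can still drift to a new voice, as noted in the reply above, it may be worth regenerating any line whose output audibly does not match its preset.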
Related Issues (20)
- Languages pt_speaker should be changed to br_speaker as it is not correct.
- BUG slow
- BUG no GPU RAM clean after restart api oobadooga
- How to save long form audio .wav file.
- AttributeError: module 'torch' has no attribute 'compiler' HOT 4
- Inference speed HOT 1
- Does it support streaming process?
- max num tokens supported on inference is ~ 40 max. not 256 as it would appear by reading code. HOT 2
- Bulgarian as supported language
- Timestamp audio generated
- Trying to run with half precision gives error "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'" HOT 1
- Add support and instructions for direct IPA texts.
- Fine tuned prompting.
- Chirp link
- attention mask and the pad token id were not set warning HOT 1
- Get deterministic output (same seed)
- GPU AMD
- Batch processing for long form generation
- ModuleNotFoundError while trying to load entry-point bdist_wheel: No module named '_ctypes'
- ImportError: cannot import name 'AutoProcessor' from partially initialized module 'transformers' (most likely due to a circular import) HOT 1