
Comments (4)

erew123 commented on August 28, 2024

What do you think it is... Christmas? 😄 Was finetuning not enough? haha

Multiple additional models might be complicated; the loader situation could get messy. It's not impossible, but I'll think it over. There are quite a few complexities to the loaders in the "multiple other models" scenario.

It would be relatively easy to do a "detect an extra model in this location and make an extra gradio radio box appear for that 1x model"... I think. But I am talking about 1x extra model in 1x location. Though still, I'd need to think that one through.

Would 1x extra model work? Or are you looking at literally 2 to ??? extra models?

Would you train the same model over and over (different voices), so you'd just need that 1x model?


rktvr commented on August 28, 2024

Initially I thought it'd be something like the model selector in Auto1111's stable diffusion webui, I suppose.

Would you train the same model over and over (different voices), so you'd just need that 1x model?

Is that possible? So I can train a model with one voice, then use that finetuned model again with a different voice? If that's the case then I suppose a model selector may not be needed, depending on how much other voices affect the previous finetuning.


erew123 commented on August 28, 2024

I've just updated finetuning to not only compact your model, but also deal with all the model moving. It creates a folder in the /models/ folder called trainedmodel. Now that it's standardised in some way, I should be able to do something with the gradio interface that looks for a model existing in that folder and, if so, gives you an extra loader option. But I've just spent my day on a load of other bits and I'm sure I've got something else to do. So I'll get there... but give me time :)

If you do want to compact your existing model, go to this link: #28
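
(A rough sketch of what that "model moving" step could look like if you wanted to script it yourself. Only models/trainedmodel/ comes from the comment above; the source folder name and the rest are assumptions for illustration, not the actual finetune script.)

```python
from pathlib import Path
import shutil

# Hypothetical source folder for a finished finetune run (assumption).
FINETUNE_OUTPUT = Path("finetune/ready")
# The standardised destination mentioned in the comment above.
TRAINED_MODEL_DIR = Path("models/trainedmodel")

def move_finetuned_model(src: Path = FINETUNE_OUTPUT, dst: Path = TRAINED_MODEL_DIR) -> None:
    """Copy the compacted finetuned model files into the standard folder
    so the main interface can find them in one known place."""
    dst.mkdir(parents=True, exist_ok=True)
    for f in src.iterdir():
        if f.is_file():
            shutil.copy2(f, dst / f.name)

if __name__ == "__main__":
    move_finetuned_model()
```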


erew123 commented on August 28, 2024


The model has to be stored in /models/trainedmodel/

This is the default location that finetuning now moves models to after training.

If a model is detected there when AllTalk starts up, it will add the additional loader type into the Gradio interface.
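
(A minimal sketch of that startup check, just to illustrate the idea. The loader names and the model.pth filename here are assumptions, not AllTalk's actual values.)

```python
import os
import gradio as gr

# The folder described above; checked once when the interface is built.
TRAINED_MODEL_DIR = os.path.join("models", "trainedmodel")

def loader_choices():
    """Build the loader list, adding the finetuned option only when a
    model file is actually found in models/trainedmodel/."""
    choices = ["Base model"]  # placeholder for whatever loaders normally exist
    if os.path.isfile(os.path.join(TRAINED_MODEL_DIR, "model.pth")):  # filename is an assumption
        choices.append("Finetuned model (models/trainedmodel)")
    return choices

with gr.Blocks() as demo:
    loader = gr.Radio(choices=loader_choices(), label="Model loader")

demo.launch()
```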

At some point I'll work on the finetuning, so that you can choose to use either the base model OR your already pre-trained model.
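
(Purely illustrative sketch of what picking the starting checkpoint might look like; that feature doesn't exist yet, and folder names other than models/trainedmodel/ are assumptions.)

```python
import os

# Assumed base model folder name (illustration only).
BASE_MODEL_DIR = os.path.join("models", "xttsv2")
# Location of a previously finetuned model, as described above.
TRAINED_MODEL_DIR = os.path.join("models", "trainedmodel")

def starting_checkpoint(use_existing: bool) -> str:
    """Return the folder to finetune from: the stock base model, or the
    model already sitting in models/trainedmodel/ if requested and present."""
    if use_existing and os.path.isdir(TRAINED_MODEL_DIR):
        return TRAINED_MODEL_DIR
    return BASE_MODEL_DIR
```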

