Comments (11)
Yes, a list of the required models with URLs and what directory to put them in would help.
And any optional models.
And any documentation/samples so we can replicate the example results.
from diffsynth-studio.
Where can I find those models, or URLs for them?
@iftekharalammithu @SoftologyPro Guys, please check the scripts under the `./examples` folder. Each script has the model URLs.
OK, so all of these...
sdxl_text_to_image
- `models/stable_diffusion_xl/bluePencilXL_v200.safetensors`: [link](https://civitai.com/api/download/models/245614?type=Model&format=SafeTensor&size=pruned&fp=fp16)

sdxl_turbo
- `models/stable_diffusion_xl_turbo/sd_xl_turbo_1.0_fp16.safetensors`: [link](https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0_fp16.safetensors)

sd_video_render
- `models/stable_diffusion/dreamshaper_8.safetensors`: [link](https://civitai.com/api/download/models/128713?type=Model&format=SafeTensor&size=pruned&fp=fp16)
- `models/ControlNet/control_v11f1p_sd15_depth.pth`: [link](https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth.pth)
- `models/ControlNet/control_v11p_sd15_softedge.pth`: [link](https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge.pth)
- `models/Annotators/dpt_hybrid-midas-501f0c75.pt`: [link](https://huggingface.co/lllyasviel/Annotators/resolve/main/dpt_hybrid-midas-501f0c75.pt)
- `models/Annotators/ControlNetHED.pth`: [link](https://huggingface.co/lllyasviel/Annotators/resolve/main/ControlNetHED.pth)
- `models/RIFE/flownet.pkl`: [link](https://drive.google.com/file/d/1APIzVeI-4ZZCEuIRE1m6WYfSCaOsi_7_/view?usp=sharing)

sd_toon_shading
- `models/stable_diffusion/flat2DAnimerge_v45Sharp.safetensors`: [link](https://civitai.com/api/download/models/266360?type=Model&format=SafeTensor&size=pruned&fp=fp16)
- `models/AnimateDiff/mm_sd_v15_v2.ckpt`: [link](https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15_v2.ckpt)
- `models/ControlNet/control_v11p_sd15_lineart.pth`: [link](https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart.pth)
- `models/ControlNet/control_v11f1e_sd15_tile.pth`: [link](https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile.pth)
- `models/Annotators/sk_model.pth`: [link](https://huggingface.co/lllyasviel/Annotators/resolve/main/sk_model.pth)
- `models/Annotators/sk_model2.pth`: [link](https://huggingface.co/lllyasviel/Annotators/resolve/main/sk_model2.pth)
- `models/textual_inversion/verybadimagenegative_v1.3.pt`: [link](https://civitai.com/api/download/models/25820?type=Model&format=PickleTensor&size=full&fp=fp16)
- `models/RIFE/flownet.pkl`: [link](https://drive.google.com/file/d/1APIzVeI-4ZZCEuIRE1m6WYfSCaOsi_7_/view?usp=sharing)

sd_text_to_video
- `models/stable_diffusion/dreamshaper_8.safetensors`: [link](https://civitai.com/api/download/models/128713?type=Model&format=SafeTensor&size=pruned&fp=fp16)
- `models/AnimateDiff/mm_sd_v15_v2.ckpt`: [link](https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15_v2.ckpt)
- `models/RIFE/flownet.pkl`: [link](https://drive.google.com/file/d/1APIzVeI-4ZZCEuIRE1m6WYfSCaOsi_7_/view?usp=sharing)

sd_text_to_image
- `models/stable_diffusion/aingdiffusion_v12.safetensors`: [link](https://civitai.com/api/download/models/229575?type=Model&format=SafeTensor&size=full&fp=fp16)
- `models/ControlNet/control_v11p_sd15_lineart.pth`: [link](https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart.pth)
- `models/ControlNet/control_v11f1e_sd15_tile.pth`: [link](https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile.pth)
- `models/Annotators/sk_model.pth`: [link](https://huggingface.co/lllyasviel/Annotators/resolve/main/sk_model.pth)
- `models/Annotators/sk_model2.pth`: [link](https://huggingface.co/lllyasviel/Annotators/resolve/main/sk_model2.pth)

sd_prompt_refining
- `models/stable_diffusion_xl/sd_xl_base_1.0.safetensors`: [link](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors)
- `models/BeautifulPrompt/pai-bloom-1b1-text2prompt-sd/`: [link](https://huggingface.co/alibaba-pai/pai-bloom-1b1-text2prompt-sd)
- `models/translator/opus-mt-zh-en/`: [link](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en)
diffusion_toon_shading
- `models/stable_diffusion/aingdiffusion_v12.safetensors`: [link](https://civitai.com/api/download/models/229575)
- `models/AnimateDiff/mm_sd_v15_v2.ckpt`: [link](https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15_v2.ckpt)
- `models/ControlNet/control_v11p_sd15_lineart.pth`: [link](https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart.pth)
- `models/ControlNet/control_v11f1e_sd15_tile.pth`: [link](https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile.pth)
- `models/Annotators/sk_model.pth`: [link](https://huggingface.co/lllyasviel/Annotators/resolve/main/sk_model.pth)
- `models/Annotators/sk_model2.pth`: [link](https://huggingface.co/lllyasviel/Annotators/resolve/main/sk_model2.pth)
- `models/textual_inversion/verybadimagenegative_v1.3.pt`: [link](https://civitai.com/api/download/models/25820?type=Model&format=PickleTensor&size=full&fp=fp16)
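Rather than placing each file by hand, the layout above can be scripted. A minimal sketch (my own helper, not part of the repo), assuming the single-file downloads work with a plain HTTP GET; the Civitai links may require a logged-in session, and the RIFE `flownet.pkl` still has to be fetched manually from Google Drive. Only a few entries are shown, so extend `MODELS` with the rest of the list:

```python
import os
import urllib.request

# Target path (relative to the repo root) -> download URL.
MODELS = {
    "models/stable_diffusion/dreamshaper_8.safetensors":
        "https://civitai.com/api/download/models/128713?type=Model&format=SafeTensor&size=pruned&fp=fp16",
    "models/AnimateDiff/mm_sd_v15_v2.ckpt":
        "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15_v2.ckpt",
    "models/ControlNet/control_v11p_sd15_lineart.pth":
        "https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart.pth",
}

def fetch_models(models, root=".", download=False):
    """Create the directory layout and (optionally) download each file."""
    fetched = []
    for rel_path, url in models.items():
        dest = os.path.join(root, rel_path)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        if download and not os.path.exists(dest):
            # No resume or retry; good enough for a one-off setup.
            urllib.request.urlretrieve(url, dest)
        fetched.append(dest)
    return fetched

if __name__ == "__main__":
    # Dry run by default: only creates the folders so you can inspect the layout.
    for path in fetch_models(MODELS, download=False):
        print(path)
```

Run it once with `download=False` to sanity-check the tree, then flip the flag.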
Are we better off using the example scripts for the various results? If I download all of the above models can I replicate your examples in the Web UI or should I use the example scripts?
This URL is wrong. It does not lead to flownet.pkl
`models/RIFE/flownet.pkl`: [link](https://drive.google.com/file/d/1APIzVeI-4ZZCEuIRE1m6WYfSCaOsi_7_/view?usp=sharing)
What are the specific source and target download files for these two?
`models/BeautifulPrompt/pai-bloom-1b1-text2prompt-sd/`: [link](https://huggingface.co/alibaba-pai/pai-bloom-1b1-text2prompt-sd)
`models/translator/opus-mt-zh-en/`: [link](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en)
@SoftologyPro @iftekharalammithu I am very sorry to reply to you so late. This project is still under development and I don't have time to provide documentation now. The model file `models/RIFE/flownet.pkl` needs to be downloaded manually from the Google Drive provided by the original author, while `models/BeautifulPrompt/pai-bloom-1b1-text2prompt-sd/` and `models/translator/opus-mt-zh-en/` require the entire folder to be downloaded. Please note that you can ignore some model files if you don't need the corresponding functionality. The code of Diffutoon is in `examples/diffutoon_toon_shading.py`, which does not require downloading all the model files.
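For the two entries that are whole folders rather than single files, `huggingface_hub` can mirror a repo into the expected directory. A sketch under my own assumptions (the repo ids are taken from the links above, with `Helsinki-NLP/opus-mt-zh-en` matching the `zh-en` folder name; the network call only runs under `__main__`):

```python
# Hugging Face repo id -> local folder DiffSynth-Studio expects.
FOLDER_MODELS = {
    "alibaba-pai/pai-bloom-1b1-text2prompt-sd": "models/BeautifulPrompt/pai-bloom-1b1-text2prompt-sd",
    "Helsinki-NLP/opus-mt-zh-en": "models/translator/opus-mt-zh-en",
}

def plan(folder_models):
    """Return (repo_id, local_dir) pairs so the mapping can be inspected first."""
    return sorted(folder_models.items())

def download_folders(folder_models):
    # Imported lazily so inspecting the plan doesn't require the package.
    from huggingface_hub import snapshot_download  # pip install huggingface_hub
    for repo_id, local_dir in plan(folder_models):
        # local_dir places the real files there instead of the HF cache.
        snapshot_download(repo_id=repo_id, local_dir=local_dir)

if __name__ == "__main__":
    download_folders(FOLDER_MODELS)
```

`snapshot_download` pulls every file in the repo, which is exactly the "download the entire folder" behaviour described above.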
OK, I realised flownet.pkl is under a subfolder at the link you provided, so that is fine now.
And clarification needed: to reproduce all your examples, will the Web UI work, or do I need to use the individual example scripts for that?
@SoftologyPro The examples and the WebUI are separated. If you want to reproduce the examples, you only need to use the individual example scripts. Some features are not supported in the WebUI.
OK, thank you. I have all the models now and will use the example scripts.
Ok, so the readme has NO INFO on where the models should go, or where the ControlNets should be.
How do you, mr dev, expect anyone but you to make it work?
Checking the examples does help, but come on man, change the scripts so the user is asked to point to the model he wants to load. Yeah, one by one, and if he does, store the paths in a separate file so it autoloads them next time.
There, I solved it for you: I wrote a GUI to pick one of the .py files with the venv activated, because to be honest the way we run them now is very inefficient.
You should have a menu at the start, so we can choose one of the .py files: video gen, image gen, or whatever.
And for whoever wants to install this: it runs fine with the Automatic1111 venv, you just need to pip install one thing and it's good.
What kind of idea is this, to load only predefined models and crash the app if they aren't available? On error you should ask the user to load a model.
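For what it's worth, the launcher the commenter describes is only a few lines. A hypothetical sketch (the `examples/` directory comes from the repo; `last_choice.json` and all function names here are my own assumptions):

```python
import json
import subprocess
import sys
from pathlib import Path

STATE_FILE = Path("last_choice.json")  # remembers the previous pick

def list_examples(examples_dir="examples"):
    """Return the runnable example scripts, sorted for a stable menu."""
    return sorted(p.name for p in Path(examples_dir).glob("*.py"))

def choose(scripts, default=None):
    """Print a numbered menu and return the chosen script name."""
    for i, name in enumerate(scripts, 1):
        marker = " (last used)" if name == default else ""
        print(f"{i}. {name}{marker}")
    raw = input("Run which script? ").strip()
    return scripts[int(raw) - 1]

if __name__ == "__main__":
    scripts = list_examples()
    last = json.loads(STATE_FILE.read_text())["script"] if STATE_FILE.exists() else None
    picked = choose(scripts, default=last)
    STATE_FILE.write_text(json.dumps({"script": picked}))
    # Reuse the current interpreter, so it works inside whatever venv is active.
    subprocess.run([sys.executable, str(Path("examples") / picked)])
```

Using `sys.executable` means the menu inherits the activated venv, which covers the "runs fine with the Automatic1111 venv" case too.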