
furkangozukara / stable-diffusion

1.7K stars · 71 watchers · 223 forks · 2.89 MB

Stable Diffusion, SDXL, LoRA Training, DreamBooth Training, Automatic1111 Web UI, DeepFake, Deep Fakes, TTS, Animation, Text To Video, Tutorials, Guides, Lectures, Courses, ComfyUI, Google Colab, RunPod, NoteBooks, ControlNet, TTS, Voice Cloning, AI, AI News, ML, ML News, News, Tech, Tech News, Kohya LoRA, Kandinsky 2, DeepFloyd IF, Midjourney

Home Page: https://www.youtube.com/SECourses

License: GNU General Public License v3.0

Jupyter Notebook 85.54% Python 13.11% Shell 1.35%
deepfake deepfakes dreambooth guide guides stable-diffusion tts tutorial tutorials text-to-video

stable-diffusion's Introduction


Furkan Gözükara: Patreon · BuyMeACoffee · Medium · Codio · YouTube Channel · LinkedIn · Udemy · Twitter

Expert-Level Tutorials on Stable Diffusion & SDXL: Master Advanced Techniques and Strategies

Greetings everyone. I am Dr. Furkan Gözükara. I am an Assistant Professor in the Software Engineering department of a private university (I have a PhD in Computer Engineering).

My LinkedIn : https://www.linkedin.com/in/furkangozukara

My Twitter : https://twitter.com/GozukaraFurkan

My Linktr : https://linktr.ee/FurkanGozukara

Our channel address (30,000+ subscribers) if you would like to subscribe ⤵️ https://www.youtube.com/@SECourses

Our discord (5,900+ members) to get more help ⤵️ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388

Our 1,400+ Stars GitHub Stable Diffusion and other tutorials repo ⤵️ https://github.com/FurkanGozukara/Stable-Diffusion

I am keeping this list up-to-date. I have ideas for awesome new videos and am trying to find the time to make them.

I am open to any criticism you have. I am constantly trying to improve the quality of my tutorial guide videos. Please leave comments with both your suggestions and what you would like to see in future videos.

All videos have manually corrected subtitles and properly prepared video chapters. You can watch with these accurate subtitles or jump to the chapters you are interested in.

Since my profession is teaching, I usually do not skip any of the important parts. Therefore, you may find my videos a little bit longer.

Playlist link on YouTube: Stable Diffusion Tutorials, Automatic1111 Web UI & Google Colab Guides, DreamBooth, Textual Inversion / Embedding, LoRA, AI Upscaling, Video to Anime

Tutorial Videos

1.) Automatic1111 Web UI - PC - Free
How To Install Python, Setup Virtual Environment VENV, Set Default Python System Path & Install Git

2.) Automatic1111 Web UI - PC - Free
Easiest Way to Install & Run Stable Diffusion Web UI on PC by Using Open Source Automatic Installer

3.) Automatic1111 Web UI - PC - Free
How to use Stable Diffusion V2.1 and Different Models in the Web UI - SD 1.5 vs 2.1 vs Anything V3

4.) Automatic1111 Web UI - PC - Free
Zero To Hero Stable Diffusion DreamBooth Tutorial By Using Automatic1111 Web UI - Ultra Detailed

5.) Automatic1111 Web UI - PC - Free
DreamBooth Got Buffed - 22 January Update - Much Better Success Train Stable Diffusion Models Web UI

6.) Automatic1111 Web UI - PC - Free
How to Inject Your Trained Subject e.g. Your Face Into Any Custom Stable Diffusion Model By Web UI

7.) Automatic1111 Web UI - PC - Free
How To Do Stable Diffusion LORA Training By Using Web UI On Different Models - Tested SD 1.5, SD 2.1

8.) Automatic1111 Web UI - PC - Free
8 GB LoRA Training - Fix CUDA & xformers For DreamBooth and Textual Inversion in Automatic1111 SD UI

9.) Automatic1111 Web UI - PC - Free
How To Do Stable Diffusion Textual Inversion (TI) / Text Embeddings By Automatic1111 Web UI Tutorial

10.) Automatic1111 Web UI - PC - Free
How To Generate Stunning Epic Text By Stable Diffusion AI - No Photoshop - For Free - Depth-To-Image

11.) Python Code - Hugging Face Diffusers Script - PC - Free
How to Run and Convert Stable Diffusion Diffusers (.bin Weights) & Dreambooth Models to CKPT File

12.) NMKD Stable Diffusion GUI - Open Source - PC - Free
Forget Photoshop - How To Transform Images With Text Prompts using InstructPix2Pix Model in NMKD GUI

13.) Google Colab Free - Cloud - No PC Is Required
Transform Your Selfie into a Stunning AI Avatar with Stable Diffusion - Better than Lensa for Free

14.) Google Colab Free - Cloud - No PC Is Required
Stable Diffusion Google Colab, Continue, Directory, Transfer, Clone, Custom Models, CKPT SafeTensors

15.) Automatic1111 Web UI - PC - Free
Become A Stable Diffusion Prompt Master By Using DAAM - Attention Heatmap For Each Used Token - Word

16.) Python Script - Gradio Based - ControlNet - PC - Free
Transform Your Sketches into Masterpieces with Stable Diffusion ControlNet AI - How To Use Tutorial

17.) Automatic1111 Web UI - PC - Free
Sketches into Epic Art with 1 Click: A Guide to Stable Diffusion ControlNet in Automatic1111 Web UI

18.) RunPod - Automatic1111 Web UI - Cloud - Paid - No PC Is Required
Ultimate RunPod Tutorial For Stable Diffusion - Automatic1111 - Data Transfers, Extensions, CivitAI

19.) RunPod - Automatic1111 Web UI - Cloud - Paid - No PC Is Required
How To Install DreamBooth & Automatic1111 On RunPod & Latest Libraries - 2x Speed Up - cudDNN - CUDA

20.) Automatic1111 Web UI - PC - Free
Fantastic New ControlNet OpenPose Editor Extension & Image Mixing - Stable Diffusion Web UI Tutorial

21.) Automatic1111 Web UI - PC - Free
Automatic1111 Stable Diffusion DreamBooth Guide: Optimal Classification Images Count Comparison Test

22.) Automatic1111 Web UI - PC - Free
Epic Web UI DreamBooth Update - New Best Settings - 10 Stable Diffusion Training Compared on RunPods

23.) Automatic1111 Web UI - PC - Free
New Style Transfer Extension, ControlNet of Automatic1111 Stable Diffusion T2I-Adapter Color Control

24.) Automatic1111 Web UI - PC - Free
Generate Text Arts & Fantastic Logos By Using ControlNet Stable Diffusion Web UI For Free Tutorial

25.) Automatic1111 Web UI - PC - Free
How To Install New DREAMBOOTH & Torch 2 On Automatic1111 Web UI PC For Epic Performance Gains Guide

26.) Automatic1111 Web UI - PC - Free
Training Midjourney Level Style And Yourself Into The SD 1.5 Model via DreamBooth Stable Diffusion

27.) Automatic1111 Web UI - PC - Free
Video To Anime - Generate An EPIC Animation From Your Phone Recording By Using Stable Diffusion AI

28.) Python Script - Jupyter Based - PC - Free
Midjourney Level NEW Open Source Kandinsky 2.1 Beats Stable Diffusion - Installation And Usage Guide

29.) Automatic1111 Web UI - PC - Free
RTX 3090 vs RTX 3060 Ultimate Showdown for Stable Diffusion, ML, AI & Video Rendering Performance

30.) Kohya Web UI - Automatic1111 Web UI - PC - Free
Generate Studio Quality Realistic Photos By Kohya LoRA Stable Diffusion Training - Full Tutorial

31.) Kaggle NoteBook (Cloud) - Free
DeepFloyd IF By Stability AI - Is It Stable Diffusion XL or Version 3? We Review and Show How To Use

32.) Python Script - Automatic1111 Web UI - PC - Free
How To Find Best Stable Diffusion Generated Images By Using DeepFace AI - DreamBooth / LoRA Training

33.) PC - Google Colab (Cloud) - Free
Mind-Blowing Deepfake Tutorial: Turn Anyone into Your Favorite Movie Star! PC & Google Colab - roop

34.) Automatic1111 Web UI - PC - Free
Stable Diffusion Now Has The Photoshop Generative Fill Feature With ControlNet Extension - Tutorial

35.) Automatic1111 Web UI - PC - Free
Human Cropping Script & 4K+ Resolution Class / Reg Images For Stable Diffusion DreamBooth / LoRA

36.) Automatic1111 Web UI - PC - Free
Stable Diffusion 2 NEW Image Post Processing Scripts And Best Class / Regularization Images Datasets

37.) Automatic1111 Web UI - PC - Free
How To Use Roop DeepFake On RunPod Step By Step Tutorial With Custom Made Auto Installer Script

38.) RunPod - Automatic1111 Web UI - Cloud - Paid - No PC Is Required
How To Install DreamBooth & Automatic1111 On RunPod & Latest Libraries - 2x Speed Up - cudDNN - CUDA

39.) Automatic1111 Web UI - PC - Free + RunPod (Cloud)
Zero to Hero ControlNet Tutorial: Stable Diffusion Web UI Extension | Complete Feature Guide

40.) Automatic1111 Web UI - PC - Free + RunPod (Cloud)
The END of Photography - Use AI to Make Your Own Studio Photos, FREE Via DreamBooth Training

41.) Google Colab - Gradio - Free - Cloud
How To Use Stable Diffusion XL (SDXL 0.9) On Google Colab For Free

42.) Local - PC - Free - Gradio
Stable Diffusion XL (SDXL) Locally On Your PC - 8GB VRAM - Easy Tutorial With Automatic Installer

43.) Cloud - RunPod
How To Use SDXL On RunPod Tutorial. Auto Installer & Refiner & Amazing Native Diffusers Based Gradio

44.) Local - PC - Free - Google Colab (Cloud) - RunPod (Cloud) - Custom Web UI
ComfyUI Master Tutorial - Stable Diffusion XL (SDXL) - Install On PC, Google Colab (Free) & RunPod

45.) Local - PC - Free - RunPod (Cloud)
First Ever SDXL Training With Kohya LoRA - Stable Diffusion XL Training Will Replace Older Models

46.) Local - PC - Free
How To Use SDXL in Automatic1111 Web UI - SD Web UI vs ComfyUI - Easy Local Install Tutorial / Guide

47.) Cloud - RunPod - Paid
How to use Stable Diffusion X-Large (SDXL) with Automatic1111 Web UI on RunPod - Easy Tutorial

48.) Local - PC - Free
Become A Master Of SDXL Training With Kohya SS LoRAs - Combine Power Of Automatic1111 & SDXL LoRAs

49.) Cloud - RunPod - Paid
How To Do SDXL LoRA Training On RunPod With Kohya SS GUI Trainer & Use LoRAs With Automatic1111 UI

50.) Cloud - Kaggle - Free
How to Do SDXL Training For FREE with Kohya LoRA - Kaggle - NO GPU Required - Pwns Google Colab

51.) Cloud - Kaggle - Free
How Use Stable Diffusion, SDXL, ControlNet, LoRAs For FREE Without A GPU On Kaggle Like Google Colab

52.) Windows - Free
Turn Videos Into Animation With Just 1 Click - ReRender A Video Tutorial - Installer For Windows

53.) RunPod - Cloud - Paid
Turn Videos Into Animation / 3D Just 1 Click - ReRender A Video Tutorial - Installer For RunPod

54.) Local - PC - Free
Double Your Stable Diffusion Inference Speed with RTX Acceleration TensorRT: A Comprehensive Guide

55.) RunPod - Cloud - Paid
How to Install & Run TensorRT on RunPod, Unix, Linux for 2x Faster Stable Diffusion Inference Speed

56.) Local - PC - Free
SOTA Image PreProcessing Scripts For Stable Diffusion Training - Auto Subject Crop & Face Focus

57.) Local - PC - Free
Fooocus Stable Diffusion Web UI - Use SDXL Like You Are Using Midjourney - Easy To Use High Quality

58.) Cloud - Kaggle (Cloud) - Free
How To Do Stable Diffusion XL (SDXL) DreamBooth Training For Free - Utilizing Kaggle - Easy Tutorial

59.) Free - Local - RunPod (Cloud)
PIXART-α : First Open Source Rival to Midjourney - Better Than Stable Diffusion SDXL - Full Tutorial

60.) Free - Local - PC
Essential AI Tools and Libraries: A Guide to Python, Git, C++ Compile Tools, FFmpeg, CUDA, PyTorch

61.) Free - Local - PC & RunPod (Cloud)
MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model - Full Tutorial

62.) Free - Local - PC - RunPod (Cloud) - Kaggle (Cloud)
Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle

63.) Free - Local - PC - RunPod (Cloud) - Kaggle (Cloud)
Detailed Comparison of 160+ Best Stable Diffusion 1.5 Custom Models & 1 Click Script to Download All

64.) Free - Local - PC - RunPod (Cloud)
SUPIR: New SOTA Open Source Image Upscaler & Enhancer Model Better Than Magnific & Topaz AI Tutorial

65.) Free - Local - PC - Massed Compute (Cloud)
Full Stable Diffusion SD & XL Fine Tuning Tutorial With OneTrainer On Windows & Cloud - Zero To Hero

66.) Free - Local - PC - Cloud - Extension
Improve Stable Diffusion Prompt Following & Image Quality Significantly With Incantations Extension

stable-diffusion's People

Contributors

furkangozukara

stable-diffusion's Issues

Help me with this!

G:\Deepfake\roop\venv\lib\site-packages\insightface\utils\transform.py:68: FutureWarning: rcond parameter will change to the default of machine precision times max(M, N) where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass rcond=None, to keep using the old, explicitly pass rcond=-1.
P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4
93%|█████████████████████████████████████████████████████████████████████████▋ | 261/280 [00:02<00:00, 203.03it/s]Exception in Tkinter callback
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\Deepfake\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args)
File "G:\Deepfake\roop\run.py", line 266, in <lambda>
start_button = tk.Button(window, text="Start", bg="#f1c40f", relief="flat", borderwidth=0, highlightthickness=0, command=lambda: [save_file(), start()])
File "G:\Deepfake\roop\run.py", line 194, in start
seconds, probabilities = predict_video_frames(video_path=args['target_path'], frame_interval=100)
File "G:\Deepfake\roop\venv\lib\site-packages\opennsfw2\_inference.py", line 178, in predict_video_frames
cv2.destroyAllWindows() # pylint: disable=no-member
cv2.error: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:1266: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvDestroyAllWindows'

100%|███████████████████████████████████████████████████████████████████████████████| 280/280 [00:20<00:00, 203.03it/s]
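The FutureWarning at the top of this log is separate from the crash: insightface calls np.linalg.lstsq without an rcond argument. The fix the warning itself suggests, sketched with hypothetical point data (the real call lives inside insightface's transform.py):

```python
import numpy as np

# Hypothetical source points in homogeneous form and their target positions.
X_homo = np.array([[0.0, 0.0, 1.0],
                   [1.0, 0.0, 1.0],
                   [0.0, 1.0, 1.0]])
Y = np.array([[1.0, 2.0],
              [2.0, 2.0],
              [1.0, 3.0]])

# Passing rcond=None opts in to the new default and silences the FutureWarning.
P = np.linalg.lstsq(X_homo, Y, rcond=None)[0].T  # Affine matrix, 2 x 3
```

The cv2.destroyAllWindows error, by contrast, usually means the installed OpenCV build has no GUI backend (e.g. the headless wheel); installing the regular opencv-python package, or running in an environment with a display, typically resolves it.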

Could not load torch: cuDNN version incompatibility

HELP PLEASE

root@a192be7bc1bc:/workspace/kohya_ss# bash gui.sh --share
16:25:19-866359 INFO nVidia toolkit detected
16:25:20-583053 INFO Torch 2.0.1+cu118
16:25:20-605249 ERROR Could not load torch: cuDNN version incompatibility: PyTorch was compiled against (8, 7, 0) but found runtime version (8, 5, 0). PyTorch already comes bundled with cuDNN. One option to resolving this error is to ensure PyTorch can find the bundled cuDNN. One possibility is that there is a conflicting cuDNN in LD_LIBRARY_PATH.
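As the error message itself suggests, a standalone cuDNN on LD_LIBRARY_PATH can shadow the cuDNN bundled with PyTorch. One way to drop such entries before launching the GUI, as a sketch that assumes the conflict really is a stray path entry (the example path is hypothetical):

```python
import os

def strip_cudnn_entries(ld_library_path):
    """Remove path entries that look like standalone cuDNN installs."""
    entries = [p for p in ld_library_path.split(":") if p]
    kept = [p for p in entries if "cudnn" not in p.lower()]
    return ":".join(kept)

# Example with a hypothetical conflicting entry:
cleaned = strip_cudnn_entries("/usr/local/cuda/lib64:/opt/cudnn-8.5/lib:/usr/lib")
# os.environ["LD_LIBRARY_PATH"] = cleaned  # apply before importing torch
```

Reinstalling a PyTorch build that matches the container's CUDA stack is the other common fix.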

It stops after this line and I do not know what the problem is (Google Colab)

After running the second command, it just stops after a while.

!python run.py -f "pic.jpg" -t "155985_720p.mp4" -o "face_changed_video1.mp4" --keep-frames --keep-fps --gpu-vendor nvidia
2023-06-10 17:09:40.464752: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-10 17:09:41.892312: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-06-10 17:09:44.620582: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:44.622707: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:44.622898: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
/usr/local/lib/python3.10/dist-packages/insightface/utils/transform.py:68: FutureWarning: rcond parameter will change to the default of machine precision times max(M, N) where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass rcond=None, to keep using the old, explicitly pass rcond=-1.
P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4
2023-06-10 17:09:52.340637: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:52.340949: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:52.341169: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:52.341565: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:52.341777: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:52.341965: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:52.342163: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 11602 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5
0% 0/323 [00:00<?, ?it/s]2023-06-10 17:09:55.701166: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:424] Loaded cuDNN version 8700
99% 320/323 [00:04<00:00, 68.51it/s]

Then it just stops and I do not know how to fix this. Can someone help?

customtkinter - please help me with this

(venv) D:\deepfake\roop>python run.py --keep-frames --keep-fps --execution-provider cuda
Traceback (most recent call last):
File "D:\deepfake\roop\run.py", line 3, in <module>
from roop import core
File "D:\deepfake\roop\roop\core.py", line 22, in <module>
import roop.ui as ui
File "D:\deepfake\roop\roop\ui.py", line 3, in <module>
import customtkinter as ctk
ModuleNotFoundError: No module named 'customtkinter'
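A ModuleNotFoundError here (and in the similar onnxruntime and opennsfw2 reports in this issue list) means the package is missing from the active venv; running pip install customtkinter inside that venv is the usual fix. A small illustrative helper (not part of roop) that turns the failed import into the install command:

```python
import importlib

def import_or_pip_hint(module_name, pip_name=None):
    """Import a module, or return the pip command that would install it."""
    try:
        importlib.import_module(module_name)
        return None  # import succeeded, nothing to install
    except ModuleNotFoundError:
        return f"pip install {pip_name or module_name}"

# e.g. roop's UI depends on customtkinter:
hint = import_or_pip_hint("customtkinter")
if hint:
    print(f"Run inside the venv: {hint}")
```

Make sure the venv is activated first, otherwise pip installs into the wrong interpreter.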

I cannot start roop

(venv) E:\Deepface\roop>python run.py --keep-frames --keep-fps --core 1
Traceback (most recent call last):
File "E:\Deepface\roop\run.py", line 3, in <module>
from roop import core
File "E:\Deepface\roop\roop\core.py", line 16, in <module>
import onnxruntime
ModuleNotFoundError: No module named 'onnxruntime'

Voice Cloning Training stops unexpectedly

I transcribed an audio file using the provided code and then used the CLI command to emulate the desktop settings. All my yml file settings are exactly the same. However, the training appears to start but then stops unexpectedly without saving any model or throwing any error. Attaching the end of the terminal output and the log file.


Kohya LoRA - Kaggle

When I want to enter the GUI through ngrok, the screen displays "ERR_NGROK_8012": traffic was successfully tunneled to the ngrok agent, but the agent failed to establish a connection to the upstream web service at localhost:7860. The error encountered was: dial tcp 127.0.0.1:7860: connect: connection refused
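ERR_NGROK_8012 means ngrok itself is fine but nothing is listening on localhost:7860, i.e. the Kohya GUI never came up or bound a different port. A quick stdlib check, assuming 7860 (the default Gradio port) is the port you expect:

```python
import socket

def is_listening(host="127.0.0.1", port=7860, timeout=1.0):
    """Return True if a TCP server accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Start (or point) ngrok only once this prints True:
print(is_listening())
```

If it prints False, check the Kohya terminal for the actual port or a startup error before touching the ngrok side.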

I get an error with onnxruntime

Processing: 1%|▌ | 3/342 [00:02<05:42, 1.01s/frame]2023-06-07 11:20:47.2553254 [E:onnxruntime:, sequential_executor.cc:514 onnxruntime::ExecuteKernel] Non-zero status code returned while running FusedConv node. Name:'conv_7_conv2d' Status Message: D:\a\_work\1\s\onnxruntime\core\framework\bfc_arena.cc:368 onnxruntime::BFCArena::AllocateRawInternal Failed to allocate memory for requested buffer of size 134217728

[ONNXRuntimeError] : 1 : FAIL : D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUBLAS failure 3: CUBLAS_STATUS_ALLOC_FAILED ; GPU=0 ; hostname=DESKTOP-MK381CK ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_stream_handle.cc ; line=50 ; expr=cublasCreate(&cublas_handle_);
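CUBLAS_STATUS_ALLOC_FAILED at this point is almost always GPU memory exhaustion. The log above already shows the CUDA provider options onnxruntime accepts; capping gpu_mem_limit instead of the "unlimited" default, with a CPU fallback, is a common workaround. A sketch of the provider configuration only (the 2 GiB cap is an illustrative value, not a recommendation):

```python
# Cap the CUDA memory arena at 2 GiB instead of the default
# 18446744073709551615 visible in the log above.
gpu_mem_limit = 2 * 1024 ** 3

providers = [
    ("CUDAExecutionProvider", {
        "device_id": "0",
        "gpu_mem_limit": str(gpu_mem_limit),
        "arena_extend_strategy": "kSameAsRequested",  # grow only as requested
    }),
    "CPUExecutionProvider",  # fallback if CUDA allocation still fails
]
# ort.InferenceSession(model_path, providers=providers)  # requires onnxruntime
```

On cards with little VRAM, forcing the CPU provider entirely (or closing other GPU processes) may be the only reliable option.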

no output video

Hi Furkan,
many thanks for your tutorial.
It pretty much worked 3 times, but then I started to receive this message. I already tried reconnecting and renaming the files.

%cd "/content/roop"
!python run.py -s "image (3).png" -t "842.mp4" -o "face_v1.mp4" --keep-frames --keep-fps --temp-frame-quality 1 --output-video-quality 1 --execution-provider cuda

/content/roop
Downloading: 529MB [00:02, 220MB/s]
download_path: /root/.insightface/models/buffalo_l
Downloading /root/.insightface/models/buffalo_l.zip from https://github.com/deepinsight/insightface/releases/download/v0.7/buffalo_l.zip...
100% 281857/281857 [00:05<00:00, 53199.74KB/s]
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/.insightface/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/.insightface/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/.insightface/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/.insightface/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/.insightface/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
Pre-trained weights will be downloaded.
Downloading...
From: https://github.com/bhky/opennsfw2/releases/download/v0.1.0/open_nsfw_weights.h5
To: /root/.opennsfw2/weights/open_nsfw_weights.h5
100% 24.2M/24.2M [00:00<00:00, 69.8MB/s]
100% 221/222 [00:02<00:00, 73.73it/s]

After this nothing happens

[WARNING] Please select an image containing a face.

I have added a proper image and video, but I still get this error.
I am using a Colab notebook, by the way.

2023-06-06 05:48:31.242736: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-06 05:48:32.168744: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-06-06 05:48:35.492152: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-06 05:48:35.494989: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-06 05:48:35.496282: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355

[WARNING] Please select an image containing a face.

CudaCall CUBLAS failure

Hi, I followed your tutorial and got everything set up. I am running on Windows 11. When running python run.py it opens the roop interface, and on trying to swap faces it shows this error code. I don't know if it's related to low VRAM or something else. I am running an AMD Ryzen 5 series CPU with an NVIDIA 1650 4GB. Please let me know if anything else can be done. Thank you for your tutorials.

onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUBLAS failure 3: CUBLAS_STATUS_ALLOC_FAILED ; GPU=0 ; hostname=GRA5ITER ; file=D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=164 ; expr=cublasCreate(&cublas_handle_);

FileNotFound

FileNotFoundError: [Errno 2] No such file or directory:
'/root/.cache/huggingface/hub/models--stabilityai--stable-diffusion-xl-base-0.9/
refs/main'
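A missing refs/main file means the Hugging Face cache for that model is incomplete, often from an interrupted download. Deleting just that model's cache directory forces a clean re-download on the next run; a sketch using only the stdlib, with the directory name taken from the error above:

```python
import shutil
from pathlib import Path

# The cached repo named in the error message.
cache_dir = (Path.home() / ".cache/huggingface/hub"
             / "models--stabilityai--stable-diffusion-xl-base-0.9")

if cache_dir.exists():
    shutil.rmtree(cache_dir)  # next run re-downloads the model from scratch
    print(f"Removed {cache_dir}")
else:
    print("Cache directory not present; nothing to remove")
```

SDXL 0.9 was a gated release, so also make sure your Hugging Face token has been granted access, otherwise the re-download will fail again.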

Can't Swap face in video.

Hello, I have been using Colab for the past few months and it worked smoothly until today. When I want to change the face it gives me this error; can you fix this?

/content/roop
Traceback (most recent call last):
File "/content/roop/run.py", line 3, in <module>
from roop import core
File "/content/roop/roop/core.py", line 20, in <module>
import roop.ui as ui
File "/content/roop/roop/ui.py", line 15, in <module>
from roop.predictor import predict_frame, clear_predictor
File "/content/roop/roop/predictor.py", line 3, in <module>
import opennsfw2
File "/usr/local/lib/python3.10/dist-packages/opennsfw2/__init__.py", line 4, in <module>
from ._inference import Aggregation
File "/usr/local/lib/python3.10/dist-packages/opennsfw2/_inference.py", line 12, in <module>
from keras import KerasTensor, Model  # type: ignore
ImportError: cannot import name 'KerasTensor' from 'keras' (/usr/local/lib/python3.10/dist-packages/keras/__init__.py)
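This import failure comes from a keras release newer than the one opennsfw2 was written against (`KerasTensor` moved between releases, and Colab upgrades its preinstalled packages silently). A small pure-Python sketch of the version check; both version strings below are illustrative assumptions — check roop's requirements for the real pin.

```python
# Compare the installed keras version against the newest one opennsfw2 is
# known to work with. The two version strings are assumptions for illustration.
def version_tuple(v):
    # "2.14.0" -> (2, 14)
    return tuple(int(p) for p in v.split(".")[:2])

installed = "2.14.0"    # from: python -c "import keras; print(keras.__version__)"
known_good = "2.12.0"   # the TF/keras pair of the tutorial era

if version_tuple(installed) > version_tuple(known_good):
    print('downgrade: pip install "tensorflow==2.12.*"')
```

Pinning tensorflow pulls in the matching keras, which is usually enough to make the opennsfw2 import work again.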

AttributeError: module 'tensorflow' has no attribute 'keras'

Hi FurkanGozukara, I'm a big fan of your YT channel and a subscriber.
Above all, I can't thank you enough for creating this app.

But I can't run this app, because if I type "python run.py --keep-frames --keep-fps --max-cores 1" or "python run.py --keep-frames --keep-fps --gpu-vendor nvidia"

I get this message: "AttributeError: module 'tensorflow' has no attribute 'keras'"

So I ran "pip install --upgrade pip" and "pip install tensorflow==2.12.*"

but it didn't work, so I even repeated "Step 7: Activate venv once again", but that didn't work either.

(screenshot attached)

How can I fix this?

(screenshot attached)

I would really appreciate a reply.

No module named 'opennsfw2'

Getting this error when trying to execute
(venv) D:\Deepface\roop>python run.py
Traceback (most recent call last):
File "run.py", line 3, in <module>
from roop import core
File "D:\Deepface\roop\roop\core.py", line 15, in <module>
from opennsfw2 import predict_video_frames, predict_image
ModuleNotFoundError: No module named 'opennsfw2'
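A `ModuleNotFoundError` right after an apparently successful install usually means the package went into a different interpreter than the one running `run.py` (e.g. the venv was not active during `pip install -r requirements.txt`). This diagnostic sketch prints which Python is active and whether each module is visible to it; the module names are just the ones roop imports at startup.

```python
# Print the active interpreter and check which of roop's imports it can see.
import importlib.util
import sys

print("interpreter:", sys.executable)
for name in ("opennsfw2", "torch"):
    found = importlib.util.find_spec(name) is not None
    status = ("installed" if found
              else "MISSING - activate the venv, then pip install -r requirements.txt")
    print(f"{name}: {status}")
```

If the printed interpreter path is not inside the venv, activate the venv first and reinstall the requirements there.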

How do I fix it?

pip install -r requirements.txt fails

When I run the command "pip install -r requirements.txt",
my system appears to hang at
"Installing backend dependencies ... \"

Can you please help or advise?

OSError: [WinError 127] The specified program could not be found. Error loading "D:\deepface\roop\venv\lib\site-packages\torch\lib\torch_cuda_cpp.dll" or one of its dependencies.

Please help, I have this problem:

(venv) D:\deepface\roop>python run.py --keep-frames --keep-fps --gpu-vendor nvidia
Traceback (most recent call last):
File "D:\deepface\roop\run.py", line 3, in <module>
from roop import core
File "D:\deepface\roop\roop\core.py", line 14, in <module>
import torch
File "D:\deepface\roop\venv\lib\site-packages\torch\__init__.py", line 122, in <module>
raise err
OSError: [WinError 127] The specified program could not be found. Error loading "D:\deepface\roop\venv\lib\site-packages\torch\lib\torch_cuda_cpp.dll" or one of its dependencies.
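A WinError 127 on `torch_cuda_cpp.dll` usually means the installed torch wheel does not match the machine's CUDA setup, or the install is corrupted (a missing Visual C++ redistributable can also cause DLL load failures). A common remedy, assuming an NVIDIA card and CUDA 11.8-era drivers, is a clean reinstall inside the activated venv:

```shell
# 1) remove the possibly mismatched wheels from the venv:
pip uninstall -y torch torchvision torchaudio
# 2) reinstall the CUDA 11.8 builds (large download):
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

If the error persists afterwards, installing the latest Microsoft Visual C++ Redistributable is worth trying before anything else.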

AttributeError: module 'torch.nn.utils.parametrizations' has no attribute 'weight_norm'

When trying to run tortoise-tts-fast, I receive this error:

(venv) E:\X\Voice Training\tortoise-tts-fast>python "E:\X\Voice Training\tortoise-tts-fast\scripts\tortoise_tts.py" --preset high_quality --ar_checkpoint "E:\X\Voice Training\DL-Art-School\experiments\Matthew_VC\models\875_gpt.pth" "Hello. Can you hear me? Is this thing on?."
Traceback (most recent call last):
File "E:\X\Voice Training\tortoise-tts-fast\scripts\tortoise_tts.py", line 240, in <module>
from tortoise.inference import (
File "E:\X\Voice Training\tortoise-tts-fast\tortoise\inference.py", line 167, in <module>
vfixer = VoiceFixer()
File "E:\X\Voice Training\tortoise-tts-fast\venv\lib\site-packages\voicefixer\base.py", line 13, in __init__
self._model = voicefixer_fe(channels=2, sample_rate=44100)
File "E:\X\Voice Training\tortoise-tts-fast\venv\lib\site-packages\voicefixer\restorer\model.py", line 180, in __init__
self.vocoder = Vocoder(sample_rate=44100)
File "E:\X\Voice Training\tortoise-tts-fast\venv\lib\site-packages\voicefixer\vocoder\base.py", line 19, in __init__
self._load_pretrain(Config.ckpt)
File "E:\X\Voice Training\tortoise-tts-fast\venv\lib\site-packages\voicefixer\vocoder\base.py", line 25, in _load_pretrain
self.model = Generator(Config.cin_channels)
File "E:\X\Voice Training\tortoise-tts-fast\venv\lib\site-packages\voicefixer\vocoder\model\generator.py", line 34, in __init__
nn.utils.parametrizations.weight_norm(
AttributeError: module 'torch.nn.utils.parametrizations' has no attribute 'weight_norm'
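`torch.nn.utils.parametrizations.weight_norm` only exists in newer PyTorch (added around the 2.1 release); older wheels expose only the deprecated `torch.nn.utils.weight_norm`, so voicefixer's call fails on them. A pure-Python sketch of the version check (no torch import needed); the installed version string is an illustrative assumption.

```python
# Check whether the installed torch predates parametrizations.weight_norm.
def version_tuple(v):
    # "2.0.1+cu118" -> (2, 0, 1)
    return tuple(int(p) for p in v.split("+")[0].split(".")[:3])

installed = "2.0.1+cu118"   # substitute: python -c "import torch; print(torch.__version__)"
if version_tuple(installed) < (2, 1, 0):
    print("upgrade torch, or patch the call site to torch.nn.utils.weight_norm")
```

Upgrading torch inside the venv is the less invasive fix; patching the library is the fallback when the torch version is pinned by something else.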

ERROR: Could not find a version that satisfies the requirement xformers==0.0.21.dev564

The problem started to occur today:
ERROR: Could not find a version that satisfies the requirement xformers==0.0.21.dev564 (from versions: 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.16rc424, 0.0.16rc425, 0.0.16, 0.0.17rc481, 0.0.17rc482, 0.0.17, 0.0.18, 0.0.19, 0.0.20, 0.0.21.dev569, 0.0.21.dev571)
ERROR: No matching distribution found for xformers==0.0.21.dev564
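Pre-release wheels such as `0.0.21.dev564` are routinely deleted from PyPI, which is why a pin that worked yesterday fails today. Given the versions pip still lists as available, the fix is to repin to the closest newer build. A small sketch of that selection, using the version list from the error message above:

```python
import re

# Versions pip reported as still available, and the stale pin from requirements.
available = ["0.0.20", "0.0.21.dev569", "0.0.21.dev571"]
pinned = "0.0.21.dev564"

def key(v):
    # "0.0.21.dev569" -> [0, 0, 21, 569] for ordering
    return [int(x) for x in re.findall(r"\d+", v)]

candidates = [v for v in available if key(v) >= key(pinned)]
print("repin to:", min(candidates, key=key))  # -> repin to: 0.0.21.dev569
```

In practice, editing the requirement to `xformers==0.0.21.dev569` (or simply the latest stable `xformers`) unblocks the install.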

All Hugging Face model permissions are granted.

This problem has been troubling me since this week. Have there been any new updates?

2023-06-10 20:20:37.394766: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/kohya_ss/train_network.py:17 in │
│ │
│ 14 from accelerate.utils import set_seed │
│ 15 from diffusers import DDPMScheduler │
│ 16 │
│ ❱ 17 import library.train_util as train_util │
│ 18 from library.train_util import ( │
│ 19 │ DreamBoothDataset, │
│ 20 ) │
│ │
│ /home/kohya_ss/library/train_util.py:56 in │
│ │
│ 53 │ KDPM2AncestralDiscreteScheduler, │
│ 54 ) │
│ 55 from huggingface_hub import hf_hub_download │
│ ❱ 56 import albumentations as albu │
│ 57 import numpy as np │
│ 58 from PIL import Image │
│ 59 import cv2 │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/albumentations/init.py:5 in │
│ │
│ 2 │
│ 3 version = "1.3.0" │
│ 4 │
│ ❱ 5 from .augmentations import * │
│ 6 from .core.composition import * │
│ 7 from .core.serialization import * │
│ 8 from .core.transforms_interface import * │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/albumentations/augmentations/init.py:2 in │
│ │
│ │
│ 1 # Common classes │
│ ❱ 2 from .blur.functional import * │
│ 3 from .blur.transforms import * │
│ 4 from .crops.functional import * │
│ 5 from .crops.transforms import * │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/albumentations/augmentations/blur/init.py:1 │
│ in │
│ │
│ ❱ 1 from .functional import * │
│ 2 from .transforms import * │
│ 3 │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/albumentations/augmentations/blur/functional.py │
│ :5 in │
│ │
│ 2 from math import ceil │
│ 3 from typing import Sequence, Union │
│ 4 │
│ ❱ 5 import cv2 │
│ 6 import numpy as np │
│ 7 │
│ 8 from albumentations.augmentations.functional import convolve │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/cv2/init.py:181 in │
│ │
│ 178 │ if DEBUG: print('OpenCV loader: DONE') │
│ 179 │
│ 180 │
│ ❱ 181 bootstrap() │
│ 182 │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/cv2/init.py:153 in bootstrap │
│ │
│ 150 │ │
│ 151 │ py_module = sys.modules.pop("cv2") │
│ 152 │ │
│ ❱ 153 │ native_module = importlib.import_module("cv2") │
│ 154 │ │
│ 155 │ sys.modules["cv2"] = py_module │
│ 156 │ setattr(py_module, "_native", native_module) │
│ │
│ /opt/conda/lib/python3.10/importlib/init.py:126 in import_module │
│ │
│ 123 │ │ │ if character != '.': │
│ 124 │ │ │ │ break │
│ 125 │ │ │ level += 1 │
│ ❱ 126 │ return _bootstrap._gcd_import(name[level:], package, level) │
│ 127 │
│ 128 │
│ 129 _RELOADING = {} │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/kohya_ss/venv/bin/accelerate:8 in │
│ │
│ 5 from accelerate.commands.accelerate_cli import main │
│ 6 if name == 'main': │
│ 7 │ sys.argv[0] = re.sub(r'(-script.pyw|.exe)?$', '', sys.argv[0]) │
│ ❱ 8 │ sys.exit(main()) │
│ 9 │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py:45 in │
│ main │
│ │
│ 42 │ │ exit(1) │
│ 43 │ │
│ 44 │ # Run │
│ ❱ 45 │ args.func(args) │
│ 46 │
│ 47 │
│ 48 if name == "main": │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/launch.py:918 in │
│ launch_command │
│ │
│ 915 │ elif defaults is not None and defaults.compute_environment == ComputeEnvironment.AMA │
│ 916 │ │ sagemaker_launcher(defaults, args) │
│ 917 │ else: │
│ ❱ 918 │ │ simple_launcher(args) │
│ 919 │
│ 920 │
│ 921 def main(): │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/launch.py:580 in │
│ simple_launcher │
│ │
│ 577 │ process.wait() │
│ 578 │ if process.returncode != 0: │
│ 579 │ │ if not args.quiet: │
│ ❱ 580 │ │ │ raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd) │
│ 581 │ │ else: │
│ 582 │ │ │ sys.exit(1) │
│ 583 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
CalledProcessError: Command '['/home/kohya_ss/venv/bin/python', 'train_network.py', '--enable_bucket',
'--pretrained_model_name_or_path=runwayml/stable-diffusion-v1-5', '--train_data_dir=/home/chimu/img',
'--reg_data_dir=/home/chimu/reg', '--resolution=768,768', '--output_dir=/home/chimu/model', '--logging_dir=/home/chimu/log',
'--network_alpha=1', '--save_model_as=safetensors', '--network_module=networks.lora', '--text_encoder_lr=5e-05',
'--unet_lr=0.0001', '--network_dim=128', '--output_name=test1', '--lr_scheduler_num_cycles=10', '--learning_rate=0.0001',
'--lr_scheduler=cosine', '--train_batch_size=1', '--max_train_steps=30400', '--save_every_n_epochs=1', '--mixed_precision=fp16',
'--save_precision=fp16', '--seed=1234', '--caption_extension=.txt', '--cache_latents', '--optimizer_type=AdamW8bit',
'--max_data_loader_n_workers=0', '--bucket_reso_steps=64', '--xformers', '--bucket_no_upscale']' returned non-zero exit status

ModuleNotFoundError: No module named 'gradio'

Hi, thanks for the steps and the tutorial. I followed the instructions on GitHub and YouTube, but I got the same error with both methods:

ModuleNotFoundError: No module named 'gradio'

Here is the text from the Command Prompt, after I followed the instructions on GitHub:

Microsoft Windows [Version 10.0.22621.2283]
(c) Microsoft Corporation. All rights reserved.

D:\Installers>git clone https://github.com/facebookresearch/audiocraft
Cloning into 'audiocraft'...
remote: Enumerating objects: 953, done.
remote: Counting objects: 100% (237/237), done.
remote: Compressing objects: 100% (122/122), done.
remote: Total 953 (delta 153), reused 143 (delta 114), pack-reused 716Receiving objects:  99% (944/953), 848.00 KiB | 1.64 MiB/s
Receiving objects: 100% (953/953), 1.74 MiB | 2.27 MiB/s, done.
Resolving deltas: 100% (480/480), done.

D:\Installers>cd audiocraft

D:\Installers\audiocraft>python -m venv venv

D:\Installers\audiocraft>cd venv

D:\Installers\audiocraft\venv>cd scripts

D:\Installers\audiocraft\venv\Scripts>activate

(venv) D:\Installers\audiocraft\venv\Scripts>cd ..

(venv) D:\Installers\audiocraft\venv>cd ..

(venv) D:\Installers\audiocraft>pip install -e .
Obtaining file:///D:/Installers/audiocraft
  Preparing metadata (setup.py) ... done
Collecting av
  Using cached av-10.0.0-cp310-cp310-win_amd64.whl (25.3 MB)
Collecting einops
  Using cached einops-0.7.0-py3-none-any.whl (44 kB)
Collecting flashy>=0.0.1
  Using cached flashy-0.0.2.tar.gz (72 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting hydra-core>=1.1
  Using cached hydra_core-1.3.2-py3-none-any.whl (154 kB)
Collecting hydra_colorlog
  Using cached hydra_colorlog-1.2.0-py3-none-any.whl (3.6 kB)
Collecting julius
  Using cached julius-0.2.7.tar.gz (59 kB)
  Preparing metadata (setup.py) ... done
Collecting num2words
  Using cached num2words-0.5.12-py3-none-any.whl (125 kB)
Collecting numpy
  Using cached numpy-1.26.0-cp310-cp310-win_amd64.whl (15.8 MB)
Collecting sentencepiece
  Using cached sentencepiece-0.1.99-cp310-cp310-win_amd64.whl (977 kB)
Collecting spacy==3.5.2
  Using cached spacy-3.5.2-cp310-cp310-win_amd64.whl (12.2 MB)
Collecting torch>=2.0.0
  Downloading torch-2.1.0-cp310-cp310-win_amd64.whl (192.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 192.3/192.3 MB 4.9 MB/s eta 0:00:00
Collecting torchaudio>=2.0.0
  Downloading torchaudio-2.1.0-cp310-cp310-win_amd64.whl (2.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 6.4 MB/s eta 0:00:00
Collecting huggingface_hub
  Using cached huggingface_hub-0.17.3-py3-none-any.whl (295 kB)
Collecting tqdm
  Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Collecting transformers>=4.31.0
  Using cached transformers-4.34.0-py3-none-any.whl (7.7 MB)
Collecting xformers
  Using cached xformers-0.0.22-cp310-cp310-win_amd64.whl (97.6 MB)
Collecting demucs
  Using cached demucs-4.0.1.tar.gz (1.2 MB)
  Preparing metadata (setup.py) ... done
Collecting librosa
  Using cached librosa-0.10.1-py3-none-any.whl (253 kB)
Collecting gradio
  Using cached gradio-3.47.1-py3-none-any.whl (20.3 MB)
Collecting torchmetrics
  Using cached torchmetrics-1.2.0-py3-none-any.whl (805 kB)
Collecting encodec
  Using cached encodec-0.1.1.tar.gz (3.7 MB)
  Preparing metadata (setup.py) ... done
Collecting protobuf
  Using cached protobuf-4.24.4-cp310-abi3-win_amd64.whl (430 kB)
Collecting jinja2
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting pathy>=0.10.0
  Using cached pathy-0.10.2-py3-none-any.whl (48 kB)
Collecting packaging>=20.0
  Using cached packaging-23.2-py3-none-any.whl (53 kB)
Collecting langcodes<4.0.0,>=3.2.0
  Using cached langcodes-3.3.0-py3-none-any.whl (181 kB)
Collecting cymem<2.1.0,>=2.0.2
  Using cached cymem-2.0.8-cp310-cp310-win_amd64.whl (39 kB)
Requirement already satisfied: setuptools in d:\installers\audiocraft\venv\lib\site-packages (from spacy==3.5.2->audiocraft==1.0.0) (65.5.0)
Collecting srsly<3.0.0,>=2.4.3
  Using cached srsly-2.4.8-cp310-cp310-win_amd64.whl (481 kB)
Collecting wasabi<1.2.0,>=0.9.1
  Using cached wasabi-1.1.2-py3-none-any.whl (27 kB)
Collecting murmurhash<1.1.0,>=0.28.0
  Using cached murmurhash-1.0.10-cp310-cp310-win_amd64.whl (25 kB)
Collecting smart-open<7.0.0,>=5.2.1
  Using cached smart_open-6.4.0-py3-none-any.whl (57 kB)
Collecting catalogue<2.1.0,>=2.0.6
  Using cached catalogue-2.0.10-py3-none-any.whl (17 kB)
Collecting spacy-loggers<2.0.0,>=1.0.0
  Using cached spacy_loggers-1.0.5-py3-none-any.whl (22 kB)
Collecting requests<3.0.0,>=2.13.0
  Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting pydantic!=1.8,!=1.8.1,<1.11.0,>=1.7.4
  Using cached pydantic-1.10.13-cp310-cp310-win_amd64.whl (2.1 MB)
Collecting preshed<3.1.0,>=3.0.2
  Using cached preshed-3.0.9-cp310-cp310-win_amd64.whl (122 kB)
Collecting thinc<8.2.0,>=8.1.8
  Using cached thinc-8.1.12-cp310-cp310-win_amd64.whl (1.5 MB)
Collecting spacy-legacy<3.1.0,>=3.0.11
  Using cached spacy_legacy-3.0.12-py2.py3-none-any.whl (29 kB)
Collecting typer<0.8.0,>=0.3.0
  Using cached typer-0.7.0-py3-none-any.whl (38 kB)
Collecting dora-search
  Using cached dora_search-0.1.12.tar.gz (87 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting colorlog
  Using cached colorlog-6.7.0-py2.py3-none-any.whl (11 kB)
Collecting antlr4-python3-runtime==4.9.*
  Using cached antlr4-python3-runtime-4.9.3.tar.gz (117 kB)
  Preparing metadata (setup.py) ... done
Collecting omegaconf<2.4,>=2.2
  Using cached omegaconf-2.3.0-py3-none-any.whl (79 kB)
Collecting filelock
  Using cached filelock-3.12.4-py3-none-any.whl (11 kB)
Collecting sympy
  Downloading sympy-1.12-py3-none-any.whl (5.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.7/5.7 MB 6.4 MB/s eta 0:00:00
Collecting typing-extensions
  Using cached typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Collecting networkx
  Downloading networkx-3.1-py3-none-any.whl (2.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 6.0 MB/s eta 0:00:00
Collecting fsspec
  Using cached fsspec-2023.9.2-py3-none-any.whl (173 kB)
Collecting colorama
  Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting pyyaml>=5.1
  Using cached PyYAML-6.0.1-cp310-cp310-win_amd64.whl (145 kB)
Collecting regex!=2019.12.17
  Using cached regex-2023.10.3-cp310-cp310-win_amd64.whl (269 kB)
Collecting tokenizers<0.15,>=0.14
  Using cached tokenizers-0.14.1-cp310-none-win_amd64.whl (2.2 MB)
Collecting safetensors>=0.3.1
  Using cached safetensors-0.4.0-cp310-none-win_amd64.whl (277 kB)
Collecting lameenc>=1.2
  Using cached lameenc-1.6.1-cp310-cp310-win_amd64.whl (148 kB)
Collecting openunmix
  Using cached openunmix-1.2.1-py3-none-any.whl (46 kB)
Collecting matplotlib~=3.0
  Using cached matplotlib-3.8.0-cp310-cp310-win_amd64.whl (7.6 MB)
Collecting altair<6.0,>=4.2.0
  Using cached altair-5.1.2-py3-none-any.whl (516 kB)
Collecting uvicorn>=0.14.0
  Using cached uvicorn-0.23.2-py3-none-any.whl (59 kB)
Collecting aiofiles<24.0,>=22.0
  Using cached aiofiles-23.2.1-py3-none-any.whl (15 kB)
Collecting semantic-version~=2.0
  Using cached semantic_version-2.10.0-py2.py3-none-any.whl (15 kB)
Collecting pandas<3.0,>=1.0
  Using cached pandas-2.1.1-cp310-cp310-win_amd64.whl (10.7 MB)
Collecting ffmpy
  Using cached ffmpy-0.3.1.tar.gz (5.5 kB)
  Preparing metadata (setup.py) ... done
Collecting websockets<12.0,>=10.0
  Using cached websockets-11.0.3-cp310-cp310-win_amd64.whl (124 kB)
Collecting fastapi
  Using cached fastapi-0.103.2-py3-none-any.whl (66 kB)
Collecting python-multipart
  Using cached python_multipart-0.0.6-py3-none-any.whl (45 kB)
Collecting orjson~=3.0
  Using cached orjson-3.9.7-cp310-none-win_amd64.whl (134 kB)
Collecting importlib-resources<7.0,>=1.3
  Using cached importlib_resources-6.1.0-py3-none-any.whl (33 kB)
Collecting pydub
  Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Collecting gradio-client==0.6.0
  Using cached gradio_client-0.6.0-py3-none-any.whl (298 kB)
Collecting markupsafe~=2.0
  Using cached MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl (17 kB)
Collecting pillow<11.0,>=8.0
  Using cached Pillow-10.0.1-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting httpx
  Using cached httpx-0.25.0-py3-none-any.whl (75 kB)
Collecting scikit-learn>=0.20.0
  Using cached scikit_learn-1.3.1-cp310-cp310-win_amd64.whl (9.3 MB)
Collecting lazy-loader>=0.1
  Using cached lazy_loader-0.3-py3-none-any.whl (9.1 kB)
Collecting pooch>=1.0
  Using cached pooch-1.7.0-py3-none-any.whl (60 kB)
Collecting joblib>=0.14
  Using cached joblib-1.3.2-py3-none-any.whl (302 kB)
Collecting numba>=0.51.0
  Using cached numba-0.58.0-cp310-cp310-win_amd64.whl (2.6 MB)
Collecting audioread>=2.1.9
  Using cached audioread-3.0.1-py3-none-any.whl (23 kB)
Collecting scipy>=1.2.0
  Using cached scipy-1.11.3-cp310-cp310-win_amd64.whl (44.1 MB)
Collecting msgpack>=1.0
  Using cached msgpack-1.0.7-cp310-cp310-win_amd64.whl (222 kB)
Collecting soundfile>=0.12.1
  Using cached soundfile-0.12.1-py2.py3-none-win_amd64.whl (1.0 MB)
Collecting decorator>=4.3.0
  Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting soxr>=0.3.2
  Using cached soxr-0.3.7-cp310-cp310-win_amd64.whl (184 kB)
Collecting docopt>=0.6.2
  Using cached docopt-0.6.2.tar.gz (25 kB)
  Preparing metadata (setup.py) ... done
Collecting lightning-utilities>=0.8.0
  Using cached lightning_utilities-0.9.0-py3-none-any.whl (23 kB)
Collecting xformers
  Using cached xformers-0.0.21-cp310-cp310-win_amd64.whl (97.5 MB)
  Using cached xformers-0.0.20-cp310-cp310-win_amd64.whl (97.6 MB)
Collecting pyre-extensions==0.0.29
  Downloading pyre_extensions-0.0.29-py3-none-any.whl (12 kB)
Collecting xformers
  Using cached xformers-0.0.19-cp310-cp310-win_amd64.whl (96.7 MB)
  Downloading xformers-0.0.18-cp310-cp310-win_amd64.whl (112.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 112.3/112.3 MB 5.3 MB/s eta 0:00:00
Collecting pyre-extensions==0.0.23
  Downloading pyre_extensions-0.0.23-py3-none-any.whl (11 kB)
Collecting xformers
  Downloading xformers-0.0.17-cp310-cp310-win_amd64.whl (112.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 112.6/112.6 MB 5.2 MB/s eta 0:00:00
  Downloading xformers-0.0.16-cp310-cp310-win_amd64.whl (40.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 40.0/40.0 MB 6.1 MB/s eta 0:00:00
  Downloading xformers-0.0.13.tar.gz (292 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 292.5/292.5 kB 4.6 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [6 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\bayus\AppData\Local\Temp\pip-install-nz6ycovj\xformers_5e722d04a7d34dd8b3069d58fb05c922\setup.py", line 18, in <module>
          import torch
      ModuleNotFoundError: No module named 'torch'
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: python.exe -m pip install --upgrade pip

(venv) D:\Installers\audiocraft>pip uninstall torch -y
WARNING: Skipping torch as it is not installed.

(venv) D:\Installers\audiocraft>pip uninstall torchvision -y
WARNING: Skipping torchvision as it is not installed.

(venv) D:\Installers\audiocraft>pip uninstall torchaudio -y
WARNING: Skipping torchaudio as it is not installed.

(venv) D:\Installers\audiocraft>pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
Looking in indexes: https://download.pytorch.org/whl/cu118
Collecting torch
  Using cached https://download.pytorch.org/whl/cu118/torch-2.1.0%2Bcu118-cp310-cp310-win_amd64.whl (2722.7 MB)
Collecting torchvision
  Using cached https://download.pytorch.org/whl/cu118/torchvision-0.16.0%2Bcu118-cp310-cp310-win_amd64.whl (5.0 MB)
Collecting torchaudio
  Using cached https://download.pytorch.org/whl/cu118/torchaudio-2.1.0%2Bcu118-cp310-cp310-win_amd64.whl (3.9 MB)
Collecting sympy
  Using cached https://download.pytorch.org/whl/sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting typing-extensions
  Using cached https://download.pytorch.org/whl/typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Collecting fsspec
  Using cached https://download.pytorch.org/whl/fsspec-2023.4.0-py3-none-any.whl (153 kB)
Collecting filelock
  Using cached https://download.pytorch.org/whl/filelock-3.9.0-py3-none-any.whl (9.7 kB)
Collecting networkx
  Using cached https://download.pytorch.org/whl/networkx-3.0-py3-none-any.whl (2.0 MB)
Collecting jinja2
  Using cached https://download.pytorch.org/whl/Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting pillow!=8.3.*,>=5.3.0
  Using cached https://download.pytorch.org/whl/Pillow-9.3.0-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting numpy
  Downloading https://download.pytorch.org/whl/numpy-1.24.1-cp310-cp310-win_amd64.whl (14.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.8/14.8 MB 6.4 MB/s eta 0:00:00
Collecting requests
  Using cached https://download.pytorch.org/whl/requests-2.28.1-py3-none-any.whl (62 kB)
Collecting MarkupSafe>=2.0
  Using cached https://download.pytorch.org/whl/MarkupSafe-2.1.2-cp310-cp310-win_amd64.whl (16 kB)
Collecting urllib3<1.27,>=1.21.1
  Using cached https://download.pytorch.org/whl/urllib3-1.26.13-py2.py3-none-any.whl (140 kB)
Collecting certifi>=2017.4.17
  Using cached https://download.pytorch.org/whl/certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting charset-normalizer<3,>=2
  Using cached https://download.pytorch.org/whl/charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting idna<4,>=2.5
  Using cached https://download.pytorch.org/whl/idna-3.4-py3-none-any.whl (61 kB)
Collecting mpmath>=0.19
  Using cached https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision, torchaudio
Successfully installed MarkupSafe-2.1.2 certifi-2022.12.7 charset-normalizer-2.1.1 filelock-3.9.0 fsspec-2023.4.0 idna-3.4 jinja2-3.1.2 mpmath-1.3.0 networkx-3.0 numpy-1.24.1 pillow-9.3.0 requests-2.28.1 sympy-1.12 torch-2.1.0+cu118 torchaudio-2.1.0+cu118 torchvision-0.16.0+cu118 typing-extensions-4.4.0 urllib3-1.26.13

[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: python.exe -m pip install --upgrade pip

(venv) D:\Installers\audiocraft>pip install -U --pre xformers
Collecting xformers
  Downloading xformers-0.0.23.dev639-cp310-cp310-win_amd64.whl (97.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 97.6/97.6 MB 5.5 MB/s eta 0:00:00
Requirement already satisfied: torch==2.1.0 in d:\installers\audiocraft\venv\lib\site-packages (from xformers) (2.1.0+cu118)
Requirement already satisfied: numpy in d:\installers\audiocraft\venv\lib\site-packages (from xformers) (1.24.1)
Requirement already satisfied: filelock in d:\installers\audiocraft\venv\lib\site-packages (from torch==2.1.0->xformers) (3.9.0)
Requirement already satisfied: networkx in d:\installers\audiocraft\venv\lib\site-packages (from torch==2.1.0->xformers) (3.0)
Requirement already satisfied: fsspec in d:\installers\audiocraft\venv\lib\site-packages (from torch==2.1.0->xformers) (2023.4.0)
Requirement already satisfied: typing-extensions in d:\installers\audiocraft\venv\lib\site-packages (from torch==2.1.0->xformers) (4.4.0)
Requirement already satisfied: sympy in d:\installers\audiocraft\venv\lib\site-packages (from torch==2.1.0->xformers) (1.12)
Requirement already satisfied: jinja2 in d:\installers\audiocraft\venv\lib\site-packages (from torch==2.1.0->xformers) (3.1.2)
Requirement already satisfied: MarkupSafe>=2.0 in d:\installers\audiocraft\venv\lib\site-packages (from jinja2->torch==2.1.0->xformers) (2.1.2)
Requirement already satisfied: mpmath>=0.19 in d:\installers\audiocraft\venv\lib\site-packages (from sympy->torch==2.1.0->xformers) (1.3.0)
Installing collected packages: xformers
Successfully installed xformers-0.0.23.dev639

[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: python.exe -m pip install --upgrade pip

(venv) D:\Installers\audiocraft>python .\demos\musicgen_app.py --inbrowser
Traceback (most recent call last):
  File "D:\Installers\audiocraft\demos\musicgen_app.py", line 21, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'

(venv) D:\Installers\audiocraft>

Finish / Start of AutoInst on Runpod not possible

Hi, and first of all, thank you a lot for your work and explanation!

Worked on RunPod, RTX A6000, like you described in your YouTube auto-install video.

And everything worked fine until I executed:
cd /workspace/stable-diffusion-xl-demo
source venv/bin/activate
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256 SHARE=true ENABLE_REFINER=true python app7.py

Here I got several warnings as feedback (see below, reproduced when I started it again).
Interestingly it only gives me a 127.0.0.1 address to start.

  • When I click the 127.0.0.1 link, there is nothing, as I expected.
  • When I start the web terminal and connect to it, I get only this: "root@38f87f9b1020:/workspace#" on a black page.
  • When I click "Connect to HTTP... Port 3001", I get the 502 page.

Thanks for the support
El


root@38f87f9b1020:/workspace/stable-diffusion-xl-demo# cd /workspace/stable-diffusion-xl-demo
source venv/bin/activate
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256 SHARE=true ENABLE_REFINER=true python app7.py
Loading model /workspace/stable-diffusion-xl-base-0.9
Loading pipeline components...: 100%|████████████████████████████████████████| 7/7 [00:32<00:00, 4.62s/it]
Loading model /workspace/stable-diffusion-xl-refiner-0.9
Loading pipeline components...: 100%|█| 5/5 [00:11<00:00,
/workspace/stable-diffusion-xl-demo/venv/lib/python3.10/site-packages/gradio/deprecation.py:43: UserWarning: You have unused kwarg parameters in Blocks, please remove them: {'timeout': 300}
warnings.warn(
/workspace/stable-diffusion-xl-demo/venv/lib/python3.10/site-packages/gradio/layouts.py:75: UserWarning: mobile_collapse is no longer supported.
warnings.warn("mobile_collapse is no longer supported.")
/workspace/stable-diffusion-xl-demo/venv/lib/python3.10/site-packages/gradio/components.py:198: UserWarning: 'rounded' styling is no longer supported. To round adjacent components together, place them in a Column(variant='box').
warnings.warn(
/workspace/stable-diffusion-xl-demo/venv/lib/python3.10/site-packages/gradio/components.py:216: UserWarning: 'border' styling is no longer supported. To place adjacent components in a shared border, place them in a Column(variant='box').
warnings.warn(
/workspace/stable-diffusion-xl-demo/venv/lib/python3.10/site-packages/gradio/components.py:207: UserWarning: 'margin' styling is no longer supported. To place adjacent components together without margin, place them in a Column(variant='box').
warnings.warn(
/workspace/stable-diffusion-xl-demo/venv/lib/python3.10/site-packages/gradio/deprecation.py:43: UserWarning: You have unused kwarg parameters in Slider, please remove them: {'enabled': True}
warnings.warn(
Running on local URL: http://127.0.0.1:7860

Hello, great teaching creator, would like to see how to train the Lora model of SDxl-1.0 in Colab

Hello, great teaching creator, would like to see how to train the Lora model of SDxl-1.0 in Colab,

https://github.com/Linaqruf/kohya-trainer/blob/main/kohya-LoRA-trainer-XL.ipynb

This is the Colab training code for SDXL 1.0, but I won't use it; the quality of the trained LoRA model is very poor. Colab is very important for people without good graphics cards. Your YouTube video tutorials are very detailed and useful on every topic.

Errors while installing tortoise-tts-fast

Hi Furkan,
While executing "python -m pip install -e .", the following error pops up. Can you help? Thanks.
ModuleNotFoundError: No module named 'setuptools'

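`No module named 'setuptools'` means the fresh venv is missing its bootstrap packages (some Python builds create venvs without them, and an interrupted setup can lose them). Reinstalling them inside the activated venv usually lets the editable install proceed:

```shell
# Run with the tortoise-tts-fast venv activated:
pip install --upgrade setuptools wheel
```

Then retry `python -m pip install -e .` from the repo folder.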

Session stops when running !bash ...

Hi, and thanks for your efforts on the free Kaggle notebook.

When I run !bash gui.sh --share --headless, the session goes off and Gradio cannot start.

(output screenshot attached)

Do you have this problem?

I just paid for Colab Pro but it does not work properly

I just paid for Colab Pro but it does not work properly; it shows the error below:

ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Please do something as soon as possible.
Thanks,
Manish Kummar
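"No such file or directory: 'requirements.txt'" almost always means the command ran outside the cloned repo folder (e.g. a notebook cell executed before the `cd` cell). A quick diagnostic sketch; the folder name in the comment is only an example:

```shell
# Check where the shell is and what is in it before installing:
pwd
ls
# then change into the cloned repo (folder name is an example) and retry:
# cd stable-diffusion-webui && pip install -r requirements.txt
```

On Colab, re-running the clone/cd cells from the top in order normally restores the expected working directory.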

Problem with image processing

Hello, I have been using Colab for several months without any issues processing images (just face swap), and it had been working smoothly until today, when it won't let me process the image. This is the error that appears:

/content/roop
Traceback (most recent call last):
  File "/content/roop/run.py", line 3, in <module>
    from roop import core
  File "/content/roop/roop/core.py", line 20, in <module>
    import roop.ui as ui
  File "/content/roop/roop/ui.py", line 15, in <module>
    from roop.predictor import predict_frame, clear_predictor
  File "/content/roop/roop/predictor.py", line 3, in <module>
    import opennsfw2
  File "/usr/local/lib/python3.10/dist-packages/opennsfw2/__init__.py", line 4, in <module>
    from ._inference import Aggregation
  File "/usr/local/lib/python3.10/dist-packages/opennsfw2/_inference.py", line 16, in <module>
    from ._model import make_open_nsfw_model
  File "/usr/local/lib/python3.10/dist-packages/opennsfw2/_model.py", line 12, in <module>
    from tensorflow.keras import layers  # type: ignore # pylint: disable=import-error
  File "/usr/local/lib/python3.10/dist-packages/keras/api/_v2/keras/__init__.py", line 3, in <module>
    from keras.api._v2.keras import __internal__
  File "/usr/local/lib/python3.10/dist-packages/keras/api/_v2/keras/__init__.py", line 3, in <module>
    from keras.api._v2.keras import __internal__
  File "/usr/local/lib/python3.10/dist-packages/keras/api/_v2/keras/__internal__/__init__.py", line 3, in <module>
    from keras.api._v2.keras.__internal__ import backend
  File "/usr/local/lib/python3.10/dist-packages/keras/api/_v2/keras/__internal__/backend/__init__.py", line 3, in <module>
    from keras.src.backend import _initialize_variables as initialize_variables
ImportError: cannot import name '_initialize_variables' from 'keras.src.backend' (/usr/local/lib/python3.10/dist-packages/keras/src/backend/__init__.py)
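A likely cause is that the Colab runtime updated TensorFlow/Keras to a release whose internals opennsfw2 no longer matches, since the import worked until today with no code change. One common workaround is to reinstall an older TensorFlow before launching roop; the exact version pin below is an assumption, not something confirmed by the notebook, so adjust it if the import still fails:

```shell
# Assumption: a pre-2.14 TensorFlow still exposes the private
# keras.src.backend symbol that opennsfw2 imports. Pin it, then retry roop.
pip install "tensorflow<2.14" --force-reinstall
```

Restart the runtime after the reinstall so the new version is actually the one imported.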

Missing LICENSE

I see you have no LICENSE file for this project. Without one, the default is that all rights are reserved.

I would suggest releasing the code under the GPL-3.0-or-later or AGPL-3.0-or-later license so that others are encouraged to contribute changes back to your project.

CC-BY-SA-4.0 might also be appropriate for this repository.
