sd-img2img-batch-interrogator's Issues

Adding Custom Interrogator

I'd like to use a custom interrogator, for example one of the CLIP Interrogator models such as ViT-bigG-14/laion2b_s39b_b160k. Where should I put the models so the batch interrogator will find them, or is there another way to do this? It would also be great to have the same possibility of choosing between the best, fast, classic, and negative modes.
Thanks for a great tool.
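
For reference, the standalone clip-interrogator package (the library behind the CLIP Interrogator tool) selects the CLIP backbone by name and exposes the different modes as separate methods. A minimal sketch, assuming the pip-installed clip-interrogator package rather than this extension's own API:

# Sketch only: the standalone clip-interrogator package, not this extension's API.
# It shows how a custom CLIP model name is selected and how the modes map to methods.
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-bigG-14/laion2b_s39b_b160k"))
image = Image.open("example.png").convert("RGB")

best_prompt = ci.interrogate(image)              # "best" mode
classic_prompt = ci.interrogate_classic(image)   # "classic" mode
fast_prompt = ci.interrogate_fast(image)         # "fast" mode
negative_prompt = ci.interrogate_negative(image) # "negative" mode (newer releases only)

Something along these lines would presumably also answer the model-location question, since clip-interrogator downloads the named model itself rather than reading it from the webui models folder.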

adding up prompts

Right now the script adds the interrogated prompt onto the previous prompt, so if we have 10 images, the 10th one ends up with 11 prompts (the original prompt plus the prompts from all 10 images). Maybe some sort of identifier could be added so the prompt is reset to the original one for each new image? (A rough sketch of the idea is below.)
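
One way this could work (a minimal sketch, not the extension's actual code; p, batch_images, interrogate and process_image are placeholder names) is to cache the user's prompt once and rebuild the combined prompt from that cache for every image, instead of appending to the previous result:

# Sketch: reset to the cached original prompt for each image instead of accumulating.
# `p`, `batch_images`, `interrogate` and `process_image` are placeholders, not the real API.
original_prompt = p.prompt  # cache the user's prompt once, before the batch starts

for image in batch_images:
    caption = interrogate(image)
    # rebuild from the cached original every time, so captions never stack up
    p.prompt = f"{original_prompt}, {caption}" if original_prompt else caption
    process_image(p)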

[Bug] - BLIP interrogation fails w/ "RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0" - (using Deepbooru works & A1111 native BLIP works)

Hey!

First of all I really like the script you've put together here. It seems like a simple idea, but it really adds some spice to IMG2IMG when just playing around w/ images.

I'm experiencing an error whenever I try to use the extension without Deepbooru. If Deepbooru is untoggled, the script errors out and fails to produce the interrogated prompt. The error I receive is in the attached console log.

I fed the error to ChatGPT-4, which suggested it was an image file format issue (an alpha channel), but I tried resaving the images as .jpg, which should have removed any extra channels, and I still received the same error.

I hope you can take a quick look at this and see if there's a fix for it.

Thanks so much!!
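
For what it's worth, the tracebacks below show torchvision's Normalize receiving a 4-channel (RGBA) tensor while BLIP's mean/std have only 3 values, which fits the alpha-channel theory. A possible workaround, as a minimal sketch rather than a confirmed fix for this extension, is to force the image to RGB before it reaches the interrogator:

# Sketch: drop any alpha channel before BLIP interrogation.
# Normalize() in the traceback uses 3-channel mean/std, so a 4-channel RGBA tensor
# triggers "The size of tensor a (4) must match the size of tensor b (3)".
from PIL import Image
from modules import shared  # available inside the webui environment

def to_rgb(pil_image: Image.Image) -> Image.Image:
    # convert() also handles palette/greyscale images, so the transforms always see 3 channels
    return pil_image.convert("RGB")

pil_image = Image.open("input.png")  # e.g. a PNG with transparency
caption = shared.interrogator.interrogate(to_rgb(pil_image))  # assumed call site, mirroring interrogate.py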

Details

To create a public link, set share=True in launch().
Startup time: 12.7s (prepare environment: 0.6s, import torch: 1.7s, import gradio: 0.5s, setup paths: 0.3s, initialize shared: 0.1s, other imports: 0.3s, list SD models: 0.2s, load scripts: 4.4s, refresh VAE: 0.1s, create ui: 3.8s, gradio launch: 0.3s, app_started_callback: 0.3s).

img2img:
load checkpoint from D:\stable-diffusion-webui\models\BLIP\model_base_caption_capfilt_large.pth
100%|████████████████████████████████████████| 890M/890M [00:07<00:00, 128MiB/s]
*** Error interrogating
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\interrogate.py", line 194, in interrogate
caption = self.generate_caption(pil_image)
File "D:\stable-diffusion-webui\modules\interrogate.py", line 174, in generate_caption
gpu_image = transforms.Compose([
File "D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py", line 95, in call
img = t(img)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self.call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py", line 277, in forward
return F.normalize(tensor, self.mean, self.std, self.inplace)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional.py", line 363, in normalize
return F_t.normalize(tensor, mean=mean, std=std, inplace=inplace)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms_functional_tensor.py", line 928, in normalize
return tensor.sub
(mean).div
(std)
RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0


Prompt:
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:05<00:00, 4.81it/s]

img2img:
*** Error interrogating
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\interrogate.py", line 194, in interrogate
caption = self.generate_caption(pil_image)
File "D:\stable-diffusion-webui\modules\interrogate.py", line 174, in generate_caption
gpu_image = transforms.Compose([
File "D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py", line 95, in call
img = t(img)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self.call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py", line 277, in forward
return F.normalize(tensor, self.mean, self.std, self.inplace)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional.py", line 363, in normalize
return F_t.normalize(tensor, mean=mean, std=std, inplace=inplace)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms_functional_tensor.py", line 928, in normalize
return tensor.sub
(mean).div
(std)
RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0


Prompt:
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.23it/s]

img2img:
Prompt: 1boy, 3d, backwards hat, baseball cap, beard, blurry, blurry background, blurry foreground, building, depth of field, dog tags, facial hair, hat, jewelry, male focus, mustache, necklace, photo (medium), photo background, realistic, shirt, solo, upper body
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.20it/s]

img2img:
*** Error interrogating
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\interrogate.py", line 194, in interrogate
caption = self.generate_caption(pil_image)
File "D:\stable-diffusion-webui\modules\interrogate.py", line 174, in generate_caption
gpu_image = transforms.Compose([
File "D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py", line 95, in call
img = t(img)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self.call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py", line 277, in forward
return F.normalize(tensor, self.mean, self.std, self.inplace)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional.py", line 363, in normalize
return F_t.normalize(tensor, mean=mean, std=std, inplace=inplace)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms_functional_tensor.py", line 928, in normalize
return tensor.sub
(mean).div
(std)
RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0


Prompt:
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00, 5.21it/s]

System Settings

Details

{
"Platform": "Windows-10-10.0.19045-SP0",
"Python": "3.10.6",
"Version": "v1.7.0",
"Commit": "cf2772fab0af5573da775e7437e6acdca424f26e",
"Script path": "D:\stable-diffusion-webui",
"Data path": "D:\stable-diffusion-webui",
"Extensions dir": "D:\stable-diffusion-webui\extensions",
"Checksum": "cd22ba34e2aba7a6f776fb21588a63f7979f4400d83326e96d820de28184664a",
"Commandline": [
"launch.py",
"--opt-sdp-attention",
"--no-half-vae",
"--opt-channelslast",
"--disable-safe-unpickle",
"--skip-torch-cuda-test",
"--disable-nan-check",
"--skip-version-check",
"--ckpt-dir",
"e:\stable Diffusion Checkpoints"
],
"Torch env info": {
"torch_version": "2.1.2+cu121",
"is_debug_build": "False",
"cuda_compiled_version": "12.1",
"gcc_version": null,
"clang_version": null,
"cmake_version": null,
"os": "Microsoft Windows 10 Pro",
"libc_version": "N/A",
"python_version": "3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] (64-bit runtime)",
"python_platform": "Windows-10-10.0.19045-SP0",
"is_cuda_available": "True",
"cuda_runtime_version": null,
"cuda_module_loading": "LAZY",
"nvidia_driver_version": "551.23",
"nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 4090",
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.23.5",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==2.1.2+cu121",
"torchdiffeq==0.2.3",
"torchmetrics==1.2.1",
"torchsde==0.2.6",
"torchvision==0.16.2+cu121"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "",
"is_xnnpack_available": "True",
"cpu_info": [
"Architecture=9",
"CurrentClockSpeed=3200",
"DeviceID=CPU0",
"Family=207",
"L2CacheSize=16384",
"L2CacheSpeed=",
"Manufacturer=GenuineIntel",
"MaxClockSpeed=3200",
"Name=Intel(R) Core(TM) i9-14900K",
"ProcessorType=3",
"Revision="
]
},
"Exceptions": [
{
"exception": "The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0",
"traceback": [
[
"D:\stable-diffusion-webui\modules\interrogate.py, line 194, interrogate",
"caption = self.generate_caption(pil_image)"
],
[
"D:\stable-diffusion-webui\modules\interrogate.py, line 174, generate_caption",
"gpu_image = transforms.Compose(["
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py, line 95, call",
"img = t(img)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1527, call_impl",
"return forward_call(*args, **kwargs)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py, line 277, forward",
"return F.normalize(tensor, self.mean, self.std, self.inplace)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional.py, line 363, normalize",
"return F_t.normalize(tensor, mean=mean, std=std, inplace=inplace)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional_tensor.py, line 928, normalize",
"return tensor.sub
(mean).div
(std)"
]
]
},
{
"exception": "The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0",
"traceback": [
[
"D:\stable-diffusion-webui\modules\interrogate.py, line 194, interrogate",
"caption = self.generate_caption(pil_image)"
],
[
"D:\stable-diffusion-webui\modules\interrogate.py, line 174, generate_caption",
"gpu_image = transforms.Compose(["
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py, line 95, call",
"img = t(img)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1527, call_impl",
"return forward_call(*args, **kwargs)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py, line 277, forward",
"return F.normalize(tensor, self.mean, self.std, self.inplace)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional.py, line 363, normalize",
"return F_t.normalize(tensor, mean=mean, std=std, inplace=inplace)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional_tensor.py, line 928, normalize",
"return tensor.sub
(mean).div
(std)"
]
]
},
{
"exception": "The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0",
"traceback": [
[
"D:\stable-diffusion-webui\modules\interrogate.py, line 194, interrogate",
"caption = self.generate_caption(pil_image)"
],
[
"D:\stable-diffusion-webui\modules\interrogate.py, line 174, generate_caption",
"gpu_image = transforms.Compose(["
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py, line 95, call",
"img = t(img)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1527, call_impl",
"return forward_call(*args, **kwargs)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py, line 277, forward",
"return F.normalize(tensor, self.mean, self.std, self.inplace)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional.py, line 363, normalize",
"return F_t.normalize(tensor, mean=mean, std=std, inplace=inplace)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional_tensor.py, line 928, normalize",
"return tensor.sub
(mean).div
(std)"
]
]
},
{
"exception": "The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0",
"traceback": [
[
"D:\stable-diffusion-webui\modules\interrogate.py, line 194, interrogate",
"caption = self.generate_caption(pil_image)"
],
[
"D:\stable-diffusion-webui\modules\interrogate.py, line 174, generate_caption",
"gpu_image = transforms.Compose(["
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py, line 95, call",
"img = t(img)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1527, call_impl",
"return forward_call(*args, **kwargs)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py, line 277, forward",
"return F.normalize(tensor, self.mean, self.std, self.inplace)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional.py, line 363, normalize",
"return F_t.normalize(tensor, mean=mean, std=std, inplace=inplace)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional_tensor.py, line 928, normalize",
"return tensor.sub
(mean).div
(std)"
]
]
},
{
"exception": "The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0",
"traceback": [
[
"D:\stable-diffusion-webui\modules\interrogate.py, line 194, interrogate",
"caption = self.generate_caption(pil_image)"
],
[
"D:\stable-diffusion-webui\modules\interrogate.py, line 174, generate_caption",
"gpu_image = transforms.Compose(["
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py, line 95, call",
"img = t(img)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(args, **kwargs)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py, line 1527, call_impl",
"return forward_call(*args, **kwargs)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\transforms.py, line 277, forward",
"return F.normalize(tensor, self.mean, self.std, self.inplace)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional.py, line 363, normalize",
"return F_t.normalize(tensor, mean=mean, std=std, inplace=inplace)"
],
[
"D:\stable-diffusion-webui\venv\lib\site-packages\torchvision\transforms\functional_tensor.py, line 928, normalize",
"return tensor.sub
(mean).div
(std)"
]
]
}
],
"CPU": {
"model": "Intel64 Family 6 Model 183 Stepping 1, GenuineIntel",
"count logical": 32,
"count physical": 24
},
"RAM": {
"total": "64GB",
"used": "14GB",
"free": "50GB"
},
"Extensions": [
{
"name": "3sd-webui-controlnet",
"path": "D:\stable-diffusion-webui\extensions\3sd-webui-controlnet",
"version": "e081a3a0",
"branch": "main",
"remote": "https://github.com/Mikubill/sd-webui-controlnet.git"
},
{
"name": "a2-adetailer",
"path": "D:\stable-diffusion-webui\extensions\a2-adetailer",
"version": "a0b4c56e",
"branch": "main",
"remote": "https://github.com/Bing-su/adetailer.git"
},
{
"name": "b1111-sd-webui-tagcomplete",
"path": "D:\stable-diffusion-webui\extensions\b1111-sd-webui-tagcomplete",
"version": "08d3436f",
"branch": "main",
"remote": "https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git"
},
{
"name": "extension-style-vars",
"path": "D:\stable-diffusion-webui\extensions\extension-style-vars",
"version": "7e528995",
"branch": "master",
"remote": "https://github.com/SirVeggie/extension-style-vars"
},
{
"name": "sd-Img2img-batch-interrogator",
"path": "D:\stable-diffusion-webui\extensions\sd-Img2img-batch-interrogator",
"version": "7f4e4eb0",
"branch": "main",
"remote": "https://github.com/Alvi-alvarez/sd-Img2img-batch-interrogator.git"
},
{
"name": "sd-dynamic-prompts",
"path": "D:\stable-diffusion-webui\extensions\sd-dynamic-prompts",
"version": "284d3ef3",
"branch": "main",
"remote": "https://github.com/adieyal/sd-dynamic-prompts.git"
},
{
"name": "sd-extension-system-info",
"path": "D:\stable-diffusion-webui\extensions\sd-extension-system-info",
"version": "72d871b4",
"branch": "main",
"remote": "https://github.com/vladmandic/sd-extension-system-info.git"
},
{
"name": "sd-webui-aspect-ratio-helper",
"path": "D:\stable-diffusion-webui\extensions\sd-webui-aspect-ratio-helper",
"version": "99fcf9b0",
"branch": "main",
"remote": "https://github.com/thomasasfk/sd-webui-aspect-ratio-helper.git"
},
{
"name": "sd-webui-lama-cleaner-masked-content",
"path": "D:\stable-diffusion-webui\extensions\sd-webui-lama-cleaner-masked-content",
"version": "6c363db1",
"branch": "master",
"remote": "https://github.com/light-and-ray/sd-webui-lama-cleaner-masked-content.git"
},
{
"name": "sd-webui-model-mixer",
"path": "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer",
"version": "",
"branch": null,
"remote": null
},
{
"name": "sd-webui-stablesr",
"path": "D:\stable-diffusion-webui\extensions\sd-webui-stablesr",
"version": "4499d796",
"branch": "master",
"remote": "https://github.com/pkuliyi2015/sd-webui-stablesr.git"
},
{
"name": "sd-webui-supermerger",
"path": "D:\stable-diffusion-webui\extensions\sd-webui-supermerger",
"version": "f2be9fa8",
"branch": "main",
"remote": "https://github.com/hako-mikan/sd-webui-supermerger.git"
},
{
"name": "sd-webui-zmultidiffusion-upscaler-for-automatic1111",
"path": "D:\stable-diffusion-webui\extensions\sd-webui-zmultidiffusion-upscaler-for-automatic1111",
"version": "fbb24736",
"branch": "main",
"remote": "https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111.git"
},
{
"name": "stable-diffusion-webui-model-toolkit",
"path": "D:\stable-diffusion-webui\extensions\stable-diffusion-webui-model-toolkit",
"version": "cf824587",
"branch": "master",
"remote": "https://github.com/arenasys/stable-diffusion-webui-model-toolkit.git"
},
{
"name": "ultimate-upscale-for-automatic1111",
"path": "D:\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111",
"version": "728ffcec",
"branch": "master",
"remote": "https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git"
}
],
"Inactive extensions": [],
"Environment": {
"COMMANDLINE_ARGS": "--opt-sdp-attention --no-half-vae --opt-channelslast --disable-safe-unpickle --skip-torch-cuda-test --disable-nan-check --skip-version-check --ckpt-dir "e:\stable Diffusion Checkpoints"",
"GRADIO_ANALYTICS_ENABLED": "False"
},
"Config": {
"samples_save": true,
"samples_format": "png",
"samples_filename_pattern": "[number]",
"save_images_add_number": true,
"save_images_replace_action": "Replace",
"grid_save": false,
"grid_format": "png",
"grid_extended_filename": false,
"grid_only_if_multiple": true,
"grid_prevent_empty_spots": false,
"grid_zip_filename_pattern": "[number]",
"n_rows": -1,
"font": "",
"grid_text_active_color": "#000000",
"grid_text_inactive_color": "#999999",
"grid_background_color": "#ffffff",
"save_images_before_face_restoration": false,
"save_images_before_highres_fix": false,
"save_images_before_color_correction": false,
"save_mask": false,
"save_mask_composite": false,
"jpeg_quality": 80,
"webp_lossless": false,
"export_for_4chan": false,
"img_downscale_threshold": 4.0,
"target_side_length": 4000,
"img_max_size_mp": 200,
"use_original_name_batch": true,
"use_upscaler_name_as_suffix": false,
"save_selected_only": true,
"save_init_img": false,
"temp_dir": "D:\temp",
"clean_temp_dir_at_start": false,
"save_incomplete_images": false,
"notification_audio": false,
"notification_volume": 100,
"outdir_samples": "E:\Stable Diffusion Images",
"outdir_txt2img_samples": "E:\Stable Diffusion Images",
"outdir_img2img_samples": "E:\Stable Diffusion Images",
"outdir_extras_samples": "E:\Stable Diffusion Images",
"outdir_grids": "E:\Stable Diffusion Images",
"outdir_txt2img_grids": "E:\Stable Diffusion Images",
"outdir_img2img_grids": "E:\Stable Diffusion Images",
"outdir_save": "E:\Stable Diffusion Images",
"outdir_init_images": "E:\Stable Diffusion Images",
"save_to_dirs": true,
"grid_save_to_dirs": false,
"use_save_to_dirs_for_ui": false,
"directories_filename_pattern": "[date]",
"directories_max_prompt_words": 8,
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"realesrgan_enabled_models": [
"R-ESRGAN 4x+",
"R-ESRGAN 4x+ Anime6B"
],
"upscaler_for_img2img": null,
"face_restoration": false,
"face_restoration_model": "CodeFormer",
"code_former_weight": 0.5,
"face_restoration_unload": false,
"auto_launch_browser": "Local",
"enable_console_prompts": true,
"show_warnings": false,
"show_gradio_deprecation_warnings": true,
"memmon_poll_rate": 8,
"samples_log_stdout": false,
"multiple_tqdm": false,
"print_hypernet_extra": false,
"list_hidden_files": true,
"disable_mmap_load_safetensors": false,
"hide_ldm_prints": true,
"dump_stacks_on_signal": false,
"api_enable_requests": true,
"api_forbid_local_requests": true,
"api_useragent": "",
"unload_models_when_training": false,
"pin_memory": false,
"save_optimizer_state": false,
"save_training_settings_to_txt": true,
"dataset_filename_word_regex": "",
"dataset_filename_join_string": " ",
"training_image_repeats_per_epoch": 1,
"training_write_csv_every": 500,
"training_xattention_optimizations": false,
"training_enable_tensorboard": false,
"training_tensorboard_save_images": false,
"training_tensorboard_flush_every": 120,
"sd_model_checkpoint": "SDXL\2024-01-24 - Buddystrong Person - 13img (swap 1) - basemodel - 3rd-step00002250.safetensors [fdc167a7bd]",
"sd_checkpoints_limit": 1,
"sd_checkpoints_keep_in_cpu": true,
"sd_checkpoint_cache": 0,
"sd_unet": "Automatic",
"enable_quantization": false,
"enable_emphasis": true,
"enable_batch_seeds": true,
"comma_padding_backtrack": 20,
"CLIP_stop_at_last_layers": 1,
"upcast_attn": true,
"randn_source": "GPU",
"tiling": false,
"hires_fix_refiner_pass": "second pass",
"sdxl_crop_top": 0,
"sdxl_crop_left": 0,
"sdxl_refiner_low_aesthetic_score": 2.5,
"sdxl_refiner_high_aesthetic_score": 6.0,
"sd_vae_checkpoint_cache": 0,
"sd_vae": "sdxl_vae.safetensors",
"sd_vae_overrides_per_model_preferences": true,
"auto_vae_precision": true,
"sd_vae_encode_method": "Full",
"sd_vae_decode_method": "Full",
"inpainting_mask_weight": 1.0,
"initial_noise_multiplier": 1.0,
"img2img_extra_noise": 0.0,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"img2img_background_color": "#ffffff",
"img2img_editor_height": 720,
"img2img_sketch_default_brush_color": "#ffffff",
"img2img_inpaint_mask_brush_color": "#ffffff",
"img2img_inpaint_sketch_default_brush_color": "#ffffff",
"return_mask": false,
"return_mask_composite": false,
"img2img_batch_show_results_limit": 32,
"cross_attention_optimization": "sdp-no-mem - scaled dot product without memory efficient attention",
"s_min_uncond": 0.0,
"token_merging_ratio": 0.0,
"token_merging_ratio_img2img": 0.0,
"token_merging_ratio_hr": 0.0,
"pad_cond_uncond": false,
"persistent_cond_cache": true,
"batch_cond_uncond": true,
"use_old_emphasis_implementation": false,
"use_old_karras_scheduler_sigmas": false,
"no_dpmpp_sde_batch_determinism": false,
"use_old_hires_fix_width_height": false,
"dont_fix_second_order_samplers_schedule": false,
"hires_fix_use_firstpass_conds": false,
"use_old_scheduling": false,
"interrogate_keep_models_in_memory": false,
"interrogate_return_ranks": false,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500,
"interrogate_clip_skip_categories": [],
"interrogate_deepbooru_score_threshold": 0.5,
"deepbooru_sort_alpha": true,
"deepbooru_use_spaces": true,
"deepbooru_escape": true,
"deepbooru_filter_tags": "",
"extra_networks_show_hidden_directories": true,
"extra_networks_dir_button_function": false,
"extra_networks_hidden_models": "When searched",
"extra_networks_default_multiplier": 1.0,
"extra_networks_card_width": 0,
"extra_networks_card_height": 0,
"extra_networks_card_text_scale": 1.0,
"extra_networks_card_show_desc": true,
"extra_networks_card_order_field": "Path",
"extra_networks_card_order": "Ascending",
"extra_networks_add_text_separator": " ",
"ui_extra_networks_tab_reorder": "",
"textual_inversion_print_at_load": false,
"textual_inversion_add_hashes_to_infotext": true,
"sd_hypernetwork": "None",
"keyedit_precision_attention": 0.1,
"keyedit_precision_extra": 0.05,
"keyedit_delimiters": ".,\/!?%^
;:{}=`~() ",
"keyedit_delimiters_whitespace": [
"Tab",
"Carriage Return",
"Line Feed"
],
"keyedit_move": true,
"disable_token_counters": false,
"return_grid": false,
"do_not_show_images": false,
"js_modal_lightbox": true,
"js_modal_lightbox_initially_zoomed": true,
"js_modal_lightbox_gamepad": false,
"js_modal_lightbox_gamepad_repeat": 250,
"gallery_height": "",
"compact_prompt_box": false,
"samplers_in_dropdown": true,
"dimensions_and_batch_together": true,
"sd_checkpoint_dropdown_use_short": false,
"hires_fix_show_sampler": false,
"hires_fix_show_prompts": false,
"txt2img_settings_accordion": false,
"img2img_settings_accordion": false,
"localization": "None",
"quicksettings_list": [
"sd_model_checkpoint",
"sd_vae"
],
"ui_tab_order": [],
"hidden_tabs": [
"Train",
"Checkpoint Merger"
],
"ui_reorder_list": [],
"gradio_theme": "Default",
"gradio_themes_cache": true,
"show_progress_in_title": true,
"send_seed": true,
"send_size": true,
"enable_pnginfo": true,
"save_txt": false,
"add_model_name_to_info": true,
"add_model_hash_to_info": true,
"add_vae_name_to_info": true,
"add_vae_hash_to_info": true,
"add_user_name_to_info": false,
"add_version_to_infotext": true,
"disable_weights_auto_swap": true,
"infotext_skip_pasting": [],
"infotext_styles": "Apply if any",
"show_progressbar": true,
"live_previews_enable": true,
"live_previews_image_format": "png",
"show_progress_grid": false,
"show_progress_every_n_steps": -1,
"show_progress_type": "Full",
"live_preview_allow_lowvram_full": false,
"live_preview_content": "Combined",
"live_preview_refresh_period": 1000,
"live_preview_fast_interrupt": false,
"js_live_preview_in_modal_lightbox": true,
"hide_samplers": [],
"eta_ddim": 0.0,
"eta_ancestral": 1.0,
"ddim_discretize": "uniform",
"s_churn": 0.0,
"s_tmin": 0.0,
"s_tmax": 0.0,
"s_noise": 1.0,
"k_sched_type": "Automatic",
"sigma_min": 0.0,
"sigma_max": 0.0,
"rho": 0.0,
"eta_noise_seed_delta": 0,
"always_discard_next_to_last_sigma": false,
"sgm_noise_multiplier": false,
"uni_pc_variant": "bh1",
"uni_pc_skip_type": "time_uniform",
"uni_pc_order": 3,
"uni_pc_lower_order_final": true,
"postprocessing_enable_in_main_ui": [],
"postprocessing_operation_order": [],
"upscaling_max_images_in_cache": 5,
"postprocessing_existing_caption_action": "Ignore",
"disabled_extensions": [],
"disable_all_extensions": "none",
"restore_config_state_file": "",
"sd_checkpoint_hash": "fdc167a7bd9ba7549af1e1e7f4cb222e37210b55061440a5f8c6ce382ef5c6b4",
"ldsr_steps": 100,
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"control_net_detectedmap_dir": "detected_maps",
"control_net_models_path": "",
"control_net_modules_path": "",
"control_net_unit_count": 3,
"control_net_model_cache_size": 1,
"control_net_inpaint_blur_sigma": 7,
"control_net_no_detectmap": false,
"control_net_detectmap_autosaving": false,
"control_net_allow_script_control": true,
"control_net_sync_field_args": true,
"controlnet_show_batch_images_in_ui": false,
"controlnet_increment_seed_during_batch": false,
"controlnet_disable_openpose_edit": false,
"controlnet_disable_photopea_edit": false,
"controlnet_ignore_noninpaint_mask": false,
"ad_max_models": 2,
"ad_save_previews": false,
"ad_save_images_before": false,
"ad_only_seleted_scripts": true,
"ad_script_names": "dynamic_prompting,dynamic_thresholding,wildcard_recursive,wildcards,lora_block_weight",
"ad_bbox_sortby": "None",
"tac_tagFile": "danbooru.csv",
"tac_active": true,
"tac_activeIn.txt2img": true,
"tac_activeIn.img2img": true,
"tac_activeIn.negativePrompts": true,
"tac_activeIn.thirdParty": true,
"tac_activeIn.modelList": "",
"tac_activeIn.modelListMode": "Blacklist",
"tac_slidingPopup": true,
"tac_maxResults": 5,
"tac_showAllResults": false,
"tac_resultStepLength": 100,
"tac_delayTime": 100,
"tac_useWildcards": true,
"tac_sortWildcardResults": true,
"tac_wildcardExclusionList": "",
"tac_skipWildcardRefresh": false,
"tac_useEmbeddings": true,
"tac_includeEmbeddingsInNormalResults": false,
"tac_useHypernetworks": true,
"tac_useLoras": true,
"tac_useLycos": true,
"tac_useLoraPrefixForLycos": true,
"tac_showWikiLinks": false,
"tac_showExtraNetworkPreviews": true,
"tac_modelSortOrder": "Name",
"tac_replaceUnderscores": true,
"tac_escapeParentheses": true,
"tac_appendComma": true,
"tac_appendSpace": true,
"tac_alwaysSpaceAtEnd": true,
"tac_modelKeywordCompletion": "Never",
"tac_modelKeywordLocation": "Start of prompt",
"tac_wildcardCompletionMode": "To next folder level",
"tac_alias.searchByAlias": true,
"tac_alias.onlyShowAlias": false,
"tac_translation.translationFile": "None",
"tac_translation.oldFormat": false,
"tac_translation.searchByTranslation": true,
"tac_translation.liveTranslation": false,
"tac_extra.extraFile": "extra-quality-tags.csv",
"tac_extra.addMode": "Insert before",
"tac_chantFile": "demo-chants.json",
"tac_keymap": "{\n "MoveUp": "ArrowUp",\n "MoveDown": "ArrowDown",\n "JumpUp": "PageUp",\n "JumpDown": "PageDown",\n "JumpToStart": "",\n "JumpToEnd": "",\n "ChooseSelected": "Enter",\n "ChooseFirstOrSelected": "Tab",\n "Close": "Escape"\n}",
"tac_colormap": "{\n "danbooru": {\n "-1": ["red", "maroon"],\n "0": ["lightblue", "dodgerblue"],\n "1": ["indianred", "firebrick"],\n "3": ["violet", "darkorchid"],\n "4": ["lightgreen", "darkgreen"],\n "5": ["orange", "darkorange"]\n },\n "e621": {\n "-1": ["red", "maroon"],\n "0": ["lightblue", "dodgerblue"],\n "1": ["gold", "goldenrod"],\n "3": ["violet", "darkorchid"],\n "4": ["lightgreen", "darkgreen"],\n "5": ["tomato", "darksalmon"],\n "6": ["red", "maroon"],\n "7": ["whitesmoke", "black"],\n "8": ["seagreen", "darkseagreen"]\n }\n}",
"tac_refreshTempFiles": "Refresh TAC temp files",
"polotno_api_key": "bHEpG9Rp0Nq9XrLcwFNu",
"canvas_editor_default_width": 1024,
"canvas_editor_default_height": 1024,
"arh_javascript_aspect_ratio_show": true,
"arh_javascript_aspect_ratio": "1:1, 3:2, 4:3, 5:4, 16:9",
"arh_ui_javascript_selection_method": "Aspect Ratios Dropdown",
"arh_hide_accordion_by_default": true,
"arh_expand_by_default": false,
"arh_ui_component_order_key": "MaxDimensionScaler, MinDimensionScaler, PredefinedAspectRatioButtons, PredefinedPercentageButtons",
"arh_show_max_width_or_height": false,
"arh_max_width_or_height": 1024.0,
"arh_show_min_width_or_height": false,
"arh_min_width_or_height": 1024.0,
"arh_show_predefined_aspect_ratios": false,
"arh_predefined_aspect_ratio_use_max_dim": false,
"arh_predefined_aspect_ratios": "1:1, 4:3, 16:9, 9:16, 21:9",
"arh_show_predefined_percentages": false,
"arh_predefined_percentages": "25, 50, 75, 125, 150, 175, 200",
"arh_predefined_percentages_display_key": "Incremental/decremental percentage (-50%, +50%)",
"mm_max_models": 7,
"mm_debugs": [],
"mm_save_model": [
"safetensors",
"fp16",
"prune",
"overwrite"
],
"mm_save_model_filename": "modelmixer-[hash]",
"mm_use_extra_elements": true,
"mm_use_old_finetune": false,
"mm_use_unet_partial_update": true,
"mm_laplib": "lap",
"mm_use_fast_weighted_sum": true,
"mm_use_precalculate_hash": false,
"mm_use_model_dl": false,
"mm_default_config_lock": false,
"mm_civitai_api_key": "",
"dp_ignore_whitespace": true,
"dp_write_raw_template": true,
"dp_write_prompts_to_file": false,
"dp_parser_variant_start": "{",
"dp_parser_variant_end": "}",
"dp_parser_wildcard_wrap": "__",
"dp_limit_jinja_prompts": false,
"dp_auto_purge_cache": true,
"dp_wildcard_manager_no_dedupe": false,
"dp_wildcard_manager_no_sort": true,
"dp_wildcard_manager_shuffle": true,
"dp_magicprompt_default_model": "Gustavosta/MagicPrompt-Stable-Diffusion",
"dp_magicprompt_batch_size": 1,
"lora_functional": false,
"sd_lora": "None",
"lora_preferred_name": "Alias from file",
"lora_add_hashes_to_infotext": true,
"lora_show_all": false,
"lora_hide_unknown_for_versions": [],
"lora_in_memory_limit": 0,
"extra_options_txt2img": [],
"extra_options_img2img": [],
"extra_options_cols": 1,
"extra_options_accordion": false,
"canvas_hotkey_zoom": "Alt",
"canvas_hotkey_adjust": "Ctrl",
"canvas_hotkey_move": "F",
"canvas_hotkey_fullscreen": "S",
"canvas_hotkey_reset": "R",
"canvas_hotkey_overlap": "O",
"canvas_show_tooltip": true,
"canvas_auto_expand": true,
"canvas_blur_prompt": false,
"canvas_disabled_functions": [
"Overlap"
],
"controlnet_photopea_warning": true,
"freeu_png_info_auto_enable": true,
"model_toolkit_fix_clip": false,
"model_toolkit_autoprune": false,
"upscaling_upscaler_for_lama_cleaner_masked_content": "ESRGAN_4x",
"SWIN_torch_compile": false,
"animatediff_model_path": null,
"animatediff_optimize_gif_palette": false,
"animatediff_optimize_gif_gifsicle": false,
"animatediff_mp4_crf": 23,
"animatediff_mp4_preset": "",
"animatediff_mp4_tune": "",
"animatediff_webp_quality": 80,
"animatediff_webp_lossless": false,
"animatediff_save_to_custom": false,
"animatediff_xformers": "Optimize attention layers with xformers",
"animatediff_disable_lcm": false,
"animatediff_s3_enable": false,
"animatediff_s3_host": null,
"animatediff_s3_port": null,
"animatediff_s3_access_key": null,
"animatediff_s3_secret_key": null,
"animatediff_s3_storge_bucket": null,
"controlnet_clip_detector_on_cpu": false,
"styles_ui": "radio-buttons",
"enable_styleselector_by_default": true,
"style_vars_enabled": true,
"style_vars_random": false,
"style_vars_hires": false,
"style_vars_linebreaks": true,
"style_vars_info": false,
"tac_useStyleVars": true,
"replacer_use_first_positive_prompt_from_examples": true,
"replacer_use_first_negative_prompt_from_examples": true,
"replacer_hide_segment_anything_accordions": true,
"replacer_always_unload_models": "Automatic",
"replacer_detection_prompt_examples": "",
"replacer_avoidance_prompt_examples": "",
"replacer_positive_prompt_examples": "",
"replacer_negative_prompt_examples": "",
"replacer_hf_positive_prompt_suffix_examples": "",
"replacer_examples_per_page_for_detection_prompt": 10,
"replacer_examples_per_page_for_avoidance_prompt": 10,
"replacer_examples_per_page_for_positive_prompt": 10,
"replacer_examples_per_page_for_negative_prompt": 10,
"replacer_save_dir": "e:\Stable Diffusion Images",
"sam_use_local_groundingdino": true
},
"Startup": {
"total": 12.683581113815308,
"records": {
"initial startup": 0.020929813385009766,
"prepare environment/checks": 0.002990245819091797,
"prepare environment/git version info": 0.020930051803588867,
"prepare environment/torch GPU test": 0.0019931793212890625,
"prepare environment/clone repositores": 0.0657799243927002,
"prepare environment/run extensions installers/3sd-webui-controlnet": 0.1634535789489746,
"prepare environment/run extensions installers/a2-adetailer": 0.0817265510559082,
"prepare environment/run extensions installers/b1111-sd-webui-tagcomplete": 0.0,
"prepare environment/run extensions installers/extension-style-vars": 0.0,
"prepare environment/run extensions installers/sd-dynamic-prompts": 0.09169363975524902,
"prepare environment/run extensions installers/sd-extension-system-info": 0.07275652885437012,
"prepare environment/run extensions installers/sd-Img2img-batch-interrogator": 0.0,
"prepare environment/run extensions installers/sd-webui-aspect-ratio-helper": 0.0,
"prepare environment/run extensions installers/sd-webui-lama-cleaner-masked-content": 0.0,
"prepare environment/run extensions installers/sd-webui-model-mixer": 0.0,
"prepare environment/run extensions installers/sd-webui-stablesr": 0.0,
"prepare environment/run extensions installers/sd-webui-supermerger": 0.07574677467346191,
"prepare environment/run extensions installers/sd-webui-zmultidiffusion-upscaler-for-automatic1111": 0.0009963512420654297,
"prepare environment/run extensions installers/stable-diffusion-webui-model-toolkit": 0.0,
"prepare environment/run extensions installers/ultimate-upscale-for-automatic1111": 0.0,
"prepare environment/run extensions installers": 0.4863734245300293,
"prepare environment": 0.5920200347900391,
"launcher": 0.0,
"import torch": 1.6923401355743408,
"import gradio": 0.45946359634399414,
"setup paths": 0.32491326332092285,
"import ldm": 0.0019931793212890625,
"import sgm": 0.0,
"initialize shared": 0.11960029602050781,
"other imports": 0.3229198455810547,
"opts onchange": 0.0,
"setup SD model": 0.0009965896606445312,
"setup codeformer": 0.04485011100769043,
"setup gfpgan": 0.005980014801025391,
"set samplers": 0.0,
"list extensions": 0.0019931793212890625,
"restore config state file": 0.0,
"list SD models": 0.2162771224975586,
"list localizations": 0.0,
"load scripts/custom_code.py": 0.003986358642578125,
"load scripts/img2imgalt.py": 0.0,
"load scripts/loopback.py": 0.0,
"load scripts/outpainting_mk_2.py": 0.0,
"load scripts/poor_mans_outpainting.py": 0.0009965896606445312,
"load scripts/postprocessing_caption.py": 0.0,
"load scripts/postprocessing_codeformer.py": 0.0,
"load scripts/postprocessing_create_flipped_copies.py": 0.0,
"load scripts/postprocessing_focal_crop.py": 0.0,
"load scripts/postprocessing_gfpgan.py": 0.0,
"load scripts/postprocessing_split_oversized.py": 0.0,
"load scripts/postprocessing_upscale.py": 0.0,
"load scripts/processing_autosized_crop.py": 0.0,
"load scripts/prompt_matrix.py": 0.0009968280792236328,
"load scripts/prompts_from_file.py": 0.0,
"load scripts/sd_upscale.py": 0.0,
"load scripts/xyz_grid.py": 0.0009965896606445312,
"load scripts/ldsr_model.py": 0.23421669006347656,
"load scripts/lora_script.py": 0.1255800724029541,
"load scripts/scunet_model.py": 0.011960029602050781,
"load scripts/swinir_model.py": 0.010963201522827148,
"load scripts/hotkey_config.py": 0.0,
"load scripts/extra_options_section.py": 0.0009965896606445312,
"load scripts/hypertile_script.py": 0.019933462142944336,
"load scripts/hypertile_xyz.py": 0.0,
"load scripts/adapter.py": 0.0009965896606445312,
"load scripts/api.py": 0.1285700798034668,
"load scripts/batch_hijack.py": 0.0,
"load scripts/cldm.py": 0.0009968280792236328,
"load scripts/controlmodel_ipadapter.py": 0.0,
"load scripts/controlnet.py": 0.1036531925201416,
"load scripts/controlnet_diffusers.py": 0.0009965896606445312,
"load scripts/controlnet_lllite.py": 0.0,
"load scripts/controlnet_lora.py": 0.0,
"load scripts/controlnet_model_guess.py": 0.0,
"load scripts/controlnet_version.py": 0.0,
"load scripts/enums.py": 0.0009965896606445312,
"load scripts/external_code.py": 0.0,
"load scripts/global_state.py": 0.0,
"load scripts/hook.py": 0.0009968280792236328,
"load scripts/infotext.py": 0.0,
"load scripts/logging.py": 0.0,
"load scripts/lvminthin.py": 0.0,
"load scripts/movie2movie.py": 0.0009965896606445312,
"load scripts/processor.py": 0.0,
"load scripts/utils.py": 0.0009968280792236328,
"load scripts/xyz_grid_support.py": 0.0,
"load scripts/!adetailer.py": 3.307936668395996,
"load scripts/model_keyword_support.py": 0.001993417739868164,
"load scripts/shared_paths.py": 0.0,
"load scripts/tag_autocomplete_helper.py": 0.09567999839782715,
"load scripts/style_vars.py": 0.018936634063720703,
"load scripts/sd_tag_batch.py": 0.0009965896606445312,
"load scripts/dynamic_prompting.py": 0.03687691688537598,
"load scripts/system-info.py": 0.025913238525390625,
"load scripts/sd_webui_aspect_ratio_helper.py": 0.03388667106628418,
"load scripts/lama_cleaner_masked_content_sctipt.py": 0.04185986518859863,
"load scripts/model_mixer.py": 0.05083012580871582,
"load scripts/patches.py": 0.0,
"load scripts/vxa.py": 0.0,
"load scripts/stablesr.py": 0.0029900074005126953,
"load scripts/GenParamGetter.py": 0.08870339393615723,
"load scripts/supermerger.py": 0.011960029602050781,
"load scripts/tilediffusion.py": 0.004983186721801758,
"load scripts/tilevae.py": 0.0009965896606445312,
"load scripts/toolkit_gui.py": 0.03986668586730957,
"load scripts/ultimate-upscale.py": 0.0009968280792236328,
"load scripts/refiner.py": 0.0,
"load scripts/seed.py": 0.0,
"load scripts": 4.415233373641968,
"load upscalers": 0.0009965896606445312,
"refresh VAE": 0.1066434383392334,
"refresh textual inversion templates": 0.0,
"scripts list_optimizers": 0.0009968280792236328,
"scripts list_unets": 0.0,
"reload hypernetworks": 0.0,
"initialize extra networks": 0.019932985305786133,
"scripts before_ui_callback": 0.0019936561584472656,
"create ui": 3.7693932056427,
"gradio launch": 0.29003024101257324,
"add APIs": 0.0029900074005126953,
"app_started_callback/lora_script.py": 0.0,
"app_started_callback/api.py": 0.0009965896606445312,
"app_started_callback/tag_autocomplete_helper.py": 0.001993417739868164,
"app_started_callback/system-info.py": 0.0009965896606445312,
"app_started_callback/GenParamGetter.py": 0.1823902130126953,
"app_started_callback/model_mixer.py": 0.09867000579833984,
"app_started_callback": 0.2850468158721924
}
},
"Packages": [
"absl-py==2.0.0",
"accelerate==0.21.0",
"addict==2.4.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohttp==3.9.1",
"aiosignal==1.3.1",
"albumentations==1.3.1",
"altair==5.2.0",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"async-timeout==4.0.3",
"attrdict==2.0.1",
"attrs==23.2.0",
"av==11.0.0",
"basicsr==1.4.2",
"beautifulsoup4==4.12.2",
"blendmodes==2022",
"cachetools==5.3.2",
"certifi==2023.11.17",
"cffi==1.16.0",
"chardet==5.2.0",
"charset-normalizer==3.3.2",
"clean-fid==0.1.35",
"click==8.1.7",
"clip==1.0",
"colorama==0.4.6",
"coloredlogs==15.0.1",
"colorlog==6.8.0",
"contourpy==1.2.0",
"cryptography==41.0.7",
"cssselect2==0.7.0",
"cssselect==1.2.0",
"cssutils==2.9.0",
"cycler==0.12.1",
"cython==3.0.8",
"defusedxml==0.7.1",
"deprecation==2.1.0",
"depth-anything==2024.1.22.0",
"diffusers==0.25.0",
"dynamicprompts==0.30.4",
"easydict==1.11",
"einops==0.4.1",
"embreex==2.17.7.post4",
"et-xmlfile==1.1.0",
"exceptiongroup==1.2.0",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.3.1",
"filelock==3.13.1",
"filterpy==1.4.5",
"flatbuffers==23.5.26",
"fonttools==4.47.0",
"frozenlist==1.4.1",
"fsspec==2023.12.2",
"ftfy==6.1.3",
"future==0.18.3",
"fvcore==0.1.5.post20221221",
"gdown==4.7.1",
"gfpgan==1.3.8",
"gitdb==4.0.11",
"gitpython==3.1.32",
"google-auth-oauthlib==1.2.0",
"google-auth==2.26.0",
"gputil==1.4.0",
"gradio-client==0.5.0",
"gradio==3.41.2",
"grpcio==1.60.0",
"h11==0.12.0",
"handrefinerportable==2024.1.18.0",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.20.1",
"humanfriendly==10.0",
"idna==3.6",
"imageio-ffmpeg==0.4.9",
"imageio==2.33.1",
"imgaug==0.4.0",
"importlib-metadata==7.0.1",
"importlib-resources==6.1.1",
"inflection==0.5.1",
"insightface==0.7.3",
"iopath==0.1.9",
"jinja2==3.1.2",
"joblib==1.3.2",
"jsonmerge==1.8.0",
"jsonschema-specifications==2023.12.1",
"jsonschema==4.20.0",
"kiwisolver==1.4.5",
"kornia==0.6.7",
"lark==1.1.2",
"lazy-loader==0.3",
"lightning-utilities==0.10.0",
"llvmlite==0.41.1",
"lmdb==1.4.1",
"lpips==0.1.4",
"lxml==5.0.0",
"mapbox-earcut==1.0.1",
"markdown-it-py==3.0.0",
"markdown==3.5.1",
"markupsafe==2.1.3",
"matplotlib==3.8.2",
"mdurl==0.1.2",
"mediapipe==0.10.9",
"mpmath==1.3.0",
"multidict==6.0.4",
"networkx==3.2.1",
"numba==0.58.1",
"numpy==1.23.5",
"oauthlib==3.2.2",
"omegaconf==2.2.3",
"onnx==1.15.0",
"onnxruntime-gpu==1.16.3",
"open-clip-torch==2.20.0",
"opencv-contrib-python==4.9.0.80",
"opencv-python-headless==4.9.0.80",
"opencv-python==4.9.0.80",
"openpyxl==3.1.2",
"orjson==3.9.10",
"packaging==23.2",
"pandas==2.1.4",
"piexif==1.1.3",
"pillow==9.5.0",
"pip==22.2.1",
"platformdirs==4.1.0",
"portalocker==2.8.2",
"premailer==3.10.0",
"prettytable==3.9.0",
"protobuf==3.20.3",
"psutil==5.9.5",
"py-cpuinfo==9.0.0",
"pyaescrypt==6.1.1",
"pyasn1-modules==0.3.0",
"pyasn1==0.5.1",
"pyclipper==1.3.0.post5",
"pycocotools==2.0.7",
"pycollada==0.7.2",
"pycparser==2.21",
"pydantic==1.10.13",
"pydub==0.25.1",
"pygments==2.17.2",
"pyparsing==3.1.1",
"pyreadline3==3.4.1",
"pysocks==1.7.1",
"python-dateutil==2.8.2",
"python-multipart==0.0.6",
"pytorch-lightning==1.9.4",
"pytz==2023.3.post1",
"pywavelets==1.5.0",
"pywin32==306",
"pyyaml==6.0.1",
"qudida==0.0.4",
"rapidfuzz==3.6.1",
"realesrgan==0.3.0",
"referencing==0.32.0",
"regex==2023.12.25",
"reportlab==4.0.8",
"requests-oauthlib==1.3.1",
"requests==2.31.0",
"resize-right==0.0.2",
"rich==13.7.0",
"rpds-py==0.16.2",
"rsa==4.9",
"rtree==1.1.0",
"safetensors==0.3.1",
"scikit-image==0.21.0",
"scikit-learn==1.3.2",
"scipy==1.11.4",
"seaborn==0.13.1",
"segment-anything==1.0",
"semantic-version==2.10.0",
"send2trash==1.8.2",
"sentencepiece==0.1.99",
"setuptools==63.2.0",
"shapely==2.0.2",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.0",
"sounddevice==0.4.6",
"soupsieve==2.5",
"spandrel==0.1.6",
"starlette==0.26.1",
"supervision==0.18.0",
"svg.path==6.3",
"svglib==1.5.1",
"sympy==1.12",
"tabulate==0.9.0",
"tb-nightly==2.16.0a20240102",
"tensorboard-data-server==0.7.2",
"termcolor==2.4.0",
"tf-keras-nightly==2.16.0.dev2023123010",
"thop==0.1.1.post2209072238",
"threadpoolctl==3.2.0",
"tifffile==2023.12.9",
"timm==0.9.2",
"tinycss2==1.2.1",
"tokenizers==0.13.3",
"tomesd==0.1.3",
"tomli==2.0.1",
"toolz==0.12.0",
"torch==2.1.2+cu121",
"torchdiffeq==0.2.3",
"torchmetrics==1.2.1",
"torchsde==0.2.6",
"torchvision==0.16.2+cu121",
"tqdm==4.66.1",
"trampoline==0.1.2",
"transformers==4.30.2",
"trimesh==4.0.8",
"typing-extensions==4.9.0",
"tzdata==2023.4",
"ultralytics==8.0.232",
"urllib3==2.1.0",
"uvicorn==0.25.0",
"wcwidth==0.2.12",
"webencodings==0.5.1",
"websockets==11.0.3",
"werkzeug==3.0.1",
"wget==3.2",
"wordcloud==1.9.3",
"xxhash==3.4.1",
"yacs==0.1.8",
"yapf==0.40.2",
"yarl==1.9.4",
"zipp==3.17.0"
]
}
