fenneishi / fooocus-controlnet-sdxl

This project is forked from lllyasviel/fooocus

215.0 215.0 8.0 74.69 MB

add more control to fooocus

License: GNU General Public License v3.0

JavaScript 0.36% Python 99.52% CSS 0.03% Jupyter Notebook 0.09%

fooocus-controlnet-sdxl's People

Contributors

agarzon, bruno-c, camenduru, fenneishi, lllyasviel, moonride303, tcmaps, ttio2tech

fooocus-controlnet-sdxl's Issues

Please bring faceswap

One of the reasons I still use the original Fooocus is the faceswap feature. Since this is a fork of it, why not bring it here as well? I was thinking it should be very easy, since it's already implemented in Fooocus!

Doesn't fully work on Colab

I'm not completely sure what the problem is, but making the following changes at least helps get the web UI to start:

!pip install pygit2==1.12.2
!pip install einops  # einops doesn't get installed from the requirements, so install it explicitly
%cd /content
!git clone https://github.com/fenneishi/Fooocus-Control.git /content/Fooocus/  # clone straight into /content/Fooocus so the folder exists before we cd into it
%cd /content/Fooocus
!cp colab_fix.txt user_path_config.txt
# For FooocusControl (follows Fooocus) Realistic Edition:
!python entry_with_update.py --preset realistic --share
# For FooocusControl (follows Fooocus) Anime Edition:
# !python entry_with_update.py --preset anime --share

Even after this, I have to run the cell twice to get it to work. I use Python rarely, so I'm probably not the best person to dig too deep into the existing code, but I'll see if I can figure out where the problem actually lies when I find some spare time. Good luck!

All models fail to download automatically (the main branch does not have this problem)

ValueError: Error while deserializing header: MetadataIncompleteBuffer
File corrupted: L:\FooocusControl\Fooocus\models\vae_approx\xl-to-v1_interposer-v3.1.safetensors
Fooocus has tried to move the corrupted file to L:\FooocusControl\Fooocus\models\vae_approx\xl-to-v1_interposer-v3.1.safetensors.corrupted
You may try again now and Fooocus will download models again.
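
If the automatic download keeps failing, one hedged workaround is to download xl-to-v1_interposer-v3.1.safetensors manually from https://huggingface.co/lllyasviel/misc/resolve/main/xl-to-v1_interposer-v3.1.safetensors (the same URL the installer uses, per the log further down) and check that it deserializes before placing it in models\vae_approx. The path below is the one from the log; adjust as needed:

    # Sanity-check a manually downloaded .safetensors file before use.
    from safetensors.torch import load_file

    state = load_file(r"L:\FooocusControl\Fooocus\models\vae_approx\xl-to-v1_interposer-v3.1.safetensors")
    print(f"{len(state)} tensors loaded OK")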

It's not working on Colab anymore

I changed the directory name (Fooocus-ControlNet-SDXL) but am still getting this error:

Traceback (most recent call last):
File "/content/Fooocus-ControlNet-SDXL/entry_with_update.py", line 47, in <module>
from launch import *
File "/content/Fooocus-ControlNet-SDXL/launch.py", line 10, in <module>
from modules.path import modelfile_path, lorafile_path, vae_approx_path, fooocus_expansion_path, \
File "/content/Fooocus-ControlNet-SDXL/modules/path.py", line 10, in <module>
from fooocus_extras.controlnet_preprocess_model.PyramidCanny import PyramidCanny
File "/content/Fooocus-ControlNet-SDXL/fooocus_extras/controlnet_preprocess_model/PyramidCanny/__init__.py", line 4, in <module>
from fooocus_extras.controlnet_preprocess_model.ZeoDepth import ZoeDetector
File "/content/Fooocus-ControlNet-SDXL/fooocus_extras/controlnet_preprocess_model/ZeoDepth/__init__.py", line 9, in <module>
from einops import rearrange
ModuleNotFoundError: No module named 'einops'
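
The immediate workaround, as in the Colab issue above, is to install the missing package before launching (presumably einops should also end up in the project's requirements file):

    !pip install einops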

API PyraCanny Image Prompt

Describe the problem
Using the API, I'm trying to combine an image with a prompt. It works great through the GUI, but in a script with exactly the same settings, the output is generated from the prompt alone. What could be the reason?

    result = client.predict(
        chosen_style_description, # str in 'parameter_10' Textbox component
        chosen_style_neodescription, # str in 'Negative Prompt' Textbox component
        ["Fooocus V2", "Default (Slightly Cinematic)", "SAI Fantasy Art"], # List[str] in 'Selected Styles' Checkboxgroup component
        "Speed", # str in 'Performance' Radio component
        "1152×896", # <span style=\"color: color;\"> ∣ 1:2</span>", # str in 'Aspect Ratios' Radio component
        1, # int | float (numeric value between 1 and 32) in 'Image Number' Slider component
        seed_rnd, # str in 'Seed' Textbox component
        20, # int | float (numeric value between 0.0 and 30.0) in 'Image Sharpness' Slider component
        4, # int | float (numeric value between 1.0 and 30.0) in 'Guidance Scale' Slider component 
        "sd_xl_base_1.0_0.9vae.safetensors", # str (Option from: ['bluePencilXL_v050.safetensors
        "sd_xl_refiner_1.0_0.9vae.safetensors", # str (Option from: ['None', 'bluePencilXL_v050.safetensors', 
        "sd_xl_offset_example-lora_1.0.safetensors", # in 'LoRA 1'
        1, # int | float (numeric value between -2 and 2) in 'Weight' Slider component
        "None", # in 'LoRA 2' 
        -2, # int | float (numeric value between -2 and 2) in 'Weight' Slider component
        "None", # in 'LoRA 3'
        -2, # int | float (numeric value between -2 and 2)
        "None", # in 'LoRA 4'
        -2, # int | float (numeric value between -2 and 2)
        "None", # in 'LoRA 5'
        -2, # int | float (numeric value between -2 and 2)
        True, # bool in 'Input Image' Checkbox component
        "Image Prompt", # str in 'parameter_83' Textbox component
        "Disabled", # str in 'Upscale or Variation:' Radio component
        encoded_image, # str (filepath or URL to image) in 'Drag above image to here' Image component
        "Left, Right, Top, Bottom", # List[str] in 'Outpaint Direction' Checkboxgroup component  Left, 
        encoded_image,      
        1.0, # int | float (numeric value between 0.0 and 1.0) in 'Stop At' Slider component
        0.6, # int | float (numeric value between 0.0 and 2.0) in 'Weight' Slider component
        "PyraCanny", # str in 'Type' Radio component
        "data:image/jpeg;base64," + encoded_image, # str (filepath or URL to image) in 'Image' Image component
        0.4, 	# int | float (numeric value between 0.0 and 1.0) in 'Stop At' Slider component
        0.7, # int | float (numeric value between 0.0 and 2.0) in 'Weight' Slider component
        "sketch", # str in 'Type' Radio component
        encoded_image, # str (filepath or URL to image) in 'Image' Image component
        0, # int | float (numeric value between 0.0 and 1.0) in 'Stop At' Slider component
        0, # int | float (numeric value between 0.0 and 2.0) in 'Weight' Slider component
        "Image Prompt", # str in 'Type' Radio component
        encoded_image, # str (filepath or URL to image) in 'Image' Image component
        0, # int | float (numeric value between 0.0 and 1.0) in 'Stop At' Slider component
        0, # int | float (numeric value between 0.0 and 2.0) in 'Weight' Slider component
        "Image Prompt", # str in 'Type' Radio component
        fn_index=23
    )

There are no errors in the logs.

API response (value_4): {'visible': False, 'type': 'update'}
API response (Preview): {'visible': False, 'type': 'update'}
API response (Gallery): {'visible': True, 'value': [{'name': 'C:\TEmp\gradio\319c5d5643faf7e82331d38149a91a2257960a5c\image.png', 'data': None, 'is_file': True}], 'type': 'update'}

Full Console Log
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.43 seconds
Total time: 44.19 seconds
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 20
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] New suffix: intricate, highly detailed 8K, smooth, sharp focus, beautiful and aesthetic shape of face and body, artgerm, artstation, art by zexi guo and nira and junpei suzuki and gharliera and rinotuna
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.26 seconds
[Fooocus] Encoding negative #1 ...
Preparation time: 2.75 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 2.60 seconds
[Sampler] Fooocus sampler is activated.
67%|██████████████████████████████████████████████████████▋ | 20/30 [00:11<00:05, 1.72it/s]Requested to load SDXLRefiner
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.21 seconds
Refiner Swapped
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:19<00:00, 1.57it/s]
Image generated with private log at: S:\FooocusControl\Fooocus\outputs\2024-01-08\log.html
Generating and saving time: 26.30 seconds
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.44 seconds
Total time: 30.59 seconds

Actual generation result via the API (screenshot attached)

Generation via the GUI (screenshot attached)

Of course, we expect generation via the API to look like the GUI result in the screenshot, but that does not happen.
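
One thing worth checking (a guess, not a confirmed diagnosis): calls made through fn_index pass every argument purely by position, so a single misplaced or missing value silently shifts all the later ones, and the Image Prompt stop/weight/type fields may not land where the server expects them. gradio_client can print the full argument list for each endpoint, which makes it easier to line the predict() call above up parameter by parameter. A minimal sketch, assuming the web UI is reachable on a local port:

    from gradio_client import Client

    # Hypothetical local address; point this at wherever the Fooocus-Control UI is served.
    client = Client("http://127.0.0.1:7865/")

    # Print every endpoint (including the unnamed ones reachable only via fn_index)
    # together with the positional parameters it expects.
    client.view_api(all_endpoints=True)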

Pose

Can Pose support images like these? I tried but could not get it to work.

(example pose images attached)

Could you add LCM-LoRA support?

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the idea you'd like
A clear and concise description of what you want to happen.

PyraCanny + Depth throws an error

Describe the problem

It errors out when using Image Prompt with two images: one set to PyraCanny, the other set to Depth. Everything else is at defaults.

Full Console Log

E:\ai\FooocusControl>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --listen --port 7861
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.701
Total VRAM 24564 MB, total RAM 130930 MB
Running on local URL: http://0.0.0.0:7861
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
[Fooocus] Disabling smart memory
model_type EPS
adm 2560
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Refiner model loaded: E:\ai\FooocusControl\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors

To create a public link, set share=True in launch().
model_type EPS
adm 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: E:\ai\FooocusControl\Fooocus\models\checkpoints\sd_xl_base_1.0_0.9vae.safetensors
LoRAs loaded: [('sd_xl_offset_example-lora_1.0.safetensors', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5)]
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.81 seconds
App started successful. Use the app with http://localhost:7861/ or 0.0.0.0:7861
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 7.0
[Fooocus] Downloading control models ...
[Fooocus] Loading control models ...
img_size [384, 512]
E:\ai\FooocusControl\python_embeded\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3527.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Params passed to Resize transform:
width: 512
height: 384
resize_target: True
keep_aspect_ratio: True
ensure_multiple_of: 32
resize_method: minimal
Traceback (most recent call last):
File "E:\ai\FooocusControl\Fooocus\modules\async_worker.py", line 691, in worker
handler(task)
File "E:\ai\FooocusControl\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\ai\FooocusControl\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\ai\FooocusControl\Fooocus\modules\async_worker.py", line 237, in handler
pipeline.refresh_controlnets(
File "E:\ai\FooocusControl\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\ai\FooocusControl\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\ai\FooocusControl\Fooocus\modules\default_pipeline.py", line 65, in refresh_controlnets
cache = {get_1st_path(get_paths(ms)): cache_loader(l, ms) for l, ms in loaders.items()}
File "E:\ai\FooocusControl\Fooocus\modules\default_pipeline.py", line 65, in <dictcomp>
cache = {get_1st_path(get_paths(ms)): cache_loader(l, ms) for l, ms in loaders.items()}
File "E:\ai\FooocusControl\Fooocus\modules\default_pipeline.py", line 57, in cache_loader
return loader(path_1st if 1 == len(paths) else paths) if not path_1st in loaded_ControlNets else \
File "E:\ai\FooocusControl\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\ai\FooocusControl\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\ai\FooocusControl\Fooocus\modules\core.py", line 58, in load_controlnet
return fcbh.controlnet.load_controlnet(ckpt_filename)
File "E:\ai\FooocusControl\Fooocus\backend\headless\fcbh\controlnet.py", line 289, in load_controlnet
controlnet_data = fcbh.utils.load_torch_file(ckpt_path, safe_load=True)
File "E:\ai\FooocusControl\Fooocus\backend\headless\fcbh\utils.py", line 12, in load_torch_file
if ckpt.lower().endswith(".safetensors"):
AttributeError: 'list' object has no attribute 'lower'
Total time: 6.60 seconds
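
From the traceback, the failure seems to happen when two control types are selected at once: cache_loader then passes a list of checkpoint paths into load_controlnet, and fcbh.utils.load_torch_file calls .lower() on that list. A minimal sketch of one possible guard, using only names visible in the traceback and making no claim about the project's intended fix:

    # Hypothetical normalisation step: if several checkpoint paths arrive at once,
    # load each one separately instead of handing the whole list to a loader that
    # expects a single path string.
    def load_controlnets(loader, ckpt_paths):
        if isinstance(ckpt_paths, (list, tuple)):
            return [loader(p) for p in ckpt_paths]
        return loader(ckpt_paths)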

Recording the wildcard line for the log file

Is your feature request related to a problem? Please describe.
The log HTML page only contains the raw prompt; in other words, if I use a wildcard it looks like this: Picture of someone, __wildcardbackground__

Describe the idea you'd like
Basically, I'd like to see a wildcard field indicating the text of the selected wildcard line(s).
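
A purely illustrative sketch of what recording the resolved wildcard could look like (the folder name and regex are assumptions, not the project's actual wildcard implementation):

    # Expand __name__ wildcards and remember which line each one resolved to,
    # so the mapping can be written into log.html next to the raw prompt.
    import random
    import re
    from pathlib import Path

    def expand_wildcards(prompt: str, wildcard_dir: str = "wildcards"):
        resolved = {}  # wildcard name -> chosen line

        def pick(match: re.Match) -> str:
            name = match.group(1)
            lines = Path(wildcard_dir, f"{name}.txt").read_text(encoding="utf-8").splitlines()
            choice = random.choice([line for line in lines if line.strip()])
            resolved[name] = choice
            return choice

        return re.sub(r"__(\w+)__", pick, prompt), resolved

    # Assumes wildcards/wildcardbackground.txt exists.
    expanded, resolved = expand_wildcards("Picture of someone, __wildcardbackground__")
    # 'resolved' could then be added as an extra "Wildcards" field in the log entry.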

Mixing Image Prompt and Inpaint doesn't work

Describe the problem
When Image Prompt (ControlNet) and Inpaint are both enabled, only Inpaint is applied. Checking the result, the table's Canny control is not working.

(screenshots attached)

Full Console Log

[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
[Sampler] Fooocus sampler is activated.
 67%|████████████████████████████████████████████████████████████████████████████████████████████████████████                                                    | 20/30 [00:07<00:02,  3.43it/s]Requested to load SDXLRefiner
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.19 seconds
Refiner Swapped
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:11<00:00,  2.56it/s]
Image generated with private log at: /data/workspace/Fooocus-ControlNet-SDXL/outputs/2023-12-22/log.html
Generating and saving time: 14.44 seconds
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.47 seconds
Total time: 24.49 seconds
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 7.0
[Fooocus] Downloading inpainter ...
[Inpaint] Current inpaint model is /data/workspace/Fooocus-ControlNet-SDXL/models/inpaint/inpaint.fooocus.patch
[Fooocus] Downloading control models ...
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] New suffix: extremely detailed oil painting, unreal 5 render, rhads, Bruce Pennington, Studio Ghibli, tim hildebrandt, digital art, octane render, beautiful composition, trending on artstation, award-winning photograph, masterpiece
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] New suffix: intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by wlop and artgerm and ivan shishkin and andrey shishkin, masterpiece
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.13 seconds
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Fooocus] Image processing ...
[Fooocus] VAE encoding ...
[Fooocus] VAE inpaint encoding ...
Final resolution is (1024, 1024), latent is (1024, 1024).
Preparation time: 5.21 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 2.99 seconds
[Sampler] Fooocus sampler is activated.
 67%|████████████████████████████████████████████████████████████████████████████████████████████████████████                                                    | 20/30 [00:07<00:02,  3.43it/s]Requested to load SDXLRefiner
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.19 seconds
Refiner Swapped
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:11<00:00,  2.56it/s]
Image generated with private log at: /data/workspace/Fooocus-ControlNet-SDXL/outputs/2023-12-22/log.html
Generating and saving time: 17.43 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 2.90 seconds
[Sampler] Fooocus sampler is activated.
 67%|████████████████████████████████████████████████████████████████████████████████████████████████████████                                                    | 20/30 [00:07<00:02,  3.41it/s]Requested to load SDXLRefiner
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.20 seconds
Refiner Swapped
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:11<00:00,  2.55it/s]
Image generated with private log at: /data/workspace/Fooocus-ControlNet-SDXL/outputs/2023-12-22/log.html
Generating and saving time: 17.46 seconds
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.48 seconds
Total time: 41.95 seconds

Model file is always corrupted and re-downloaded

Describe the problem
When using the Depth image prompt, the model file ZoeD_M12_N.pt is always reported as corrupted and gets re-downloaded. I checked the file's checksum against the one on Hugging Face and it matches.

c97f94c4d53c5b788af46c5da0462262aebb37ea116fd70014bcbba93146c33b  ZoeD_M12_N.pt

Full Console Log

WARNING: Running on CPU. This will be slow. Check your CUDA installation.
img_size [384, 512]
/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp:3527.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Params passed to Resize transform:
	width:  512
	height:  384
	resize_target:  True
	keep_aspect_ratio:  True
	ensure_multiple_of:  32
	resize_method:  minimal
Traceback (most recent call last):
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/patch.py", line 497, in loader
    result = original_loader(*args, **kwargs)
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 1014, in load
    return _load(opened_zipfile,
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 1422, in _load
    result = unpickler.load()
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 1392, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 1366, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 381, in default_restore_location
    result = fn(storage, location)
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 274, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/serialization.py", line 258, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/async_worker.py", line 704, in worker
    handler(task)
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/async_worker.py", line 250, in handler
    pipeline.refresh_controlnets(
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/testing/miniconda3/envs/fooocus-controlnet/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/default_pipeline.py", line 69, in refresh_controlnets
    cache_controlnet_preprocess = {get_1st_path(get_paths(ms)): cache_loader(l, ms) for l, ms in preprocess_loaders.items()}
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/default_pipeline.py", line 69, in <dictcomp>
    cache_controlnet_preprocess = {get_1st_path(get_paths(ms)): cache_loader(l, ms) for l, ms in preprocess_loaders.items()}
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/default_pipeline.py", line 57, in cache_loader
    return loader(path_1st if 1 == len(paths) else paths) if not path_1st in loaded_ControlNets else \
  File "/Users/testing/Fooocus-ControlNet-SDXL/fooocus_extras/controlnet_preprocess_model/ZeoDepth/__init__.py", line 26, in __init__
    model.load_state_dict(torch.load(model_path)['model'])
  File "/Users/testing/Fooocus-ControlNet-SDXL/modules/patch.py", line 513, in loader
    raise ValueError(exp)
ValueError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
File corrupted: /Users/testing/Fooocus-ControlNet-SDXL/models/controlnet/ZoeD_M12_N.pt
Fooocus has tried to move the corrupted file to /Users/testing/Fooocus-ControlNet-SDXL/models/controlnet/ZoeD_M12_N.pt.corrupted
You may try again now and Fooocus will download models again
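
The underlying error is the CUDA deserialization failure on a CPU-only machine, which then gets mis-reported as a corrupted file. The fix suggested by the error message itself would presumably be a map_location argument at the failing call site (a sketch, not the project's actual patch; model and model_path are the variables already used in ZeoDepth/__init__.py per the traceback):

    import torch

    # Load the ZoeDepth checkpoint onto whatever device is actually available
    # instead of assuming the CUDA device it was saved from.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    state = torch.load(model_path, map_location=torch.device(device))
    model.load_state_dict(state["model"])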

Colab Notebook is broken

When using the Colab notebook, it tries to cd into /content/Fooocus, which does not exist because the git clone creates a directory called Fooocus-Control instead.
If one fixes this manually, the next error is that the module "einops" is not found.
This is where I'm stuck.
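
A workaround cell along the lines of the earlier Colab issue (same URL and paths as reported there) would be:

    !pip install einops
    !git clone https://github.com/fenneishi/Fooocus-Control.git /content/Fooocus
    %cd /content/Fooocus
    !python entry_with_update.py --preset realistic --share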

Download link for people in the world outside China

Hi, big thanks for this fork!

People outside China cannot download files from Baidu (because of its requirement to register/log in with a Chinese phone number). Please provide a working link to the massive 35 GB+ file (maybe Mega, Google Drive, etc.?); I think best of all would be a magnet/torrent link :-)

Kind Regards from Sweden

Chinese translation for the UI (i18n)

This is an updated translation following the discussion in lllyasviel#757.

Just put this JSON file into the /fooocus/language folder:

zh_CN.json

then add the --language zh_CN flag.

After adding it, the command should look like this: .\python_embeded\python.exe -s Fooocus\entry_with_update.py --language zh_CN

This is a Chinese UI translation based on my earlier discussion thread: lllyasviel#757

Put the zh_CN.json file under /language in the root directory, then append --language zh_CN to the end of the first line of run.bat (and of the anime and realistic variants) and save.

@fenneishi should I push this? Besides the file above and the added flag, I'm not sure whether the en file needs to be included as well. Maybe it's better if you do it - -! I only know a tiny bit of code, haha.

Can't update to the newest version

Describe the problem
I downloaded the zip and have been using it normally for a week already; now I've found this bug: it can't update to the newest version.

Full Console Log
Paste full console log here. You will make our job easier if you give a full log.

Bug: when I click run.bat, cmd.exe shows output like the following:

G:\foocus-controlnet>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --preset realistic
Update failed.
Repository not found at G:\foocus-controlnet\Fooocus
Update succeeded.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.701

I found the commit that changed the code of entry_with_update.py, and I made the same change myself.
Nothing changes when I restart run.bat...

So, what should I do?
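
A likely cause, judging only from the log: the zip download contains no .git repository, so pygit2 reports "Repository not found" and entry_with_update.py has nothing to pull from. Installing from an actual git clone should give the updater a repository to work with, for example (after moving the old Fooocus folder out of the way):

    git clone https://github.com/fenneishi/Fooocus-Control.git G:\foocus-controlnet\Fooocus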

PLEASE ASK before downloading models during installation

Is your feature request related to a problem? Please describe.
Yes, everyone in Germany has slow internet, meaning ~2-10 MB/s,
so I need about 15 minutes to download a 7 GB model.

Describe the idea you'd like
Ask during installation: "download model (y/n)?",
because maybe I already have all the models, just not in the directory Fooocus expects :D
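
A minimal sketch of the requested behaviour, purely illustrative (the wrapper and its call sites are assumptions; Fooocus's real downloader is a different function):

    import os
    from torch.hub import download_url_to_file

    def maybe_download(url: str, target_path: str) -> str:
        # Ask before downloading a model, so a locally available copy can be used instead.
        if os.path.exists(target_path):
            return target_path
        answer = input(f"Download model {os.path.basename(target_path)} (y/n)? ").strip().lower()
        if answer != "y":
            print(f"Skipping download; place the file manually at {target_path}")
            return target_path
        download_url_to_file(url, target_path)
        return target_path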

A very practical feature, [inpaint upload] mask redraw. I hope it can be added!

Hello, I learned about your project from lllyasviel's repo, downloaded it and tried it. It is indeed very good and implements some functions that Fooocus did not have before. As a graphic designer, I am looking for a better tool; if it could provide the WebUI's mask-redraw (inpaint upload) function, it would help my design work. (screenshot attached)

And I'm sure most designers would love this feature. Thank you for your work!

Install instructions typo

Every mention of cd Fooocus should be replaced with cd Fooocus-ControlNet-SDXL, or the clone should target a folder with a more readable name (or just rename the repo, for that matter).

ControlNet bug

Describe the problem
Using OpenPose or other newly added ControlNets gets stuck and generates very, very slowly.
Full Console Log
Paste full console log here. You will make our job easier if you give a full log.

Apparent CUDA error at startup: "Torch not compiled with CUDA enabled"

On startup of the latest version of F-C-SDXL, the last line of the console log shows the message "Torch not compiled with CUDA enabled".
Shouldn't a CUDA-enabled build of torch be used on CUDA hardware?

Full Console Log

(fooocusControl) PS D:\Fooocus_win64\Fooocus-ControlNet-SDXL> python entry_with_update.py
Already up-to-date
Update succeeded.
Python 3.10.14 | packaged by Anaconda, Inc. | (main, Mar 21 2024, 16:20:14) [MSC v.1916 64 bit (AMD64)]
Fooocus version: 2.1.701
Downloading: "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors" to D:\Fooocus_win64\Fooocus-ControlNet-SDXL\models\loras\sd_xl_offset_example-lora_1.0.safetensors

100%|█████████████████████████████████████████████████████████████████████████████| 47.3M/47.3M [00:00<00:00, 67.1MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/xlvaeapp.pth" to D:\Fooocus_win64\Fooocus-ControlNet-SDXL\models\vae_approx\xlvaeapp.pth

100%|███████████████████████████████████████████████████████████████████████████████| 209k/209k [00:00<00:00, 4.70MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/vaeapp_sd15.pt" to D:\Fooocus_win64\Fooocus-ControlNet-SDXL\models\vae_approx\vaeapp_sd15.pth

100%|███████████████████████████████████████████████████████████████████████████████| 209k/209k [00:00<00:00, 4.96MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/xl-to-v1_interposer-v3.1.safetensors" to D:\Fooocus_win64\Fooocus-ControlNet-SDXL\models\vae_approx\xl-to-v1_interposer-v3.1.safetensors

100%|█████████████████████████████████████████████████████████████████████████████| 6.25M/6.25M [00:00<00:00, 29.0MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/fooocus_expansion.bin" to D:\Fooocus_win64\Fooocus-ControlNet-SDXL\models\prompt_expansion\fooocus_expansion\pytorch_model.bin

100%|███████████████████████████████████████████████████████████████████████████████| 335M/335M [00:04<00:00, 85.0MB/s]
C:\Users\micro\.conda\envs\fooocusControl\lib\site-packages\gradio_client\documentation.py:103: UserWarning: Could not get documentation group for <class 'gradio.mix.Parallel'>: No known documentation group for module 'gradio.mix'
warnings.warn(f"Could not get documentation group for {cls}: {exc}")
C:\Users\micro\.conda\envs\fooocusControl\lib\site-packages\gradio_client\documentation.py:103: UserWarning: Could not get documentation group for <class 'gradio.mix.Series'>: No known documentation group for module 'gradio.mix'
warnings.warn(f"Could not get documentation group for {cls}: {exc}")
Exception in thread Thread-3 (worker):
Traceback (most recent call last):
File "C:\Users\micro\.conda\envs\fooocusControl\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\micro\.conda\envs\fooocusControl\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\modules\async_worker.py", line 21, in worker
import modules.default_pipeline as pipeline
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\modules\default_pipeline.py", line 1, in <module>
import modules.core as core
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\modules\core.py", line 3, in <module>
from modules.patch import patch_all
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\modules\patch.py", line 4, in <module>
import fcbh.model_base
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\backend\headless\fcbh\model_base.py", line 2, in <module>
from fcbh.ldm.modules.diffusionmodules.openaimodel import UNetModel
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\backend\headless\fcbh\ldm\modules\diffusionmodules\openaimodel.py", line 16, in <module>
from ..attention import SpatialTransformer
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\backend\headless\fcbh\ldm\modules\attention.py", line 10, in <module>
from .sub_quadratic_attention import efficient_dot_product_attention
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\backend\headless\fcbh\ldm\modules\sub_quadratic_attention.py", line 27, in <module>
from fcbh import model_management
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\backend\headless\fcbh\model_management.py", line 114, in <module>
total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
File "D:\Fooocus_win64\Fooocus-ControlNet-SDXL\backend\headless\fcbh\model_management.py", line 83, in get_torch_device
return torch.device(torch.cuda.current_device())
File "C:\Users\micro\.conda\envs\fooocusControl\lib\site-packages\torch\cuda\__init__.py", line 787, in current_device
_lazy_init()
File "C:\Users\micro\.conda\envs\fooocusControl\lib\site-packages\torch\cuda\__init__.py", line 293, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
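
The message means the installed torch wheel is a CPU-only build, not that anything was compiled incorrectly on your side. A hedged sketch of the usual remedy (the CUDA version in the index URL is an example; pick the one matching your driver from pytorch.org), followed by a quick check:

    # Inside the fooocusControl conda env:
    #   pip uninstall torch torchvision torchaudio
    #   pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
    import torch
    print(torch.__version__)           # should end in +cu121 (or similar), not +cpu
    print(torch.cuda.is_available())   # should print True on a working CUDA setup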

Could you add an upload-mask feature?

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the idea you'd like
A clear and concise description of what you want to happen.
The WebUI has an upload-mask feature that is extremely valuable; I wonder when you might be able to add it. Thanks!
