
wkpark / sd-webui-model-mixer

96 stars · 3 watchers · 4 forks · 5.14 MB

Checkpoint model mixer/merger extension

License: GNU Affero General Public License v3.0

Python 98.63% CSS 0.22% JavaScript 1.15%
sd-webui stable-diffusion-webui stable-diffusion-webui-plugin

sd-webui-model-mixer's Introduction

Checkpoint Model Mixer extension

This is another model merger/mixer.

It was created when I was looking for a way to use SuperMerger with other extensions in the txt2img tab. It doesn't have all the features of SuperMerger, but it inherits the core functionality.

pros

  • Supports merging multiple models sequentially (up to 5).
  • Specialises in block-level merging, including high-speed UNet-level partial updates.
  • Can be used without saving the merged model, like SuperMerger.
  • Supports saving already-merged models.
  • Supports merging multiple LyCORIS/LoRA networks into a checkpoint, or extracting a merge as a LoRA/LyCORIS.
  • Merge information is saved in the generated image; when the image is loaded, the settings used for the merged model are restored.
  • Supports block-level rebasin merging.
  • Supports XYZ plots.

cons

  • Doesn't support many of the calculation methods supported by SuperMerger.

screenshot

image

sd-webui-model-mixer's People

Contributors

wkpark


sd-webui-model-mixer's Issues

Extracting LoRA/LyCORIS fails

loading original SDXL model
building U-Net
no_half = False
loading U-Net...
U-Net: None
building text encoders
loading text encoders...
text encoder 1:
text encoder 2:
create LoRA network. base dim (rank): 64, alpha: 64
neuron dropout: p=None, rank dropout: p=None, module dropout: p=None
create LoRA for Text Encoder 1:
create LoRA for Text Encoder 2:
create LoRA for Text Encoder: 264 modules.
create LoRA for U-Net: 722 modules.
create LoRA network. base dim (rank): 64, alpha: 64
neuron dropout: p=None, rank dropout: p=None, module dropout: p=None
create LoRA for Text Encoder 1:
create LoRA for Text Encoder 2:
create LoRA for Text Encoder: 264 modules.
create LoRA for U-Net: 722 modules.
Calculate svd: 0%| | 0/986 [00:00<?, ?it/s]
Text encoder is different. 0.0024471282958984375 > 0.0001ttn_k_proj
264/264 100%: lora_te2_text_model_encoder_layers_31_mlp_fc2
0/722 0%: lora_unet_down_blocks_1_attentions_0_proj_inin
Traceback (most recent call last):
File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "H:\IA\Packages\Stable Diffusion WebUI Forge\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 4730, in extract_lora_from_current_model
extracted_lora = svd(dict(state_dict_base), dict(state_dict_trained), None, lora_dim, min_diff=min_diff, clamp_quantile=clamp_quantile, device=calc_device,
File "H:\IA\Packages\Stable Diffusion WebUI Forge\extensions\sd-webui-model-mixer\scripts\kohya\extract_lora_from_models.py", line 186, in svd
if torch.allclose(module_t.weight, module_o.weight):
RuntimeError: BFloat16 did not match Half

I tried every combination of settings, but it always stops at 27%.
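The traceback ends at `torch.allclose(module_t.weight, module_o.weight)` failing with "BFloat16 did not match Half", i.e. the two models' weights are stored in different dtypes. A minimal sketch of one possible workaround (not the extension's actual fix) is to cast both tensors to a common dtype before comparing; `safe_allclose` below is a hypothetical helper name:

```python
import torch

def safe_allclose(a: torch.Tensor, b: torch.Tensor, atol: float = 1e-4) -> bool:
    # torch.allclose requires matching dtypes, so comparing a BFloat16
    # weight against a Half weight raises RuntimeError. Promote both to
    # float32 first so mixed-precision checkpoints can still be diffed.
    return torch.allclose(a.to(torch.float32), b.to(torch.float32), atol=atol)

# The failure mode from the log, reproduced on toy tensors:
a = torch.ones(4, dtype=torch.bfloat16)
b = torch.ones(4, dtype=torch.float16)
# torch.allclose(a, b)  -> RuntimeError: BFloat16 did not match Half
assert safe_allclose(a, b)
```

Casting to float32 costs a temporary copy per tensor, but only during the SVD extraction pass.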

Error saving merged models: KeyError: 'sd_merge_recipe'

The "save current merge model" feature hasn't worked for me yet. In earlier versions I'd end up with a checkpoint that showed up as mostly "junk" data when loaded into the "model toolkit" extension. I checked them there because they were odd sizes and didn't work in the app or for training. Those files were generated if I unticked "Safetensors". Leaving it ticked gave a different error and failed to generate a checkpoint (sorry, I don't have a log of that).

At the moment, when I try to save a model that was created in an open session, I get the error "KeyError: 'sd_merge_recipe'":

To create a public link, set `share=True` in `launch()`.
Startup time: 1.7s (load scripts: 0.9s, create ui: 0.3s, gradio launch: 0.3s).
Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 811, in save_current_model
    metadata["sd_merge_recipe"] = json.dumps(metadata["sd_merge_recipe"])
KeyError: 'sd_merge_recipe'
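The failing line in `save_current_model` assumes the `sd_merge_recipe` key is always present in the metadata dict. A hedged sketch of a defensive version (hypothetical helper, not the extension's actual code):

```python
import json

def serialize_merge_metadata(metadata: dict) -> dict:
    # model_mixer.py does: metadata["sd_merge_recipe"] = json.dumps(...)
    # unconditionally, which raises KeyError when no recipe was recorded
    # for the current session. Guarding the key avoids the crash; the
    # saved model simply carries no recipe in that case.
    if "sd_merge_recipe" in metadata:
        metadata["sd_merge_recipe"] = json.dumps(metadata["sd_merge_recipe"])
    return metadata

# Missing recipe: no crash, metadata passes through unchanged.
assert serialize_merge_metadata({}) == {}
```

Why the key goes missing in the first place (e.g. after a restart that drops session state) is the actual bug; the guard only prevents the save from aborting.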

Autohelper generates the same pictures

I've been trying to understand how AutoMergerHelper works, but noticed that it produces the same pictures with a 0.5 merge ratio instead of variable rates. It looks like the default merging function overrides AMH and applies a plain 0.5 merge after each iteration.
If I observed correctly, it renders the first image in the folder with the current variable ratio, and all other images with a simple 0.5 ratio.

KeyError: 'model.diffusion_model.middle_block.0.in_layers.0.weight'

Great extension. I ran into a little bug and managed to work around it by selecting only a range of layers. Anything higher than IN09 (so IN10, IN11, IN12, and MID) made it fall over. It's probably a trivial little regex or something that needs fixing.

*** Error running before_process: /home/stable-diffusion-webui/extensions/sd-webui-model-mixer/scripts/model_mixer.py
Traceback (most recent call last):
File "/home/stable-diffusion-webui/modules/scripts.py", line 611, in before_process
script.before_process(p, *script_args)
File "/home/stable-diffusion-webui/extensions/sd-webui-model-mixer/scripts/model_mixer.py", line 2191, in before_process
first_permutation, y = weight_matching(permutation_spec, models["model_a"], theta_0, usefp16=usefp16, device=device)
File "/home/stable-diffusion-webui/extensions/sd-webui-model-mixer/scripts/weight_matching.py", line 811, in weight_matching
w_b = get_permuted_param(ps, perm, wk, params_b, except_axis=axis)
File "/home/stable-diffusion-webui/extensions/sd-webui-model-mixer/scripts/weight_matching.py", line 771, in get_permuted_param
w = params[k]
KeyError: 'model.diffusion_model.middle_block.0.in_layers.0.weight'
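The KeyError comes from `get_permuted_param` indexing `params[k]` directly, while a partial (block-range) merge leaves some UNet keys, such as `model.diffusion_model.middle_block.*`, out of the state dict. A minimal sketch of a defensive lookup (hypothetical workaround, not the actual fix):

```python
def get_param_or_skip(params: dict, key: str):
    # weight_matching.py does `w = params[k]` and crashes when a key is
    # absent from the (partially populated) state dict. Returning None
    # lets the caller skip permuting weights that were never loaded.
    return params.get(key)  # None signals "skip this weight"

missing = "model.diffusion_model.middle_block.0.in_layers.0.weight"
assert get_param_or_skip({}, missing) is None
```

The caller would then have to treat `None` as "leave this block unpermuted", which matches the user's observed workaround of restricting the layer range.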

I hope you find a way to simplify the extension. I've been using https://github.com/ashen-sensored/sd-webui-runtime-block-merge for a long time; it's a little simpler, less powerful, and doesn't have git rebasin, but neither has a great interface. My suggestions would be:

  • Improve labels with "text encoder", "text decoder" and "unet" terminology. Maybe one section for each?
  • Make the sliders update the textbox in real time.
  • A graphical interface - spline nodes that could be dragged, or so you could drag the mouse across a box and set a load of nodes in a streak - would be amazing, although, I realise not easy to implement.

Best iteration is always 0

I've run around 10 auto merges so far, and for some reason they always end with it saying "best iteration: 0".

Also the entire time seems to be spent on evaluation.

Thoughts?

Results: 'hyper_score'
   Best score: 0.5331370521490586
   Best parameter set:
      'model_b.BASE'  : 0.26
      'model_b.IN00'  : 0.18
      'model_b.IN01'  : 0.47
      'model_b.IN02'  : 0.01
      'model_b.IN03'  : 0.15
      'model_b.IN04'  : 0.71
      'model_b.IN05'  : 0.11
      'model_b.IN06'  : 0.64
      'model_b.IN07'  : 0.49
      'model_b.IN08'  : 0.91
      'model_b.M00'   : 0.26
      'model_b.OUT00' : 0.7
      'model_b.OUT01' : 0.47
      'model_b.OUT02' : 0.67
      'model_b.OUT03' : 0.13
      'model_b.OUT04' : 0.87
      'model_b.OUT05' : 0.76
      'model_b.OUT06' : 0.56
      'model_b.OUT07' : 0.51
      'model_b.OUT08' : 0.36
      'model_c.BASE'  : 0.66
      'model_c.IN00'  : 0.85
      'model_c.IN01'  : 0.27
      'model_c.IN02'  : 0.26
      'model_c.IN03'  : 0.18
      'model_c.IN04'  : 0.48
      'model_c.IN05'  : 0.82
      'model_c.IN06'  : 0.03
      'model_c.IN07'  : 0.65
      'model_c.IN08'  : 0.36
      'model_c.M00'   : 0.73
      'model_c.OUT00' : 0.59
      'model_c.OUT01' : 0.37
      'model_c.OUT02' : 0.52
      'model_c.OUT03' : 0.05
      'model_c.OUT04' : 0.89
      'model_c.OUT05' : 0.0
      'model_c.OUT06' : 0.16
      'model_c.OUT07' : 0.7
      'model_c.OUT08' : 0.86
   Best iteration: 0

   Random seed: 3908740

   Evaluation time   : 26290.356142520905 sec    [100.0 %]
   Optimization time : 0.13603782653808594 sec    [0.0 %]
   Iteration time    : 26290.492180347443 sec    [120.6 sec/iter]

 - Best weights para =  ['0.26,0.18,0.47,0.01,0.15,0.71,0.11,0.64,0.49,0.91,0.26,0.7,0.47,0.67,0.13,0.87,0.76,0.56,0.51,0.36', '0.66,0.85,0.27,0.26,0.18,0.48,0.82,0.03,0.65,0.36,0.73,0.59,0.37,0.52,0.05,0.89,0,0.16,0.7,0.86'] [True, True, False]
 - Best alpha para =  []
debugs =  ['elemental merge']
use_extra_elements =  True
 - mm_max_models =  3
config hash =  ffccadf55e18e3362e614c3d1a24e39b5abf545d5c8d10d7af27c1ece98694e6
  - mm_use [True, True, False]
  - model_a umbra_mecha.fp16.safetensors [80da973b09]
  - base_model sd_xl_base_1.0.safetensors [31e35c80fc]
  - max_models 3
  - models ['tpn34pdfv10js2ts05tensoradjust.fp16.safetensors [cf4f62151c]', '4thtail3fix.fp16.safetensors [bdc6379d5b]']
  - modes ['DARE', 'Add-Diff']
  - calcmodes ['Normal', 'Normal']
  - usembws [['ALL'], ['ALL']]
  - weights ['0.26,0.18,0.47,0.01,0.15,0.71,0.11,0.64,0.49,0.91,0.26,0.7,0.47,0.67,0.13,0.87,0.76,0.56,0.51,0.36', '0.66,0.85,0.27,0.26,0.18,0.48,0.82,0.03,0.65,0.36,0.73,0.59,0.37,0.52,0.05,0.89,0,0.16,0.7,0.86']
  - alpha [0.5, 0.5]
  - adjust
  - use elemental [False, False]
  - elementals ['', '']
  - Parse elemental merge...
model_a = umbra_mecha.fp16
Loading from file D:\stable-diffusion-webui\models\Stable-diffusion\umbra_mecha.fp16.safetensors...
isxl = True , sd2 = False
compact_mode =  True
 - check possible UNet partial update...
 - partial changed blocks =  ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08']
 - UNet partial update mode
Open state_dict from file D:\stable-diffusion-webui\models\Stable-diffusion\tpn34pdfv10js2ts05tensoradjust.fp16.safetensors...
mode = DARE, mbw mode, alpha = [0.26, 0.18, 0.47, 0.01, 0.15, 0.71, 0.11, 0.64, 0.49, 0.91, 0.26, 0.7, 0.47, 0.67, 0.13, 0.87, 0.76, 0.56, 0.51, 0.36]
Stage #1/4: 100%|██████████████████████████████████████████████████████████████████| 2263/2263 [00:57<00:00, 39.42it/s]
Check uninitialized #2/4: 100%|████████████████████████████████████████████████| 2263/2263 [00:00<00:00, 215202.24it/s]
Open state_dict from file D:\stable-diffusion-webui\models\Stable-diffusion\4thtail3fix.fp16.safetensors...
mode = Add-Diff, mbw mode, alpha = [0.66, 0.85, 0.27, 0.26, 0.18, 0.48, 0.82, 0.03, 0.65, 0.36, 0.73, 0.59, 0.37, 0.52, 0.05, 0.89, 0.0, 0.16, 0.7, 0.86]
Stage #3/4: 100%|█████████████████████████████████████████████████████████████████| 2263/2263 [00:20<00:00, 109.25it/s]
Save unchanged weights #4/4: 100%|███████████████████████████████████████████████████████████| 253/253 [00:00<?, ?it/s]
 - merge processing in 92.6s (prepare: 14.4s, merging: 78.2s).
 - loading scripts.patches...
 - lora patch
 - Textencoder(BASE) has been successfully updated
 - update UNet block input_blocks.0.
 - update UNet block input_blocks.1.
 - update UNet block input_blocks.2.
 - update UNet block input_blocks.3.
 - update UNet block input_blocks.4.
 - update UNet block input_blocks.5.
 - update UNet block input_blocks.6.
 - update UNet block input_blocks.7.
 - update UNet block input_blocks.8.
 - update UNet block middle_block.
 - update UNet block output_blocks.0.
 - update UNet block output_blocks.1.
 - update UNet block output_blocks.2.
 - update UNet block output_blocks.3.
 - update UNet block output_blocks.4.
 - update UNet block output_blocks.5.
 - update UNet block output_blocks.6.
 - update UNet block output_blocks.7.
 - update UNet block output_blocks.8.
 - update UNet block time_embed.
 - update UNet block out.
 - UNet partial blocks have been successfully updated
 - Reload full state_dict...
 - remove old checkpointinfo
Unloading model 4 over the limit of 3...
 - model 3: umbra_mecha.fp16 + dare_weights(diff tpn34pdfv10js2ts05tensoradjust.fp16) x alpha_0 + (4thtail3fix.fp16 - sd_xl_base_1.0.safetensors [31e35c80fc]) x alpha_1(0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5),(0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5).safetensors [44e1eaaa09]
Creating model from config: D:\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
Model loaded in 3.4s (create model: 0.6s, apply weights to model: 1.8s, apply half(): 0.1s, load VAE: 0.5s, calculate empty prompt: 0.1s).
100%|██████████████████████████████████████████████████████████████████████████████████| 28/28 [00:09<00:00,  3.07it/s]
Total progress: 28it [00:09,  2.92it/s]
 > score_origin = -0.967432594299316
 - Result score = 0.2753925362860547

The extension list doesn't come up and I can't create an image

When I restart webui after installing, the extension list doesn't load and I can't create an image.
After I uninstalled Model Mixer, images are created again.
Is there any way to use Model Mixer again?
I really want to use it.
Thank you, and good luck.


A model merged but not saved leads to "Cannot copy out of meta tensor; no data!", which persists through restart; I had to disable the extension to continue

Hey, first off, congratulations on your extension. Having the option to merge up to 5 models, right on the txt2img page, is pretty cool.

I don't have time to troubleshoot this right now, but I wanted to send in the error I received in case it makes sense to you. I had been merging 5 models and generating images for an hour or so, then left the PC. When I came back and tried to generate another image, I got the error "cannot copy out of meta tensor; no data!".

When I closed A1111 (latest version) and restarted it, it seemed to try to reload the last checkpoint, which is listed as a huge text string referencing the temporary merged model you use without saving. (I also noticed the string is huge in PNG info and below the preview.) I'm not sure if there are too many characters to handle, or if it's just upset that it can't find that file, but it seems to try to reload that "file" on restart and fails. It doesn't release that request, so even changing models keeps the same error popping up. I restarted multiple times and wasn't able to generate an image with any model until I finally disabled your extension.

One thing of note: my last prompt is persistently recalled into the txt2img input bar when I start A1111. In the past I used an extension called State that would restore your exact state from the last time you closed A1111. I don't have it installed anymore, as it stopped working with v1.6. However, prompts still seem to get recalled after a reset, so perhaps a remnant of that extension is related.
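The "Cannot copy out of meta tensor; no data!" message is a generic PyTorch failure: a module whose parameters live on the "meta" device has shapes but no storage, so `.to(cpu)` has nothing to copy. A small generic sketch (not the extension's code) reproducing it and showing PyTorch's intended escape hatch, `Module.to_empty()`:

```python
import torch

# Parameters created on the "meta" device hold no data, only shape/dtype.
layer = torch.nn.Linear(4, 4, device="meta")

try:
    layer.to("cpu")  # raises NotImplementedError: Cannot copy out of meta tensor
except NotImplementedError:
    # to_empty() allocates real (uninitialized) storage on the target
    # device instead of copying; actual weights must then be re-loaded
    # from a checkpoint before the module is usable.
    layer = layer.to_empty(device="cpu")

assert layer.weight.device.type == "cpu"
```

In this report, the webui apparently kept a reference to a model whose weights were never materialized (the unsaved merge), so every subsequent `send_model_to_cpu` hit the same wall until the extension was disabled.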

Here's a copy-paste of a few startups and the error; hope it's insightful:



100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.53it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.62it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.68it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.69it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.68it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.69it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.61it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.60it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.43it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.45it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.49it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.71it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████| 2840/2840 [10:11<00:00, 4.65it/s]
Unloading model 4 over the limit of 2: SDXL\2023-09-02 - Topnotch (Good Series) and #25 Gorge n1 supermerged - did lots of images.safetensors [b687629de1]
Unloading model 3 over the limit of 2: SDXL_2023-09-01 - Topnotch Artstyle - #2.5 - During Gorge N1 Stream - 14img - TXT ON - B8 - 1e5-step00001500 + SDXL_2023-08-27 - SDXL-Merge - Topnotch - 3 models (8-25 - 8-24 - 8-26) + SDXL_2023-08-24 - Topnotch Artstyle - 10img-TXT off - 1500 (Cont from 1k) + SDXL_2023-08-31 - Topnotch Artstyle - 12img - TXT on - 20rep - Batch 4 - bucket on -(Good Series) - 2000 steps + SDXL_2023-08-28 - SDXL Merge - 8k Topnotch 20 doubled dif smooth - Use .2 for weight then good.safetensors [6fc4c1bd77]
Reusing loaded model SDXL_2023-09-03 - Supermerge - add dif - 2 + SDXL_2023-08-27 - SDXL-Merge - Topnotch - 3 models (8-25 - 8-24 - 8-26) + SDXL_Topnotch Artstyle 20img-20rep-Txt-On-step00001500 + SDXL_2023-08-31 - Topnotch Artstyle (Mj greed theme park 3 - TXT enc on) - 12img-step00002000 + SDXL_2023-08-31 - Topnotch Artstyle - 12img - TXT on - 20rep - Batch 4 - bucket on -(Good Series) - 2000 steps + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001800.safetensors [d72e289c4d] to load SDXL\2023-09-04 - topnotch artstyle - 20img - TXT ON - B2 - 1e5-step00003200.safetensors
changing setting sd_model_checkpoint to SDXL\2023-09-04 - topnotch artstyle - 20img - TXT ON - B2 - 1e5-step00003200.safetensors: NotImplementedError
Traceback (most recent call last):
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\options.py", line 140, in set
option.onchange()
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\initialize_util.py", line 170, in
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 738, in reload_model_weights
send_model_to_cpu(sd_model)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
m.to(devices.cpu)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
return super().to(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!

Checkpoint SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29] not found; loading fallback 1- Good - Tuckercarlson - 2022-10-12T18-46-24_tuckercarlson_-2000_continued-16_changed_images-_default_reg_16_training_images_4000_max_training_steps_tuckercarlson_token_person_class_word-0047-0000-0396.safetensors [27411f7a80]
*** Error completing request
*** Arguments: ('task(whp4o86efmaabuf)', 'topnotch artstyle, location, HouseholdDevice', '', [], 40, 'DPM++ 2M Karras', 1, 1, 5, 1208, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001DE80617DF0>, 0, False, 'SDXL\sd_xl_refiner_1.0.safetensors [7440042bbd]', 0.8, -1, False, -1, 0, 0, 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE80616950>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE805ED270>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE46F2AA40>, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 
'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29]', 'None', 5, True, False, False, False, False, 'None', 'None', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', 
'0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', None, None, False, None, None, False, None, None, False, 50, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
processed = processing.process_images(p)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\processing.py", line 719, in process_images
sd_models.reload_model_weights()
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
send_model_to_cpu(sd_model)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
m.to(devices.cpu)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
return super().to(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!


changing setting sd_model_checkpoint to 2023-05-17 - Topnotch (Electronics Test 20 img) - [.50 Normal Flip] - 2500 - epoc.ckpt [3f056ed8bb]: NotImplementedError
Traceback (most recent call last):
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\options.py", line 140, in set
option.onchange()
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\initialize_util.py", line 170, in
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
send_model_to_cpu(sd_model)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
m.to(devices.cpu)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
return super().to(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!

Checkpoint SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29] not found; loading fallback 1- Good - Tuckercarlson - 2022-10-12T18-46-24_tuckercarlson_-2000_continued-16_changed_images-_default_reg_16_training_images_4000_max_training_steps_tuckercarlson_token_person_class_word-0047-0000-0396.safetensors [27411f7a80]
*** Error completing request
*** Arguments: ('task(xo3ghwul7h6e9gt)', 'topnotch artstyle, location, HouseholdDevice', '', [], 40, 'DPM++ 2M Karras', 1, 1, 5, 1208, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001DDA1185C90>, 0, False, 'SDXL\sd_xl_refiner_1.0.safetensors [7440042bbd]', 0.8, -1, False, -1, 0, 0, 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DDA1184D00>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DDA1184970>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE805F2440>, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 
'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29]', 'None', 5, True, False, False, False, False, 'None', 'None', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', 
'0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', None, None, False, None, None, False, None, None, False, 50, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
processed = processing.process_images(p)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\processing.py", line 719, in process_images
sd_models.reload_model_weights()
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
send_model_to_cpu(sd_model)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
m.to(devices.cpu)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
return super().to(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!




An error occurs during merging.

I tried different models, with the same result.
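For anyone digging into this: the repeated `NotImplementedError` is raised by PyTorch itself. A tensor on the `meta` device carries only shape and dtype, no storage, so `.to("cpu")` has nothing to copy. A minimal sketch (illustrative only, not code from webui or this extension) that reproduces the same failure:

```python
import torch

# A module created on the "meta" device has parameter shapes/dtypes but no data.
layer = torch.nn.Linear(4, 4, device="meta")

try:
    # Same call path as send_model_to_cpu() -> m.to(devices.cpu) in the traceback.
    layer.to("cpu")
except NotImplementedError as err:
    # Raises the same "Cannot copy out of meta tensor; no data!" error.
    print(err)

# PyTorch's supported escape hatch is Module.to_empty(), which allocates
# uninitialized storage on the target device instead of copying:
layer = layer.to_empty(device="cpu")
print(layer.weight.device)
```

So the failure presumably means the merged model (or part of it) is still on the meta device at the moment webui tries to send it to the CPU.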

debugs =  ['elemental merge']
use_extra_elements =  True
 - mm_max_models =  4
config hash =  f79746d04db933c7fd3d9ae6cb70b3bd562a09ed41e48bc894b4fec58d5259e1
  - mm_use [True, False, False, False]
  - model_a Photo\nextphoto_v30.safetensors [1c1f913f3b]
  - base_model v1-5-pruned-emaonly.safetensors [6ce0161689]
  - max_models 4
  - models ['Photo\\rundiffusionFX_v10.safetensors [ad1a10552b]']
  - modes ['Add-Diff']
  - calcmodes ['Rebasin']
  - usembws [[]]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.75]
  - adjust 1,1,1,0,0,0,0,0
  - use elemental [False]
  - elementals ['']
  - Parse elemental merge...
model_a = Photo_nextphoto_v30
Loading from file E:\SD\automatic1111\models\Stable-diffusion\Photo\nextphoto_v30.safetensors...
isxl = False
compact_mode =  False
 - Dynamic loading rebasin module...
Rebasin mode
 - Calulation device for Rebasin is  cuda
 - LAP library is lap
Loading model Photo_rundiffusionFX_v10...
Loading from file E:\SD\automatic1111\models\Stable-diffusion\Photo\rundiffusionFX_v10.safetensors...
mode = Add-Diff, alpha = 0.75
Stage #1/2: 100%| 1132/1132 [00:15<00:00, 74.62it/s]
Check uninitialized #2/2: 100%| 1132/1132 [00:00<00:00, 189153.90it/s]
Rebasin calc...
maximize weight matching using scipy linear_sum_assignment...
P_bg337:  24%| 125/519 [00:12<00:25, 15.58it/s]  (Special layer P_bg337 found)
P_bg265: 100%| 519/519 [00:38<00:00, 13.33it/s]
P_bg176: 100%| 519/519 [00:35<00:00, 14.43it/s]
P_bg176: new - old = 0.0
weight order changed layers = ['P_bg371', 'P_bg324', 'P_bg337', 'P_bg358']
Save unchanged weights #2/2: 100%| 1/1 [00:00<?, ?it/s]
Apply fine tune [0.99, 1.02, 0.99, 1.02, 0.99, [0.0, 0.0, 0.0, 0.0]]
Clip is fine
 - merge processing in 95.3s (prepare: 0.4s, merging: 94.9s).
*** Error running before_process: E:\SD\automatic1111\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "E:\SD\automatic1111\modules\scripts.py", line 710, in before_process
        script.before_process(p, *script_args)
      File "E:\SD\automatic1111\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3746, in before_process
        sd_models.send_model_to_cpu(sd_models.model_data.sd_model)
      File "E:\SD\automatic1111\modules\sd_models.py", line 576, in send_model_to_cpu
        m.to(devices.cpu)
      File "e:\SD\automatic1111\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
        return super().to(*args, **kwargs)
      File "e:\SD\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
        return self._apply(convert)
      File "e:\SD\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      File "e:\SD\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      File "e:\SD\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      [Previous line repeated 1 more time]
      File "e:\SD\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
        param_applied = fn(param)
      File "e:\SD\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
        return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
    NotImplementedError: Cannot copy out of meta tensor; no data!

---
*** Error completing request
*** Arguments: ('task(26bs7tv83v5lryc)', 'A real life analog photo of a provocative adult woman,\nred lips, camisole, low leg jeans shorts, boobs drop, side boobs, groin,\njewelry,\ndetailed skin, skin pores, (freckles:0.6), (moles:0.6), (pigmentation:0.5),\ncatwalking on a street at night,\n[[Gabrielle Union | Laura Vandervoort]:0.1],\nphotography achievement, depth of field, film grain', '(worst quality:1.3), (bad quality:1.2), (low quality:1.1),\n[deformed | disfigured], poorly drawn, [bad : wrong] anatomy, [extra | missing | floating | disconnected] limb, (mutated hands and fingers)', [], 32, 'DPM++ 3M SDE Karras', 1, 1, 7, 768, 512, False, 0.3, 2, '4x_NMKD-Siax_200k', 24, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000029F347CFBB0>, 0, False, '', 0.8, 1518790282, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 32, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 
'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 32, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, False, 1, 1, 0, 0, -1, 0, 0, 0, 0, 0, True, -1, 1, 0, '1,1', 'Horizontal', '', 2, 1, True, 'Photo\\nextphoto_v30.safetensors [1c1f913f3b]', 'v1-5-pruned-emaonly.safetensors [6ce0161689]', 4, '1,1,1,0,0,0,0,0', {'calcmodes': ('Rebasin', 'Normal', 'Normal', 'Normal'), 'save_settings': ['fp16', 'prune', 'safetensors'], 'calc_settings': ['GPU']}, True, False, False, False, 'Photo\\rundiffusionFX_v10.safetensors [ad1a10552b]', 'None', 'None', 'None', 'Add-Diff', 'Sum', 'Sum', 'Sum', 0.75, 0.5, 0.5, 0.5, False, True, True, True, [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', 
False, False, False, False, '', '', '', '', False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "E:\SD\automatic1111\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "E:\SD\automatic1111\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "E:\SD\automatic1111\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "E:\SD\automatic1111\modules\processing.py", line 721, in process_images
        sd_models.reload_model_weights()
      File "E:\SD\automatic1111\modules\sd_models.py", line 764, in reload_model_weights
        sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
      File "E:\SD\automatic1111\modules\sd_models.py", line 713, in reuse_model_from_already_loaded
        send_model_to_cpu(sd_model)
      File "E:\SD\automatic1111\modules\sd_models.py", line 576, in send_model_to_cpu
        m.to(devices.cpu)
      File "e:\SD\automatic1111\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
        return super().to(*args, **kwargs)
      File "e:\SD\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
        return self._apply(convert)
      File "e:\SD\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      File "e:\SD\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      File "e:\SD\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      [Previous line repeated 1 more time]
      File "e:\SD\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
        param_applied = fn(param)
      File "e:\SD\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
        return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
    NotImplementedError: Cannot copy out of meta tensor; no data!
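For context, this `NotImplementedError` is what PyTorch raises whenever `.to()` is called on a module whose parameters live on the `meta` device: they have shapes but no storage, so there is nothing to copy. A minimal reproduction (assuming a recent PyTorch; `Module.to_empty()` is torch's documented way to materialize such a module with uninitialized storage):

```python
import torch

# Parameters created on the "meta" device have shapes but no data,
# so .to() has nothing to copy -- the failure in the traceback above.
layer = torch.nn.Linear(4, 4, device="meta")
try:
    layer.to("cpu")
except NotImplementedError as exc:
    print(exc)  # the "Cannot copy out of meta tensor" message from the log

# to_empty() allocates real (uninitialized) storage instead of copying.
layer = layer.to_empty(device="cpu")
print(layer.weight.device)
```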

---

Console error when generating with model merging on ("TypeError: sequence item 1: expected str instance, NoneType found")

Hi,

I get this error when generating in txt2img after selecting SDXL models to merge/use, and it seems to persist for a while even after the extension is disabled. An image is still generated the first time the error appears, but the error recurs on every generation:

To create a public link, set `share=True` in `launch()`
Startup time: 1.7s (load scripts: 0.9s, create ui: 0.4s, gradio launch: 0.2s, app_started_callback: 0.2s).
config hash =  57d0eb405b7ae3e9373dd23b257eee7bd00cf1e8c6e39de44937648d0277f145
  - mm_use [True, True, True, True, True]
  - model_a SDXL\2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good.safetensors [c4fa751694]
  - base_model None
  - max_models 5
  - models ['SDXL\\2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000.safetensors', 'SDXL\\2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good.safetensors [c4fa751694]', 'SDXL\\2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000.safetensors [3bec9bda52]', 'SDXL\\2023-09-03 - Topnotch Artstyle #29 (Knowlin Box etc) - 20img - TXT ON - B4 - 1e5-step00002800.safetensors [31145a7321]', 'SDXL\\2023-09-05 - Topnotch #31 - 58img (MJ eerie film shots - rulers - mishmash-step00003300.safetensors [efdf0f8b17]']
  - modes ['Sum', 'Sum', 'Sum', 'Sum', 'Sum']
  - usembws [[], [], [], [], []]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5, 0.5, 0.5, 0.5, 0.5]
  - adjust
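Each `weights` entry in the dump above is a comma-separated list of 26 per-block ratios (the merge-block-weight fields; exact block names depend on the model type), while judging by the log an empty `usembws` list means the single `alpha` is applied uniformly instead. A minimal parsing sketch of one such string:

```python
# Build a string with the same shape as the logged "weights" entries:
# 26 comma-separated per-block merge ratios.
weights = ",".join(["0.5"] * 26)
blocks = [float(w) for w in weights.split(",")]
print(len(blocks))  # 26
```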
model_a = SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good
Loading SDXL\2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good.safetensors [c4fa751694] from loaded model...
isxl = False
compact_mode =  False
Loading model SDXL_2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000.safetensors...
mode = Sum, alpha = 0.5
Stage #1/6: 100%|█████████████████████████████████████████████████████████████████| 3103/3103 [00:05<00:00, 534.21it/s]
Check uninitialized #2/6: 100%|███████████████████████████████████████████████| 3103/3103 [00:00<00:00, 1034408.31it/s]
Loading model SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good...
Loading SDXL\2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good.safetensors [c4fa751694] from loaded model...
mode = Sum, alpha = 0.5
Stage #3/6: 100%|█████████████████████████████████████████████████████████████████| 3103/3103 [00:07<00:00, 411.32it/s]
Loading model SDXL_2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000.safetensors...
mode = Sum, alpha = 0.5
Stage #4/6: 100%|█████████████████████████████████████████████████████████████████| 3103/3103 [00:06<00:00, 509.40it/s]
Loading model SDXL_2023-09-03 - Topnotch Artstyle #29 (Knowlin Box etc) - 20img - TXT ON - B4 - 1e5-step00002800...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-09-03 - Topnotch Artstyle #29 (Knowlin Box etc) - 20img - TXT ON - B4 - 1e5-step00002800.safetensors...
mode = Sum, alpha = 0.5
Stage #5/6: 100%|█████████████████████████████████████████████████████████████████| 3103/3103 [00:06<00:00, 509.75it/s]
Loading model SDXL_2023-09-05 - Topnotch #31 - 58img (MJ eerie film shots - rulers - mishmash-step00003300...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-09-05 - Topnotch #31 - 58img (MJ eerie film shots - rulers - mishmash-step00003300.safetensors...
mode = Sum, alpha = 0.5
Stage #6/6: 100%|█████████████████████████████████████████████████████████████████| 3103/3103 [00:06<00:00, 507.20it/s]
Save unchanged weights #6/6: 0it [00:00, ?it/s]
Loading VAE weights specified in settings: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:08<00:00,  4.59it/s]
*** Error executing callback before_image_saved_callback for D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\script_callbacks.py", line 192, in before_image_saved_callback
        c.callback(params)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 1665, in on_image_save
        lines[i] = " Model hash: " + ", Model hash: ".join(modelhashes)
    TypeError: sequence item 1: expected str instance, NoneType found
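The `TypeError` itself is a plain `str.join()` failure: one of the merged models has no hash recorded, so `modelhashes` contains a `None`. A minimal reproduction plus a defensive variant (a sketch, not the extension's actual fix; the hash values are made up):

```python
# One merged model carries no recorded hash, so the list holds a None.
modelhashes = ["c4fa751694", None, "3bec9bda52"]  # hypothetical values

try:
    line = " Model hash: " + ", Model hash: ".join(modelhashes)
except TypeError as exc:
    print(exc)  # sequence item 1: expected str instance, NoneType found

# Defensive sketch: skip missing hashes before joining.
line = " Model hash: " + ", Model hash: ".join(h for h in modelhashes if h)
print(line)
```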

---
Total progress: 100%|██████████████████████████████████████████████████████████████████| 37/37 [00:09<00:00,  3.99it/s]
config hash =  57d0eb405b7ae3e9373dd23b257eee7bd00cf1e8c6e39de44937648d0277f145
  - use current mixed model 57d0eb405b7ae3e9373dd23b257eee7bd00cf1e8c6e39de44937648d0277f145
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 10 images in a total of 10 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:08<00:00,  4.62it/s]
*** Error executing callback before_image_saved_callback (same traceback as above)

---
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:07<00:00,  4.68it/s]
*** Error executing callback before_image_saved_callback (same traceback as above)

---
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:07<00:00,  4.69it/s]
*** Error executing callback before_image_saved_callback (same traceback as above)

---
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:07<00:00,  4.68it/s]
*** Error executing callback before_image_saved_callback (same traceback as above)

---
 35%|████████████████████████████▊                                                     | 13/37 [00:02<00:05,  4.67it/s]
Total progress:  44%|███████████████████████████▊                                    | 161/370 [00:41<00:54,  3.85it/s]
config hash =  95509bfa362e0e3a3f1de1f34de81039bfffdfc9b99c1afdb25a90a8636cdb17
  - mm_use [True, True, True, False, False]
  - model_a SDXL\2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good.safetensors [c4fa751694]
  - base_model None
  - max_models 5
  - models ['SDXL\\2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000.safetensors', 'SDXL\\2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good.safetensors [c4fa751694]', 'SDXL\\2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000.safetensors [3bec9bda52]']
  - modes ['Sum', 'Sum', 'Sum']
  - usembws [[], [], []]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5, 0.5, 0.5]
  - adjust
model_a = SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good.safetensors...
isxl = True
compact_mode =  False
Loading model SDXL_2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000.safetensors...
mode = Sum, alpha = 0.5
Stage #1/4: 100%|█████████████████████████████████████████████████████████████████| 2515/2515 [00:06<00:00, 384.60it/s]
Check uninitialized #2/4: 100%|███████████████████████████████████████████████| 2515/2515 [00:00<00:00, 1257591.15it/s]
Loading model SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good.safetensors...
mode = Sum, alpha = 0.5
Stage #3/4: 100%|█████████████████████████████████████████████████████████████████| 2515/2515 [00:06<00:00, 383.43it/s]
Loading model SDXL_2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000.safetensors...
mode = Sum, alpha = 0.5
Stage #4/4: 100%|█████████████████████████████████████████████████████████████████| 2515/2515 [00:06<00:00, 363.37it/s]
Save unchanged weights #4/4: 0it [00:00, ?it/s]
Loading VAE weights specified in settings: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 10 images in a total of 10 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:07<00:00,  4.68it/s]
*** Error executing callback before_image_saved_callback (same traceback as above)

---
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:07<00:00,  5.20it/s]
*** Error executing callback before_image_saved_callback (same traceback as above)

---
 73%|███████████████████████████████████████████████████████████▊                      | 27/37 [00:05<00:01,  5.16it/s]
Total progress:  27%|█████████████████▍                                              | 101/370 [00:24<01:05,  4.13it/s]
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 10 images in a total of 10 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:08<00:00,  4.60it/s]
*** Error executing callback before_image_saved_callback (same traceback as above)

---
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:07<00:00,  4.68it/s]
*** Error executing callback before_image_saved_callback (same traceback as above)

---
 30%|████████████████████████▍                                                         | 11/37 [00:02<00:05,  4.66it/s]
Total progress:  23%|██████████████▉                                                  | 85/370 [00:22<01:15,  3.79it/s]
Restarting UI...
Closing server running on port: 7860
[-] ADetailer initialized. version: 23.9.2, num models: 9
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
2023-09-10 08:26:21,611 - ControlNet - INFO - ControlNet v1.1.409
checkpoint title =  (((SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good) x (1 - alpha_0) + (SDXL_2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000) x alpha_0) x (1 - alpha_1) + (SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good) x alpha_1) x (1 - alpha_2) + (SDXL_2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000) x alpha_2(0.5),(0.5),(0.5).safetensors [95509bfa36]
checkpoint title =  (((SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good) x (1 - alpha_0) + (SDXL_2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000) x alpha_0) x (1 - alpha_1) + (SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good) x alpha_1) x (1 - alpha_2) + (SDXL_2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000) x alpha_2(0.5),(0.5),(0.5).safetensors [95509bfa36]
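The checkpoint title above spells out how the mixer composes a sequential weighted sum: each stage blends the running result with the next model at its alpha. A toy scalar sketch of that formula (real merging applies it tensor-by-tensor; the function name is ours):

```python
def sequential_sum(model_a, models, alphas):
    """Fold models into model_a: out = out * (1 - a) + m * a per stage."""
    out = model_a
    for m, a in zip(models, alphas):
        out = out * (1 - a) + m * a
    return out

# Three stages at alpha 0.5 each, mirroring the (0.5),(0.5),(0.5) in the title.
print(sequential_sum(1.0, [3.0, 5.0, 9.0], [0.5, 0.5, 0.5]))  # 6.25
```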
D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:925: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  out_image = gr.Image(label="Inpainted image", elem_id="ia_out_image", type="pil",
D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:941: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  cleaner_out_image = gr.Image(label="Cleaned image", elem_id="ia_cleaner_out_image", type="pil",
D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:1001: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  webui_out_image = gr.Image(label="Inpainted image", elem_id="ia_webui_out_image", type="pil",
D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:1087: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  cn_out_image = gr.Image(label="Inpainted image", elem_id="ia_cn_out_image", type="pil",
D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:1126: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  sam_image = gr.Image(label="Segment Anything image", elem_id="ia_sam_image", type="numpy", tool="sketch", brush_radius=8,
D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:1137: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  sel_mask = gr.Image(label="Selected mask image", elem_id="ia_sel_mask", type="numpy", tool="sketch", brush_radius=12,
D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:1140: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  with gr.Row().style(equal_height=False):
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 1.8s (load scripts: 0.9s, create ui: 0.5s, gradio launch: 0.2s).
Checkpoint (((SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good) x (1 - alpha_0) + (SDXL_2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000) x alpha_0) x (1 - alpha_1) + (SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good) x alpha_1) x (1 - alpha_2) + (SDXL_2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000) x alpha_2(0.5),(0.5),(0.5).safetensors [95509bfa36] not found; loading fallback 2022-10-21T00-13-18_darcy_14_training_images_2200_max_training_steps_darcy_token_person_class_word.ckpt [8b96536fb9]
Loading model 2022-10-21T00-13-18_darcy_14_training_images_2200_max_training_steps_darcy_token_person_class_word.ckpt [8b96536fb9] (2 out of 3)
Loading weights [8b96536fb9] from e:\Stable Diffusion Checkpoints\2022-10-21T00-13-18_darcy_14_training_images_2200_max_training_steps_darcy_token_person_class_word.ckpt
Creating model from config: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\configs\v1-inference.yaml
Loading VAE weights specified in settings: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp... done.
Model loaded in 2.3s (load weights from disk: 1.3s, create model: 0.2s, apply weights to model: 0.6s, load VAE: 0.1s).
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 10 images in a total of 10 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:07<00:00,  5.18it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:07<00:00,  5.25it/s]
 32%|██████████████████████████▌                                                       | 12/37 [00:02<00:04,  5.22it/s]
Total progress:  23%|███████████████                                                  | 86/370 [00:20<01:07,  4.20it/s]
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 10 images in a total of 10 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:06<00:00,  5.32it/s]
 41%|█████████████████████████████████▏                                                | 15/37 [00:02<00:04,  5.48it/s]
Total progress:  14%|█████████▏                                                       | 52/370 [00:11<01:12,  4.37it/s]
Restoring base VAE
Applying attention optimization: sdp... done.
VAE weights loaded.
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 10 images in a total of 10 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:06<00:00,  5.32it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:06<00:00,  5.52it/s]
 35%|████████████████████████████▊                                                     | 13/37 [00:02<00:04,  5.49it/s]
Total progress:  24%|███████████████▎                                                 | 87/370 [00:19<01:03,  4.44it/s]
Loading model 2023-05-18- Topnotch 2.5 - (Electronics - Not sure the numbers) - prob not great.ckpt (3 out of 3)
Calculating sha256 for e:\Stable Diffusion Checkpoints\2023-05-18- Topnotch 2.5 - (Electronics - Not sure the numbers) - prob not great.ckpt: 297eae4f0dd98507c0fdd3719315bf20633966512c70a50a8c38eead28b87e16
Loading weights [297eae4f0d] from e:\Stable Diffusion Checkpoints\2023-05-18- Topnotch 2.5 - (Electronics - Not sure the numbers) - prob not great.ckpt
Creating model from config: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\configs\v1-inference.yaml
Applying attention optimization: sdp... done.
Model loaded in 2.7s (calculate hash: 1.2s, load weights from disk: 0.5s, create model: 0.2s, apply weights to model: 0.7s).
Reusing loaded model (((SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good) x (1 - alpha_0) + (SDXL_2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000) x alpha_0) x (1 - alpha_1) + (SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good) x alpha_1) x (1 - alpha_2) + (SDXL_2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000) x alpha_2(0.5),(0.5),(0.5).safetensors [95509bfa36] to load SDXL\2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000.safetensors [3bec9bda52]
Loading weights [3bec9bda52] from e:\Stable Diffusion Checkpoints\SDXL\2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000.safetensors
Applying attention optimization: sdp... done.
Weights loaded in 3.3s (send model to cpu: 0.6s, load weights from disk: 0.5s, apply weights to model: 0.8s, move model to device: 1.4s).
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 10 images in a total of 10 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:07<00:00,  4.63it/s]
 41%|█████████████████████████████████▏                                                | 15/37 [00:03<00:04,  4.66it/s]
Total progress:  14%|█████████▏                                                       | 52/370 [00:13<01:24,  3.78it/s]
config hash =  57d0eb405b7ae3e9373dd23b257eee7bd00cf1e8c6e39de44937648d0277f145
  - mm_use [True, True, True, True, True]
  - model_a SDXL\2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good.safetensors [c4fa751694]
  - base_model None
  - max_models 5
  - models ['SDXL\\2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000.safetensors', 'SDXL\\2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good.safetensors [c4fa751694]', 'SDXL\\2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000.safetensors [3bec9bda52]', 'SDXL\\2023-09-03 - Topnotch Artstyle #29 (Knowlin Box etc) - 20img - TXT ON - B4 - 1e5-step00002800.safetensors [31145a7321]', 'SDXL\\2023-09-05 - Topnotch #31 - 58img (MJ eerie film shots - rulers - mishmash-step00003300.safetensors [efdf0f8b17]']
  - modes ['Sum', 'Sum', 'Sum', 'Sum', 'Sum']
  - usembws [[], [], [], [], []]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5, 0.5, 0.5, 0.5, 0.5]
  - adjust
model_a = SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good.safetensors...
isxl = True
compact_mode =  False
Loading model SDXL_2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-08-31 - Topnotch Artstyle - 16img - TXT off - Batch 4 - 40rep-step00003000.safetensors...
mode = Sum, alpha = 0.5
Stage #1/6: 100%|█████████████████████████████████████████████████████████████████| 2515/2515 [00:06<00:00, 399.78it/s]
Check uninitialized #2/6: 100%|███████████████████████████████████████████████| 2515/2515 [00:00<00:00, 1257591.15it/s]
Loading model SDXL_2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-09-09 - Supermerge - Topnotch Artstyle - 510 - triple model good.safetensors...
mode = Sum, alpha = 0.5
Stage #3/6: 100%|█████████████████████████████████████████████████████████████████| 2515/2515 [00:06<00:00, 392.03it/s]
Loading model SDXL_2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000...
Loading SDXL\2023-09-05 - Topnotch #35 - 39img (MJ robots gloss gadgets topnotch rooms - bat6- mishmash-step00003000.safetensors [3bec9bda52] from loaded model...
mode = Sum, alpha = 0.5
Stage #4/6: 100%|█████████████████████████████████████████████████████████████████| 2515/2515 [00:04<00:00, 514.60it/s]
Loading model SDXL_2023-09-03 - Topnotch Artstyle #29 (Knowlin Box etc) - 20img - TXT ON - B4 - 1e5-step00002800...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-09-03 - Topnotch Artstyle #29 (Knowlin Box etc) - 20img - TXT ON - B4 - 1e5-step00002800.safetensors...
mode = Sum, alpha = 0.5
Stage #5/6: 100%|█████████████████████████████████████████████████████████████████| 2515/2515 [00:07<00:00, 354.93it/s]
Loading model SDXL_2023-09-05 - Topnotch #31 - 58img (MJ eerie film shots - rulers - mishmash-step00003300...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-09-05 - Topnotch #31 - 58img (MJ eerie film shots - rulers - mishmash-step00003300.safetensors...
mode = Sum, alpha = 0.5
Stage #6/6: 100%|█████████████████████████████████████████████████████████████████| 2515/2515 [00:06<00:00, 365.19it/s]
Save unchanged weights #6/6: 0it [00:00, ?it/s]
Loading VAE weights specified in settings: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp... done.
VAE weights loaded.
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 10 images in a total of 10 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:07<00:00,  4.69it/s]
*** Error executing callback before_image_saved_callback for D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\script_callbacks.py", line 192, in before_image_saved_callback
        c.callback(params)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 1665, in on_image_save
        lines[i] = " Model hash: " + ", Model hash: ".join(modelhashes)
    TypeError: sequence item 1: expected str instance, NoneType found

---
100%|██████████████████████████████████████████████████████████████████████████████████| 37/37 [00:07<00:00,  4.69it/s]
*** Error executing callback before_image_saved_callback for D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\script_callbacks.py", line 192, in before_image_saved_callback
        c.callback(params)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 1665, in on_image_save
        lines[i] = " Model hash: " + ", Model hash: ".join(modelhashes)
    TypeError: sequence item 1: expected str instance, NoneType found

---
 57%|██████████████████████████████████████████████▌                                   | 21/37 [00:04<00:03,  4.67it/s]
Restoring base VAE6%|████████████████▋                                                | 95/370 [00:23<00:59,  4.64it/s]
Applying attention optimization: sdp... done.
VAE weights loaded.
Total progress:  26%|████████████████▋                                                | 95/370 [00:26<01:17,  3.54it/s]
Total progress:  26%|████████████████▋                                                | 95/370 [00:26<00:59,  4.64it/s]
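The `TypeError` in the log above comes from joining `modelhashes` when one entry is `None`. A defensive sketch of that join (an illustration, not the extension's actual fix) that simply skips missing hashes:

```python
def join_model_hashes(modelhashes):
    # str.join() raises TypeError on None entries, so drop missing hashes first.
    hashes = [h for h in modelhashes if h is not None]
    return " Model hash: " + ", Model hash: ".join(hashes)

print(join_model_hashes(["abc123", None, "def456"]))
# →  Model hash: abc123, Model hash: def456
```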

Error saving to Safetensors format: "ERROR is You are trying to save a non contiguous tensor: `model.diffusion_model.input_blocks.0.0.weight` which is not allowed. It either means you are trying to save tensors which are reference of each other in which case it's recommended to save only the full tensors, and reslice at load time, or simply call `.contiguous()` on your tensor to pack it before saving."

TL;DR: I've been getting an error when saving models out to Safetensors format for a while - I've always worked around it by saving to CKPT. While testing the new LoRA export, I realized I'll need safetensors saving to work in order to use that upcoming feature.


Hello again!

I get the following error whenever I try to save out a merge to .safetensors format:

"ERROR is You are trying to save a non contiguous tensor: model.diffusion_model.input_blocks.0.0.weight which is not allowed. It either means you are trying to save tensors which are reference of each other in which case it's recommended to save only the full tensors, and reslice at load time, or simply call .contiguous() on your tensor to pack it before saving."

I've been getting this error for a little while, but I've just untoggled the safetensors option and saved to ckpt instead, and that's worked without issue. However, I pulled the development branch with the early LoRA export functionality, and I realized this error might cause problems when exporting to LoRA.

My first attempts exporting to LoRA and LyCORIS worked great! But for some reason the exported LoRAs don't show up in A1111 unless I turn off this option in settings: "Always show all networks on the Lora page (otherwise, those detected as for incompatible version of Stable Diffusion will be hidden)"

So I'm thinking A1111 isn't recognizing the Model Mixer-generated LoRAs as valid for SDXL. My guess is that it's related to the safetensors error I get when trying to save full models out to Safetensors. I don't have a workaround that works in this case.
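For what it's worth, the error message itself names the fix: make every tensor contiguous before handing the state dict to `safetensors.torch.save_file`. A minimal sketch of that packing step (an assumption about where the extension would apply it, not its actual code):

```python
import torch

def pack_for_safetensors(state_dict):
    # safetensors refuses to serialize non-contiguous tensors (views,
    # transposes, slices), so pack each one before calling save_file().
    return {k: v.contiguous() for k, v in state_dict.items()}

w = torch.ones(4, 8).t()  # a transposed view: not contiguous
assert not w.is_contiguous()

packed = pack_for_safetensors({"model.diffusion_model.input_blocks.0.0.weight": w})
assert all(t.is_contiguous() for t in packed.values())
```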

Really sorry if this is user error (re: something in my A1111 settings), but I get it on both the current 1.6 release and the latest development version, v1.6.0-295-g9c1c0da0, which has the tweaks Automatic added recently related to the other issue in this forum.

Thanks so much for everything - love the extension!

[Suggestion] Add options to merge using sd-meh

https://github.com/s1dlx/meh is a small library with a good number of merge methods, some of which are not in SuperMerger. One very important feature is "weights clipping", which makes it possible to merge models using add difference at alpha=1.0 with limited distortion, by clipping the merged weights to the range of the original models A and B. There's also rebasin, which reduces the loss when merging with weighted sum.

Note that the library does not yet support SDXL in the main branch.

I did contribute to it a little bit, which is why I know about this library. Just wanted to mention this in case you were considering adding more merge options.
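As a rough illustration of what weights clipping does (my own sketch, not sd-meh's actual implementation): each merged weight is clamped elementwise to the range spanned by the two source models, so an add-difference merge at alpha=1.0 can't push any weight outside what A and B already contain.

```python
import torch

def clip_weights(theta_merged, theta_a, theta_b):
    # Clamp each merged weight elementwise into [min(a, b), max(a, b)].
    lo = torch.minimum(theta_a, theta_b)
    hi = torch.maximum(theta_a, theta_b)
    return torch.clamp(theta_merged, min=lo, max=hi)

a = torch.tensor([0.0, 1.0, 2.0])
b = torch.tensor([1.0, 0.0, 1.0])
merged = torch.tensor([-0.5, 2.0, 1.5])  # e.g. from add difference at alpha=1.0
print(clip_weights(merged, a, b).tolist())  # [0.0, 1.0, 1.5]
```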

A1111 config JSON got corrupted / I deleted it / but now any selection in Model Mixer gives an error

Hello again,

I had a computer issue (a BSOD crash), and when I recovered, A1111 wouldn't launch due to a corrupt config JSON file. I deleted that file and reset my settings, but now I get errors whenever I use the menu options in Model Mixer. Is there a way to totally clear it out? I actually tried deleting the extension folder and reinstalling it, but somehow that didn't work.

Console error examples:

venv "D:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.7.0
Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e
Installing sd-webui-controlnet requirement: trimesh[easy]
Launching Web UI with arguments: --opt-sdp-attention --no-half-vae --opt-channelslast --disable-safe-unpickle --skip-torch-cuda-test --disable-nan-check --skip-version-check --ckpt-dir e:\stable Diffusion Checkpoints
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
ControlNet preprocessor location: D:\stable-diffusion-webui\extensions\3sd-webui-controlnet\annotator\downloads
2024-01-04 12:24:40,801 - ControlNet - INFO - ControlNet v1.1.427
2024-01-04 12:24:40,848 - ControlNet - INFO - ControlNet v1.1.427
[-] ADetailer initialized. version: 23.9.2, num models: 9
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Loading weights [e869ac7d69] from e:\stable Diffusion Checkpoints\SDXL\sd_xl_turbo_1.0_fp16.safetensors
Creating model from config: D:\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp... done.
Model loaded in 3.5s (load weights from disk: 0.5s, create model: 0.3s, apply weights to model: 1.8s, calculate empty prompt: 0.7s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 12.5s (prepare environment: 2.2s, import torch: 1.6s, import gradio: 0.4s, setup paths: 0.4s, initialize shared: 1.0s, other imports: 0.3s, list SD models: 0.2s, load scripts: 1.4s, create ui: 4.1s, gradio launch: 0.3s, app_started_callback: 0.5s).
Reusing loaded model SDXL\sd_xl_turbo_1.0_fp16.safetensors [e869ac7d69] to load SDXL\talmendoxlSDXL_v11Beta.safetensors [7fb8947de2]
Loading weights [7fb8947de2] from e:\stable Diffusion Checkpoints\SDXL\talmendoxlSDXL_v11Beta.safetensors
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp... done.
Weights loaded in 15.6s (send model to cpu: 1.4s, load weights from disk: 0.9s, apply weights to model: 12.3s, move model to device: 0.8s).
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1434, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1297, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1272, in validate_outputs
    raise ValueError(
ValueError: An event handler (config_sdxl) didn't receive enough output values (needed: 41, received: 35).
Wanted outputs:
    [dropdown, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, dropdown, dropdown, dropdown, dropdown, dropdown, dropdown, dropdown, textbox, textbox, textbox, textbox, textbox, textbox, textbox]
Received outputs:
    [{'choices': ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'choices': ['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'choices': ['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'choices': 
['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'choices': ['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN08,M00,OUT00,...,OUT08', '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN08,M00,OUT00,...,OUT08', '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN08,M00,OUT00,...,OUT08', '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN08,M00,OUT00,...,OUT08', '__type__': 'generic_update'}]
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1434, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1297, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1272, in validate_outputs
    raise ValueError(
ValueError: An event handler (config_sdxl) didn't receive enough output values (needed: 41, received: 35).
Wanted outputs:
    [dropdown, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, dropdown, dropdown, dropdown, dropdown, dropdown, dropdown, dropdown, textbox, textbox, textbox, textbox, textbox, textbox, textbox]
Received outputs:
    [{'choices': ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'IN09', 'IN10', 'IN11', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08', 'OUT09', 'OUT10', 'OUT11'], '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'choices': ['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'IN09', 'IN10', 'IN11', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08', 'OUT09', 'OUT10', 'OUT11'], '__type__': 'generic_update'}, {'choices': ['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'IN09', 'IN10', 'IN11', 'M00', 'OUT00', 
'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08', 'OUT09', 'OUT10', 'OUT11'], '__type__': 'generic_update'}, {'choices': ['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'IN09', 'IN10', 'IN11', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08', 'OUT09', 'OUT10', 'OUT11'], '__type__': 'generic_update'}, {'choices': ['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'IN09', 'IN10', 'IN11', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08', 'OUT09', 'OUT10', 'OUT11'], '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN11,M00,OUT00,...,OUT11', '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN11,M00,OUT00,...,OUT11', '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN11,M00,OUT00,...,OUT11', '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN11,M00,OUT00,...,OUT11', '__type__': 'generic_update'}]
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 958, in check_calc_settings
    last = calc_settings.pop()
IndexError: pop from empty list
Reusing loaded model SDXL\talmendoxlSDXL_v11Beta.safetensors [7fb8947de2] to load v1-5-pruned-emaonly.safetensors [6ce0161689]
Loading weights [6ce0161689] from D:\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\stable-diffusion-webui\configs\v1-inference.yaml
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp... done.
Model loaded in 1.3s (create model: 0.2s, apply weights to model: 0.9s).
Reusing loaded model v1-5-pruned-emaonly.safetensors [6ce0161689] to load SDXL\sd_xl_turbo_1.0.safetensors [2e58e3704b]
Loading weights [2e58e3704b] from e:\stable Diffusion Checkpoints\SDXL\sd_xl_turbo_1.0.safetensors
Creating model from config: D:\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp... done.
Model loaded in 3.5s (create model: 0.2s, apply weights to model: 3.1s).
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1434, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1297, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1272, in validate_outputs
    raise ValueError(
ValueError: An event handler (config_sdxl) didn't receive enough output values (needed: 41, received: 35).
Wanted outputs:
    [dropdown, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, dropdown, dropdown, dropdown, dropdown, dropdown, dropdown, dropdown, textbox, textbox, textbox, textbox, textbox, textbox, textbox]
Received outputs:
    [{'choices': ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'choices': ['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'choices': ['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'choices': 
['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'choices': ['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN08,M00,OUT00,...,OUT08', '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN08,M00,OUT00,...,OUT08', '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN08,M00,OUT00,...,OUT08', '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN08,M00,OUT00,...,OUT08', '__type__': 'generic_update'}]
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 2083, in recipe_update
    if "Sum" in modes[n]:
TypeError: argument of type 'bool' is not iterable

Error when attempting to merge SDXL Inpaint 0.1 Beta model - Perhaps impossible? (RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1)

The developer branch of A1111 allows the use of SDXL inpaint models. SAI quietly released a very early beta inpaint model for SDXL months ago in diffusers format, and I found a safetensors conversion of it hosted here:

Card: https://huggingface.co/wangqyqq/sd_xl_base_1.0_inpainting_0.1.safetensors
Safetensor: https://huggingface.co/wangqyqq/sd_xl_base_1.0_inpainting_0.1.safetensors/tree/main

I've been using it with this simple outpainting extension today (https://github.com/Haoming02/sd-webui-mosaic-outpaint) and it's worked pretty well. I was hoping to try merging it with some of my DreamBooth (kohya) trained models, but I'm getting the following error: RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1 (see the full console output in the details below).

Is there a fundamental issue with this beta SDXL inpainting model that would prevent it from being merged with other SDXL models? Or is there a way the model could be mixed with "normal" SDXL models (à la how you can do that with v1.5's inpainting model) to allow for better inpainting?

Thanks for any insight!!
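For context on the RuntimeError: the mismatch is in `model.diffusion_model.input_blocks.0.0.weight`, because an inpainting UNet's first conv takes 9 input channels (the 4 latent channels plus mask and masked-image channels) while a standard model's takes 4. A hedged sketch of the usual v1.5-style workaround, which lerps only the 4 shared latent channels and keeps the inpaint-only channels as-is (an illustration of the idea, not the extension's code; the exact channel ordering is an assumption):

```python
import torch

def lerp_inpaint_conv(theta_inpaint, theta_normal, alpha):
    """Merge a 9-input-channel inpaint conv weight with a 4-channel one.

    Assumes the first 4 input channels are the shared latent channels:
    lerp those, keep the remaining 5 inpaint-only channels untouched.
    """
    merged = theta_inpaint.clone()
    merged[:, :4] = torch.lerp(theta_inpaint[:, :4].float(),
                               theta_normal.float(), alpha).to(theta_inpaint.dtype)
    return merged

inpaint_w = torch.zeros(320, 9, 3, 3)  # stand-in for the inpaint model's conv
normal_w = torch.ones(320, 4, 3, 3)    # stand-in for a normal SDXL conv
out = lerp_inpaint_conv(inpaint_w, normal_w, 0.5)
assert out.shape == (320, 9, 3, 3)
```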

Details

venv "D:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.7.0-437-gce168ab5
Commit hash: ce168ab5dbc8b54b7245f352a2eaa55a37019b91
Launching Web UI with arguments: --opt-sdp-attention --no-half-vae --opt-channelslast --skip-torch-cuda-test --skip-version-check --ckpt-dir e:\Stable Diffusion Checkpoints
No module 'xformers'. Proceeding without it.
*** Extension "sd-webui-lama-cleaner-masked-content" requires "sd-webui-controlnet" which is not installed.
ControlNet preprocessor location: D:\stable-diffusion-webui\extensions\3sd-webui-controlnet\annotator\downloads
2024-02-01 14:36:46,452 - ControlNet - INFO - ControlNet v1.1.440
2024-02-01 14:36:46,552 - ControlNet - INFO - ControlNet v1.1.440
[-] ADetailer initialized. version: 24.1.2, num models: 9
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Loading weights [fe1b97fe65] from e:\Stable Diffusion Checkpoints\SDXL\sd_xl_base_1.0_inpainting_0.1.safetensors
Creating model from config: D:\stable-diffusion-webui\configs\sd_xl_inpaint.yaml
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp-no-mem... done.
Model loaded in 2.7s (create model: 0.2s, apply weights to model: 2.0s).
2024-02-01 14:36:51,003 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 11.6s (prepare environment: 0.6s, import torch: 2.6s, import gradio: 0.7s, setup paths: 0.7s, initialize shared: 0.2s, other imports: 0.5s, list SD models: 0.2s, load scripts: 2.1s, refresh VAE: 0.1s, create ui: 3.4s, gradio launch: 0.4s).

img2img: topnotch artstyle
debugs = []
use_extra_elements = True

  • mm_max_models = 7
    config hash = 2733d0e96220606eb11a46bc48f171b0c6c24c67b4b6e2cdcad8e7bc11cbf64d
  • mm_use [True, False, False, False, False, False, False]
  • model_a SDXL\sd_xl_base_1.0_inpainting_0.1.safetensors [fe1b97fe65]
  • base_model None
  • max_models 7
  • models ['SDXL\2023-11-28 - Topnotch Artstyle (Mostly Ruler-Math) - RealVision Try - 17img (20repeats) - 900reg - b2 - 4000max-step00005400.safetensors']
  • modes ['Sum']
  • calcmodes ['Normal']
  • usembws [[]]
  • weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  • alpha [0.5]
  • adjust
  • use elemental [False]
  • elementals ['']
    model_a = SDXL_sd_xl_base_1.0_inpainting_0.1
    Loading SDXL\sd_xl_base_1.0_inpainting_0.1.safetensors [fe1b97fe65] from loaded model...
  • loading script.patches...
  • base lora_patch
    Applying attention optimization: sdp-no-mem... done.
    isxl = True , sd2 = False
    compact_mode = False
  • check possible UNet partial update...
  • partial changed blocks = ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08']
  • UNet partial update mode
    Loading model SDXL_2023-11-28 - Topnotch Artstyle (Mostly Ruler-Math) - RealVision Try - 17img (20repeats) - 900reg - b2 - 4000max-step00005400...
    Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-11-28 - Topnotch Artstyle (Mostly Ruler-Math) - RealVision Try - 17img (20repeats) - 900reg - b2 - 4000max-step00005400.safetensors...
    Calculating sha256 for e:\Stable Diffusion Checkpoints\SDXL\2023-11-28 - Topnotch Artstyle (Mostly Ruler-Math) - RealVision Try - 17img (20repeats) - 900reg - b2 - 4000max-step00005400.safetensors: 3183e182eb1a9439f63712afe40c9933a3f20da78655647392b1e337199e278f
    mode = Sum, alpha = 0.5
    Stage #1/2: 0%| | 0/2255 [00:00<?, ?it/s]
    *** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
    File "D:\stable-diffusion-webui\modules\scripts.py", line 776, in before_process
    script.before_process(p, *script_args)
    File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3740, in before_process
    theta_0[key] = weighted_sum(theta_0[key], theta_1[key], alpha)
    File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3435, in _torch_lerp
    return torch.lerp(theta0.to(torch.float32), theta1.to(torch.float32), alpha).to(theta0.dtype)
    RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1

[-] ADetailer: img2img inpainting detected. adetailer disabled.
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:01<00:00, 13.97it/s]

img2img: topnotch artstyle
debugs = []
use_extra_elements = True

  • mm_max_models = 7
    config hash = 2733d0e96220606eb11a46bc48f171b0c6c24c67b4b6e2cdcad8e7bc11cbf64d
  • mm_use [True, False, False, False, False, False, False]
  • model_a SDXL\sd_xl_base_1.0_inpainting_0.1.safetensors [fe1b97fe65]
  • base_model None
  • max_models 7
  • models ['SDXL\2023-11-28 - Topnotch Artstyle (Mostly Ruler-Math) - RealVision Try - 17img (20repeats) - 900reg - b2 - 4000max-step00005400.safetensors']
  • modes ['Sum']
  • calcmodes ['Normal']
  • usembws [[]]
  • weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  • alpha [0.5]
  • adjust
  • use elemental [False]
  • elementals ['']
    model_a = SDXL_sd_xl_base_1.0_inpainting_0.1
    Loading SDXL\sd_xl_base_1.0_inpainting_0.1.safetensors [fe1b97fe65] from loaded model...
  • base lora_patch
    Applying attention optimization: sdp-no-mem... done.
    isxl = True , sd2 = False
    compact_mode = False
  • check possible UNet partial update...
  • partial changed blocks = ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08']
  • UNet partial update mode
    Loading model SDXL_2023-11-28 - Topnotch Artstyle (Mostly Ruler-Math) - RealVision Try - 17img (20repeats) - 900reg - b2 - 4000max-step00005400...
    Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-11-28 - Topnotch Artstyle (Mostly Ruler-Math) - RealVision Try - 17img (20repeats) - 900reg - b2 - 4000max-step00005400.safetensors...
    mode = Sum, alpha = 0.5
    Stage #1/2: 0%| | 0/2255 [00:00<?, ?it/s]
    *** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
    File "D:\stable-diffusion-webui\modules\scripts.py", line 776, in before_process
    script.before_process(p, *script_args)
    File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3740, in before_process
    theta_0[key] = weighted_sum(theta_0[key], theta_1[key], alpha)
    File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3435, in _torch_lerp
    return torch.lerp(theta0.to(torch.float32), theta1.to(torch.float32), alpha).to(theta0.dtype)
    RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1

[-] ADetailer: img2img inpainting detected. adetailer disabled.
100%|██████████| 16/16 [00:04<00:00, 3.27it/s]

img2img: topnotch artstyle
debugs = []
use_extra_elements = True

  • mm_max_models = 7
    config hash = 571332e681bd53d1c5055e1cf25b8077992c865dcf08f321cc7e9f1b8e457634
  • mm_use [True, False, False, False, False, False, False]
  • model_a SDXL\sd_xl_base_1.0_inpainting_0.1.safetensors [fe1b97fe65]
  • base_model None
  • max_models 7
  • models ['SDXL\2024-01-17 - Topnotch Artstyle - 20img - Chips and tree landscapes- 16x9-step00002100.safetensors [a711139531]']
  • modes ['Sum']
  • calcmodes ['Normal']
  • usembws [[]]
  • weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  • alpha [0.5]
  • adjust
  • use elemental [False]
  • elementals ['']
    model_a = SDXL_sd_xl_base_1.0_inpainting_0.1
    Loading SDXL\sd_xl_base_1.0_inpainting_0.1.safetensors [fe1b97fe65] from loaded model...
  • base lora_patch
    Applying attention optimization: sdp-no-mem... done.
    isxl = True , sd2 = False
    compact_mode = False
  • check possible UNet partial update...
  • partial changed blocks = ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08']
  • UNet partial update mode
    Loading model SDXL_2024-01-17 - Topnotch Artstyle - 20img - Chips and tree landscapes- 16x9-step00002100...
    Loading from file e:\Stable Diffusion Checkpoints\SDXL\2024-01-17 - Topnotch Artstyle - 20img - Chips and tree landscapes- 16x9-step00002100.safetensors...
    mode = Sum, alpha = 0.5
    Stage #1/2: 0%| | 0/2255 [00:00<?, ?it/s]
    *** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
    File "D:\stable-diffusion-webui\modules\scripts.py", line 776, in before_process
    script.before_process(p, *script_args)
    File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3740, in before_process
    theta_0[key] = weighted_sum(theta_0[key], theta_1[key], alpha)
    File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3435, in _torch_lerp
    return torch.lerp(theta0.to(torch.float32), theta1.to(torch.float32), alpha).to(theta0.dtype)
    RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1

SDXL models trained w/ "One Trainer" are rejected w/ warning, "Warning model_a is SDNone but model_b is SDXL"

Hey,

I think this is an issue/quirk on the OneTrainer side, but due to some difference in the way the header (maybe?) is formatted, some applications read SDXL models trained w/ OneTrainer as broken. I asked in their Discord why models trained w/ OneTrainer might be reported incorrectly, and apparently it's a known thing; the consensus seemed to be that Kohya's training script just does something differently that some applications look for/expect.

The models I've trained w/ OneTrainer (which gained a lot of popularity recently due to a nice GUI and a very active developer) have never failed to work anywhere for inferencing, training (w/ Kohya), or even merging when it's been allowed w/o the warning/rejection. On the other hand, A1111's "Model Toolkit" extension is one other place I've found OT-trained models listed as corrupt/invalid... so I don't know what the difference is.

Regardless, based on this, can you allow for an override if a model is reported as "SDNone"? I honestly think I've gotten it to accept my OT models anyway, but I can't remember how I did it. I don't know if I had to use a OneTrainer model as the 1st one and could then use "standard"/Kohya models as B, C, etc., or perhaps it was just an older version.

An override would allow people to still use an OT model until whatever the difference is can be sorted out. It's not an SD 1.5 vs. SDXL type issue; it's something minor w/ the header/file-info (afaik). I could upload a model that's been exported w/ OT if that'd be helpful... but they are 6 GB, obviously...

https://github.com/Nerogar/OneTrainer
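Model-type detection in most tools keys off characteristic state-dict entries read from the safetensors header, so a checkpoint whose trainer omits or renames a probe key can be classified as "SDNone" even though the weights are fine. A sketch of that style of detection (the probe keys below are common conventions, not necessarily the exact ones this extension checks):

```python
import json
import struct

def detect_sd_type(path):
    # read only the safetensors header: an 8-byte little-endian length
    # followed by a JSON table of tensor names
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        keys = set(json.loads(f.read(header_len)))
    if any(k.startswith("conditioner.embedders.1") for k in keys):
        return "SDXL"      # SDXL's second text encoder
    if any(k.startswith("cond_stage_model.") for k in keys):
        return "SD1/SD2"   # SD 1.x/2.x text encoder prefix
    return "SDNone"        # nothing matched: rejected with the warning above
```

If OneTrainer writes any of these probe keys under a slightly different name, this kind of check fails even though the model is loadable, which would explain the "SDNone" rejection.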


Dare issue - RuntimeError: "bernoulli_tensor_cpu_self_" not implemented for 'Half'

It doesn't seem to matter whether or not "use fp16" is enabled.

 - loading sd_modelmixer.hyper...
 - set search lower, upper = -0.2 0.2
 - fix request parameter order...
debugs =  ['elemental merge']
use_extra_elements =  True
 - mm_max_models =  3
config hash =  be072c7e93b620844dfd390c276fd823fffb9c179f795df9362365309d4d75a9
  - mm_use [True, False, False]
  - model_a umbra_mecha.fp16.safetensors [80da973b09]
  - base_model None
  - max_models 3
  - models ['tpn34pdfv10js2ts05tensoradjust.fp16.safetensors [cf4f62151c]']
  - modes ['DARE']
  - calcmodes ['Inv. Cosine']
  - usembws [['ALL']]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5]
  - adjust
  - use elemental [False]
  - elementals ['']
  - Parse elemental merge...
model_a = umbra_mecha.fp16
Loading umbra_mecha.fp16.safetensors [80da973b09] from loaded model...
 - base lora_patch
Applying attention optimization: xformers... done.
isxl = True , sd2 = False
compact_mode =  True
 - check possible UNet partial update...
 - partial changed blocks =  ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08']
 - UNet partial update mode
Open state_dict from file D:\stable-diffusion-webui\models\Stable-diffusion\tpn34pdfv10js2ts05tensoradjust.fp16.safetensors...
mode = DARE, mbw mode, alpha = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
 - Use Inv. Cosine merge
Stage #1/3:   0%|                                                                             | 0/2263 [00:00<?, ?it/s]
*** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\scripts.py", line 776, in before_process
        script.before_process(p, *script_args)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3979, in before_process
        ret = cosim(theta0, theta1, calcmodes[n])
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3974, in cosim
        theta0 = dare_merge(theta0, theta1, alpha, 0.5)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3609, in dare_merge
        m = torch.bernoulli(torch.full_like(input=theta0, fill_value=p)).to(device="gpu")
    RuntimeError: "bernoulli_tensor_cpu_self_" not implemented for 'Half'

When cpu is selected:

Stage #1/3:   0%|                                                                             | 0/2263 [00:00<?, ?it/s]
*** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\scripts.py", line 776, in before_process
        script.before_process(p, *script_args)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3979, in before_process
        ret = cosim(theta0, theta1, calcmodes[n])
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3974, in cosim
        theta0 = dare_merge(theta0, theta1, alpha, 0.5)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3608, in dare_merge
        if calc_settings.index("GPU"):
    ValueError: 'GPU' is not in list
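Both tracebacks above point at small implementation pitfalls rather than bad inputs: `torch.bernoulli` has no CPU kernel for float16, so the DARE drop mask has to be sampled in float32, and `list.index("GPU")` raises ValueError when "GPU" is absent (and returns a falsy 0 when it is the first element), so the device check should be a membership test. A hedged sketch of both fixes, not the extension's actual code:

```python
import torch

def dare_mask(theta0, p, calc_settings):
    # membership test instead of .index(), which raises ValueError when
    # "GPU" is absent and returns a falsy 0 when it is the first element
    device = "cuda" if "GPU" in calc_settings else "cpu"
    # sample in float32: bernoulli has no CPU half-precision kernel
    probs = torch.full_like(theta0, p, dtype=torch.float32)
    return torch.bernoulli(probs).to(dtype=theta0.dtype, device=device)

theta = torch.zeros(4, 4, dtype=torch.float16)
m = dare_mask(theta, 0.5, ["CPU"])
print(m.dtype, m.shape)
```

The first traceback also shows `.to(device="gpu")`, which is not a valid torch device string; `"cuda"` is.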

[Feature request] Additional scorer options for auto merger

There are a few potential scorers that could be worth a try

https://huggingface.co/shadowlilac/aesthetic-shadow-v2 - Anime. Large model, maybe needs to be converted to fp16. Was used for the Animagine v3 datasets
https://huggingface.co/Eugeoter/waifu-scorer-v2 - Anime. Small. By one of the better anime finetune makers. Haven't tested yet.
https://huggingface.co/yuvalkirstain/PickScore_v1 - Large model, not 100% sure about the function but I've used it before.

Idea from here, along with some other implementations like BLIP, CLIP, HPSv2, etc.

Maybe multiple scorers could be run together and averaged out to get more opinions. It could also be nice to implement a manual scoring option; that wouldn't strictly fall under "auto merging", I guess, but it would be useful nonetheless.
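The averaging idea reduces to normalizing each scorer's output to a common range and taking a weighted mean. A minimal sketch, with hypothetical scorer callables standing in for the linked models:

```python
def combined_score(image, scorers, weights=None):
    """Average normalized scores from several scorers.

    `scorers` is a list of (fn, lo, hi) tuples, where fn(image) returns a raw
    score and [lo, hi] is that scorer's expected output range. The callables
    here are placeholders for real models like aesthetic-shadow or PickScore.
    """
    weights = weights or [1.0] * len(scorers)
    total = 0.0
    for (fn, lo, hi), w in zip(scorers, weights):
        normalized = (fn(image) - lo) / (hi - lo)   # map raw score to [0, 1]
        total += w * normalized
    return total / sum(weights)

# two toy scorers with different native ranges
scorers = [(lambda img: 7.5, 0.0, 10.0), (lambda img: 0.6, 0.0, 1.0)]
print(combined_score(None, scorers))
```

The per-scorer ranges matter: without normalization, a 0-10 aesthetic scorer would dominate a 0-1 preference scorer in the mean.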

In 1.8.0 it is not possible to create the difference between two models

I am trying to compute the difference between two SDXL models, but with WebUI 1.8.0 it doesn't seem to be possible.
This is my workflow:

  1. I select Difference between base and current.
  2. I load the two models
  3. I enable without LoRAs.
  4. I name the new LoRA.
  5. I press save

In case it is a LyCORIS I get the following error:
No lycoris module found No module named 'diffusers'
In the case of a LoRA
No scripts.kohya.* modules found. ERROR: No module named 'diffusers'

On 1.7.0 I have no errors at all.

Error and I don't know what's wrong

I don't know what the "mm finetune" and "max models" settings are.
I have model A selected, and even had the base model and model B set;
then I tried without the base model.

*** Error running before_process: C:\Users\123ky\Documents\Automatic 1111\FULL BACKUP\extensions\sd-webui-model-mixer\scripts\model_mixer.py
Traceback (most recent call last):
File "C:\Users\123ky\Documents\Automatic 1111\FULL BACKUP\modules\scripts.py", line 511, in before_process
script.before_process(p, *script_args)
TypeError: ModelMixerScript.before_process() missing 5 required positional arguments: 'enabled', 'model_a', 'base_model', 'mm_max_models', and 'mm_finetune'

Negative weights?

I just found out I can set negative weights in MBW mode. Is this a new feature?
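For what it's worth, negative weights are mathematically well defined: the weighted sum (1 - α)·A + α·B is a line through the two models, and α outside [0, 1] extrapolates along it (α < 0 pushes the result away from B). A quick check with `torch.lerp`, which a plain Sum merge reduces to:

```python
import torch

a = torch.tensor([1.0])
b = torch.tensor([3.0])

print(torch.lerp(a, b, 0.5))   # ordinary interpolation, midway between a and b
print(torch.lerp(a, b, -0.5))  # extrapolates away from b
print(torch.lerp(a, b, 1.5))   # extrapolates past b
```

Whether extrapolated weights produce a usable model is a separate question, but nothing in the math forbids them.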

Error when try to bake a VAE

Baking in VAE from L:\A1111 Portable\sd.webui\webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Bake in VAE...:   0%|                                                                          | 0/248 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "L:\A1111 Portable\sd.webui\system\python\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "L:\A1111 Portable\sd.webui\system\python\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "L:\A1111 Portable\sd.webui\system\python\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "L:\A1111 Portable\sd.webui\system\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "L:\A1111 Portable\sd.webui\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "L:\A1111 Portable\sd.webui\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "L:\A1111 Portable\sd.webui\system\python\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "L:\A1111 Portable\sd.webui\webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 2561, in save_current_model
    state_dict[key_name] = copy.deepcopy(vae_dict[key])
AttributeError: 'function' object has no attribute 'deepcopy'

Thanks for checking it.
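`'function' object has no attribute 'deepcopy'` means that at the call site the name `copy` is bound to a function rather than to the `copy` module, i.e. something in the surrounding scope shadows the import. A minimal reproduction of the symptom (the shadowing assignment here is illustrative, not the extension's actual code):

```python
import copy

def save_model(vae_dict):
    # in the real code some assignment shadows the module name;
    # a local function called `copy` reproduces the exact symptom
    copy = lambda x: x
    try:
        return copy.deepcopy(vae_dict)
    except AttributeError as e:
        return e

err = save_model({"k": 1})
print(err)  # 'function' object has no attribute 'deepcopy'
```

The usual fix is to rename the shadowing variable, or to call `copy.deepcopy` via an alias such as `import copy as copy_module`.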

fix fake checkbox activator bug

The fake checkbox only works by clicking the checkbox itself, whereas the original checkbox could be toggled by clicking on the label.

Rebasin issues (tested against A1111 master and A1111 dev, both clean installed)

If Rebasin is selected, regardless of whether Fast Rebasin is selected, the following error message appears and the merge fails:

...
File "xxxxxxxx\sd-webui-model-mixer\scripts\rebasin\weight_matching.py", line 837, in weight_matching
perm_sizes = {p: params_a[axes[0][0]].shape[axes[0][1]] for p, axes in ps.perm_to_axes.items() if axes[0][0] in params_b}
AttributeError: 'NoneType' object has no attribute 'perm_to_axes'

Tried with several models and with several merge methods. The error does not occur if Rebasin is not selected.

I cloned A1111 master and dev again into different directories to do clean-install checks with the extension, and it still gives that error.

-B
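`'NoneType' object has no attribute 'perm_to_axes'` means `weight_matching` was handed `permutation_spec=None`, i.e. no permutation spec could be built for this model architecture. A defensive guard would at least turn the crash into a readable message; this is a sketch around a hypothetical matcher callable, not the extension's real signature:

```python
def safe_weight_matching(spec, params_a, params_b, matcher):
    # fail loudly with a readable message instead of an AttributeError
    # deep inside the matching loop
    if spec is None:
        raise ValueError(
            "no permutation spec for this model architecture; "
            "rebasin is not supported for it"
        )
    return matcher(spec, params_a, params_b)

err = None
try:
    safe_weight_matching(None, {}, {}, lambda s, a, b: {})
except ValueError as e:
    err = e
print(err)
```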

Error: 'NoneType' object has no attribute 'strip'

Any idea what this is? I used the auto merger yesterday, but I don't know what I'm doing differently today.

 - loading sd_modelmixer.hyper...
 - set search lower, upper = -0.4 0.4
 - fix request parameter order...
####################  Auto merger using Hyperactive  ####################
 - search_space keys = dict_keys(['model_b.BASE', 'model_b.IN00', 'model_b.IN01', 'model_b.IN02', 'model_b.IN03', 'model_b.IN04', 'model_b.IN05', 'model_b.IN06', 'model_b.IN07', 'model_b.IN08', 'model_b.M00', 'model_b.OUT00', 'model_b.OUT01', 'model_b.OUT02', 'model_b.OUT03', 'model_b.OUT04', 'model_b.OUT05', 'model_b.OUT06', 'model_b.OUT07', 'model_b.OUT08', 'model_c.alpha'])
 - warm_start =  {'model_b.BASE': 0.5, 'model_b.IN00': 0.5, 'model_b.IN01': 0.5, 'model_b.IN02': 0.5, 'model_b.IN03': 0.5, 'model_b.IN04': 0.5, 'model_b.IN05': 0.5, 'model_b.IN06': 0.5, 'model_b.IN07': 0.5, 'model_b.IN08': 0.5, 'model_b.M00': 0.5, 'model_b.OUT00': 0.5, 'model_b.OUT01': 0.5, 'model_b.OUT02': 0.5, 'model_b.OUT03': 0.5, 'model_b.OUT04': 0.5, 'model_b.OUT05': 0.5, 'model_b.OUT06': 0.5, 'model_b.OUT07': 0.5, 'model_b.OUT08': 0.5, 'model_c.alpha': 0.5}
 - search type =  BayesianOptimizer SimulatedAnnealingOptimizer
{'xi': 0.3, 'max_sample_size': 10000000, 'sampling': {'random': 1000000}, 'rand_rest_p': 0.0} {'epsilon': 0.03, 'distribution': 'normal', 'n_neighbours': 3, 'start_temp': 1.0, 'annealing_rate': 0.97}
 - opt_strategy =  <hyperactive.optimizers.strategies.custom_optimization_strategy.CustomOptimizationStrategy object at 0x0000022E096AA170> <class 'hyperactive.optimizers.strategies.custom_optimization_strategy.CustomOptimizationStrategy'>
Error: 'NoneType' object has no attribute 'strip'

The log message just says "Failed to call hyper.run()"

Thought it happened because I added a model C, but I disabled it and I still get the same error :(

auto merge support

It would be nice to support auto MBW's approach.

  • auto MBW does not have an intuitive UI.
  • auto MBW v2 supports the Hyperactive optimizer
    • auto MBW v2 has a too-complicated UI.
  • auto MBW uses image scoring with a classifier, but it seems to be slower than expected
  • could be used to fit a model. see also https://github.com/wkpark/sd-model-analyzer

Produces an error during work

A1111 is 1.7
mode = Add-Diff, alpha = 0.5
Stage #1/2: 100%|███████████████████████████████████████████████████████████████████| 878/878 [00:03<00:00, 253.78it/s]
Check uninitialized #2/2: 100%|██████████████████████████████████████████████████| 878/878 [00:00<00:00, 176040.87it/s]
Rebasin calc...
*** Error running before_process: E:\SD\automatic1111\extensions\sd-webui-model-mixer\scripts\model_mixer.py
Traceback (most recent call last):
File "E:\SD\automatic1111\modules\scripts.py", line 710, in before_process
script.before_process(p, *script_args)
File "E:\SD\automatic1111\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3005, in before_process
first_permutation, y = weight_matching(permutation_spec, models["model_a"], theta_0, usefp16=usefp16, device=device, full=fullmatching, lap=laplib)
File "E:\SD\automatic1111\extensions\sd-webui-model-mixer\scripts\rebasin\weight_matching.py", line 837, in weight_matching
perm_sizes = {p: params_a[axes[0][0]].shape[axes[0][1]] for p, axes in ps.perm_to_axes.items() if axes[0][0] in params_b}
AttributeError: 'NoneType' object has no attribute 'perm_to_axes'

---

This extension now breaks A1111 WebUI with --no-gradio-queue enabled

I installed this extension and now I am unable to use Automatic1111's webui unless I disable the --no-gradio-queue flag. But disabling it causes my console output to be flooded with HTTP POST messages, which clutters up the output.

The error is as follows:

Traceback (most recent call last):
  File "D:\AI\stable-diffusion-webui\launch.py", line 48, in <module>
    main()
  File "D:\AI\stable-diffusion-webui\launch.py", line 44, in main
    start()
  File "D:\AI\stable-diffusion-webui\modules\launch_utils.py", line 436, in start
    webui.webui()
  File "D:\AI\stable-diffusion-webui\webui.py", line 79, in webui
    app, local_url, share_url = shared.demo.launch(
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1858, in launch
    self.validate_queue_settings()
  File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1690, in validate_queue_settings
    raise ValueError("Progress tracking requires queuing to be enabled.")
ValueError: Progress tracking requires queuing to be enabled.

Is enabling the queue really necessary?

Previews don't match saved merge

I was using this extension to fix a merge that had contiguous tensor errors in other extensions and programs (thanks for that!)

During my testing I was attempting to simply pass through an existing model with no actual merging, so I set the multiplier on model B to 0. Clicked generate, confirmed a merge didn't change anything, then saved the checkpoint.

Switched to the new checkpoint and generated the same image using the same seed and prompt, and the image is different.

My first assumption was that something in the contiguous repair had changed the model but after more testing, it seems like no matter what model I use, merging it with nothing shows a preview image differing from what actually saves to file.

This leads me to believe that something IS being merged or changed during 'live' generation, but it's not saved with the checkpoint.

I've attached 3 images with metadata intact.

First is before merger.
20231213_215022_1352062077

Second is the preview of merging the checkpoint with itself (B) set to 0 alpha (should produce identical checkpoint and images).
20231213_215137_1352062077

Third is generated after checkpoint is saved after step 2 above.
20231213_215807_1352062077

The first and third images are nearly identical despite the different preview before the merge.

EXTRA NOTE:
While writing this up and re-testing everything to make sure I was testing properly, I found that the first and third images ARE in fact different, but they shouldn't be; and even if they are supposed to be different, they still don't match the non-saved merge.

tl;dr - The image displayed testing a merge is not the image generated after saving that merge. Expected behavior is that these match exactly.

[New UI] - The Controlnet developers have released a custom fork of A1111 called "Forge" that is recoded for performance increases, "Unet Patching", and new functionality - Likely will need to review/update Model Mixer to get it working properly

Hello!

In case you haven't seen yet, the developers of Controlnet (also "Fooocus") have released a custom fork of A1111 they're calling "Forge".

Based on the changes that were made (almost all back end/performance based) I don't think Model Mixer works w/ it properly out of the box. I did run a quick test and received popup errors when loading models and it didn't seem to function on attempts to generate. One error example:

Details

Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
*** Error executing callback app_started_callback for D:\stable-diffusion-webui-forge\extensions\sd-webui-supermerger\scripts\GenParamGetter.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui-forge\modules\script_callbacks.py", line 142, in app_started_callback
        c.callback(demo, app)
      File "D:\stable-diffusion-webui-forge\extensions\sd-webui-supermerger\scripts\GenParamGetter.py", line 90, in get_params_components
        inputs=[*components.msettings,components.esettings1,*components.genparams,*components.hiresfix,*components.lucks,components.currentmodel,components.dfalse,*components.txt2img_params],
    TypeError: Value after * must be an iterable, not NoneType

---
Startup time: 10.9s (prepare environment: 1.8s, import torch: 1.7s, import gradio: 0.5s, setup paths: 0.3s, other imports: 0.3s, list SD models: 0.2s, load scripts: 2.1s, refresh VAE: 0.1s, create ui: 3.1s, gradio launch: 0.4s, app_started_callback: 0.2s).
Traceback (most recent call last):
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1434, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1297, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1272, in validate_outputs
    raise ValueError(
ValueError: An event handler (config_sdxl) didn't receive enough output values (needed: 33, received: 31).
Wanted outputs:
    [dropdown, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, slider, dropdown, dropdown, dropdown, textbox, textbox, textbox]
Received outputs:
    [{'choices': ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': True, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'visible': False, '__type__': 'generic_update'}, {'choices': ['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'choices': ['ALL', 'BASE', 'INP*', 'MID', 'OUT*', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08'], '__type__': 'generic_update'}, {'label': 
'Merge Block Weights: BASE,IN00,IN02,...IN08,M00,OUT00,...,OUT08', '__type__': 'generic_update'}, {'label': 'Merge Block Weights: BASE,IN00,IN02,...IN08,M00,OUT00,...,OUT08', '__type__': 'generic_update'}]
Traceback (most recent call last):
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 2366, in recipe_update
    if "Sum" in modes[n]:
TypeError: argument of type 'bool' is not iterable
debugs =  ['elemental merge']
use_extra_elements =  True
 - mm_max_models =  2
*** Error running before_process: D:\stable-diffusion-webui-forge\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui-forge\modules\scripts.py", line 790, in before_process
        script.before_process(p, *script_args)
      File "D:\stable-diffusion-webui-forge\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3071, in before_process
        if type(alpha) == str: alpha = float(alpha)
    ValueError: could not convert string to float: 'Sum'

---
To load target model SDXL
Begin to load 1 model
Moving model(s) has taken 0.96 seconds
100%|██████████| 20/20 [00:03<00:00,  6.65it/s]
To load target model AutoencoderKL
Begin to load 1 model
Total progress: 100%|██████████| 20/20 [00:03<00:00,  6.19it/s]
debugs =  ['elemental merge']
use_extra_elements =  True
 - mm_max_models =  2
*** Error running before_process: D:\stable-diffusion-webui-forge\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui-forge\modules\scripts.py", line 790, in before_process
        script.before_process(p, *script_args)
      File "D:\stable-diffusion-webui-forge\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3071, in before_process
        if type(alpha) == str: alpha = float(alpha)
    ValueError: could not convert string to float: 'Sum'

Their fork seems to have been reworked in a way that might make MM run smoother once compatibility is fixed.

I hope you can check "Forge" out when you get some time and assess what needs to be fixed or updated to use MM w/ it.

Thanks!!

Repo link: https://github.com/lllyasviel/stable-diffusion-webui-forge

Their description: (screenshot omitted)

Some models error with, " File does not contain tensor conditioner.embedders.0.transformer.text_model.embeddings.position_ids"

Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 16.3s (prepare environment: 0.9s, import torch: 4.3s, import gradio: 0.9s, setup paths: 1.0s, initialize shared: 0.2s, other imports: 0.5s, list SD models: 0.5s, load scripts: 2.7s, refresh VAE: 0.1s, create ui: 4.3s, gradio launch: 0.4s, app_started_callback: 0.6s).
SDXL\2024-04-11 - PXR Artstyle - Realvision 4 - 1.2e-5 - b2 - 20rep-step00002000.safetensors [e1302c57ff] found
SDXL\2024-03-16 - OT - Pxr artstyle (no captions now) - 3998-5-874.safetensors [87ada3497b] found
SDXL\2024-03-22 - Supermerge - PXR - Lightning - Training today - Topnotch - Captioned - Pixar - mboard.safetensors [d05ee96673] found
SDXL\2024-04-08 - cont - pxr artstyle Retry during nap - base - 50img 2e-5 -b3-step00000500.safetensors [9e456a1057] found
debugs = ['elemental merge']
use_extra_elements = True

  • mm_max_models = 14
    config hash = 21604a8be0dae915d7a7e4e5038fe20726fc1e142ea7065755dd29cf9b0ca8d1
  • mm_use [True, True, True, True, False, False, False, False, False, False, False, False, False, False]
  • model_a SDXL\2024-04-11 - PXR Artstyle - Realvision 4 - 1.2e-5 - b2 - 20rep-step00002000.safetensors [e1302c57ff]
  • base_model None
  • max_models 14
  • models ['SDXL\2024-04-08 - cont - pxr artstyle Retry during nap - base - 50img 2e-5 -b3-step00000500.safetensors [9e456a1057]', 'SDXL\2024-04-11 - PXR Artstyle - Realvision 4 - 1.2e-5 - b2 - 20rep-step00004000.safetensors', 'SDXL\2024-03-16 - OT - Pxr artstyle (no captions now) - 3998-5-874.safetensors [87ada3497b]', 'SDXL\2024-03-22 - Supermerge - PXR - Lightning - Training today - Topnotch - Captioned - Pixar - mboard.safetensors [d05ee96673]']
  • modes ['Sum', 'Sum', 'Sum', 'Sum']
  • calcmodes ['Normal', 'Normal', 'Normal', 'Normal']
  • usembws [[], [], [], []]
  • weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  • alpha [0.5, 0.5, 0.5, 0.5]
  • adjust
  • use elemental [False, False, False, False]
  • elementals ['', '', '', '']
  • Parse elemental merge...
    model_a = SDXL_2024-04-11 - PXR Artstyle - Realvision 4 - 1.2e-5 - b2 - 20rep-step00002000
    Loading SDXL\2024-04-11 - PXR Artstyle - Realvision 4 - 1.2e-5 - b2 - 20rep-step00002000.safetensors [e1302c57ff] from loaded model...
  • base lora_patch
    Applying attention optimization: sdp-no-mem... done.
    isxl = True , sd2 = False
    compact_mode = False
  • check possible UNet partial update...
  • partial changed blocks = ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08']
  • UNet partial update mode
    Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-04-08 - cont - pxr artstyle Retry during nap - base - 50img 2e-5 -b3-step00000500.safetensors...
    mode = Sum, alpha = 0.5
    Stage #1/5: 74%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 1676/2263 [00:02<00:00, 787.22it/s]
    *** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\scripts.py", line 817, in before_process
        script.before_process(p, *script_args)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3912, in before_process
        theta_1[key] = theta_1f.get_tensor(key)
    safetensors_rust.SafetensorError: File does not contain tensor conditioner.embedders.0.transformer.text_model.embeddings.position_ids
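For reference, the `SafetensorError` is raised because `model_mixer.py` calls `get_tensor()` for a key that the second checkpoint simply does not contain (some SDXL checkpoints are saved without `text_model.embeddings.position_ids`). A minimal sketch of a defensive merge step, using a plain dict in place of the safetensors file handle; `merge_key()` is a hypothetical helper, not code from the extension:

```python
def merge_key(theta_0, theta_1_file, key, alpha):
    """Sum-merge one key; fall back to model A's tensor when the
    second checkpoint does not contain it (e.g. position_ids,
    which some SDXL checkpoints omit)."""
    if key not in theta_1_file:  # real safetensors: `key in theta_1f.keys()`
        return theta_0[key]      # keep model A's tensor unchanged
    return (1 - alpha) * theta_0[key] + alpha * theta_1_file[key]

# toy example with floats standing in for tensors
a = {"w": 1.0, "position_ids": 5.0}
b = {"w": 3.0}                   # missing "position_ids"
merged = {k: merge_key(a, b, k, 0.5) for k in a}
```

With the real safetensors API the membership test would be against `theta_1f.keys()`; whether falling back to model A's tensor is the right behavior here is an assumption, not the extension's actual fix.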

Error after update

Hello,

I updated today for the first time in a week or two and have gotten this type of error twice now after running a few successful merge generations. Unfortunately I don't have time to narrow down reproduction steps at the moment, but I wanted to report the error in case it is informative:

Details

2024-05-28 21:02:11,668 - ControlNet - INFO - unit_separate = False, style_align = False | 364/82004 [01:25<3:28:22, 6.53it/s]
2024-05-28 21:02:11,668 - ControlNet - INFO - Loading model from cache: Canny-xinsir-sdxl - model_V2 [ab7dc06d]
2024-05-28 21:02:11,690 - ControlNet - INFO - Using preprocessor: canny
2024-05-28 21:02:11,690 - ControlNet - INFO - preprocessor resolution = 512
2024-05-28 21:02:11,868 - ControlNet - INFO - ControlNet Hooked - Time = 0.205000638961792
46%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 12/26 [00:02<00:03, 4.57it/s]
Total progress: 0%|█ | 376/82004 [01:31<5:31:26, 4.10it/s]
debugs = ['elemental merge'] | 376/82004 [01:31<4:46:45, 4.74it/s]
use_extra_elements = True

  • mm_max_models = 8
    config hash = 48473657badf5fd71c75986fcfaeb2b00dd1d84390ed27d83950c60a4fc85202
  • mm_use [True, True, True, False, False, False, False, False]
  • model_a SDXL\2024-03-31 - Davematthews Person - Awesome Mix (Model Mixer) - Tops.safetensors [0fdd81785f]
  • base_model None
  • max_models 8
  • models ['SDXL\SD-Checkpoints-Fast-Backup\z-2023-12-08-Dave Matthews - GoodMix-Uses-Other-12-8-As-Base - This looked nice on XYZ at cfg 6 and 8.safetensors [ed1d2df009]', 'SDXL\2024-04-14 - Davematthews Person - Outstanding Supermerge - Mix of mixes from hidden forge.safetensors [1b08e5db2e]', 'SDXL\SD-Checkpoints-Fast-Backup\2024-05-10 - Davematthews Person (Current Day) - 10img.safetensors-step00001500.safetensors']
  • modes ['Sum', 'Sum', 'Sum']
  • calcmodes ['Normal', 'Normal', 'Normal']
  • usembws [[], [], []]
  • weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  • alpha [0.8, 0.5, 0.5]
  • adjust
  • use elemental [False, False, False]
  • elementals ['', '', '']
  • Parse elemental merge...
    model_a = SDXL_2024-03-31 - Davematthews Person - Awesome Mix (Model Mixer) - Tops
    Loading from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-31 - Davematthews Person - Awesome Mix (Model Mixer) - Tops.safetensors...
    isxl = True , sd2 = False
    compact_mode = False
  • check possible UNet partial update...
  • partial changed blocks = ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08']
  • UNet partial update mode
    Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\SD-Checkpoints-Fast-Backup\z-2023-12-08-Dave Matthews - GoodMix-Uses-Other-12-8-As-Base - This looked nice on XYZ at cfg 6 and 8.safetensors...
    mode = Sum, alpha = 0.8
    Stage #1/4: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2262/2262 [00:03<00:00, 660.82it/s]
    Check uninitialized #2/4: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2262/2262 [00:00<00:00, 452432.79it/s]
    Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-04-14 - Davematthews Person - Outstanding Supermerge - Mix of mixes from hidden forge.safetensors...
    mode = Sum, alpha = 0.5
    Stage #3/4: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2262/2262 [00:05<00:00, 452.26it/s]
    Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\SD-Checkpoints-Fast-Backup\2024-05-10 - Davematthews Person (Current Day) - 10img.safetensors-step00001500.safetensors...
    Calculating sha256 for e:\Stable Diffusion Checkpoints\SDXL\SD-Checkpoints-Fast-Backup\2024-05-10 - Davematthews Person (Current Day) - 10img.safetensors-step00001500.safetensors: f564430288b3e37b407d13037c331c71dd4b7df65e4ba7d50c487c37e7ed8cd3
    mode = Sum, alpha = 0.5
    Stage #4/4: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2262/2262 [00:03<00:00, 724.07it/s]
    Save unchanged weights #4/4: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s]
  • merge processing in 15.9s (merging: 15.9s).
  • loading scripts.patches...
  • lora patch
  • Textencoder(BASE) has been successfully updated
  • update UNet block input_blocks.0.
  • update UNet block input_blocks.1.
  • update UNet block input_blocks.2.
  • update UNet block input_blocks.3.
  • update UNet block input_blocks.4.
  • update UNet block input_blocks.5.
  • update UNet block input_blocks.6.
  • update UNet block input_blocks.7.
  • update UNet block input_blocks.8.
  • update UNet block middle_block.
  • update UNet block output_blocks.0.
  • update UNet block output_blocks.1.
  • update UNet block output_blocks.2.
  • update UNet block output_blocks.3.
  • update UNet block output_blocks.4.
  • update UNet block output_blocks.5.
  • update UNet block output_blocks.6.
  • update UNet block output_blocks.7.
  • update UNet block output_blocks.8.
  • update UNet block time_embed.
  • update UNet block out.
  • UNet partial blocks have been successfully updated
  • Reload full state_dict...
  • remove old checkpointinfo
  • unload current merged model from loaded_sd_models...
    *** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\scripts.py", line 817, in before_process
        script.before_process(p, *script_args)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 4338, in before_process
        send_model_to_cpu(sd_models.model_data.sd_model)
      File "D:\stable-diffusion-webui\modules\sd_models.py", line 672, in send_model_to_cpu
        m.to(devices.cpu)
      File "D:\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
        return super().to(*args, **kwargs)
      File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
        return self._apply(convert)
      File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
        module._apply(fn)
      File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
        module._apply(fn)
      File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
        module._apply(fn)
      [Previous line repeated 1 more time]
      File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
        param_applied = fn(param)
      File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
        return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
    NotImplementedError: Cannot copy out of meta tensor; no data!
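This `NotImplementedError` means at least one parameter was still on PyTorch's meta device (a shape-only placeholder with no storage) when `send_model_to_cpu()` called `m.to(devices.cpu)`. A minimal sketch of how such parameters could be detected before the move; `Param` and `find_meta_params` are illustrative stand-ins rather than webui code, and `is_meta` mirrors the real `torch.Tensor.is_meta` attribute:

```python
class Param:
    """Minimal stand-in for a torch parameter: a name and a device string."""
    def __init__(self, name, device):
        self.name, self.device = name, device

    @property
    def is_meta(self):  # mirrors torch.Tensor.is_meta
        return self.device == "meta"

def find_meta_params(params):
    """Return the names of parameters that hold no data and would make
    `.to(devices.cpu)` raise 'Cannot copy out of meta tensor'."""
    return [p.name for p in params if p.is_meta]

params = [Param("unet.input_blocks.0.weight", "cuda"),
          Param("cond_stage.embed", "meta")]
bad = find_meta_params(params)  # must be re-materialized before any .to()
```

In the real model this check would run over `model.parameters()`; actually re-materializing the offending tensors (e.g. from the reloaded state_dict) is the part this sketch does not show.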

2024-05-28 21:03:58,220 - ControlNet - INFO - Batch enabled (3154)
controlnet batch mode
2024-05-28 21:03:58,246 - ControlNet - INFO - unit_separate = False, style_align = False
2024-05-28 21:03:58,247 - ControlNet - INFO - Loading model from cache: Canny-xinsir-sdxl - model_V2 [ab7dc06d]
2024-05-28 21:03:58,273 - ControlNet - INFO - Using preprocessor: canny
2024-05-28 21:03:58,274 - ControlNet - INFO - preprocessor resolution = 512
2024-05-28 21:03:58,438 - ControlNet - INFO - ControlNet Hooked - Time = 0.1959998607635498
*** Error completing request
*** Arguments: ('task(7jf0rgm2nbvfcxn)', <gradio.routes.Request object at 0x000001BB7BB23310>, 'davematthews person Llama2 - Variable Movie film scenarios-Feb2-2024', '', [], 1, 1, 6, 768, 1344, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 26, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', ControlNetUnit(is_ui=True, input_mode=<InputMode.BATCH: 'batch'>, batch_images=<list_iterator object at 0x000001BABACA24D0>, output_dir='', loopback=False, enabled=True, module='canny', model='Canny-xinsir-sdxl - model_V2 [ab7dc06d]', weight=1.0, image='D:\SDXL\OneTrainer\Datasets\Shotdeck-3000\0003.jpg', resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=512, threshold_a=100.0, threshold_b=200.0, guidance_start=0.0, guidance_end=0.3, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, 
pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 
0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, 'SDXL\2024-03-31 - Davematthews Person - Awesome Mix (Model Mixer) - Tops.safetensors [0fdd81785f]', 'None', 8, '', {'save_settings': ['fp16', 'prune', 'safetensors'], 'calc_settings': ['GPU']}, True, True, True, False, False, False, False, False, 'SDXL\SD-Checkpoints-Fast-Backup\z-2023-12-08-Dave Matthews - GoodMix-Uses-Other-12-8-As-Base - This looked nice on XYZ at cfg 6 and 8.safetensors [ed1d2df009]', 'SDXL\2024-04-14 - Davematthews Person - Outstanding Supermerge - Mix of mixes from hidden forge.safetensors [1b08e5db2e]', 'SDXL\SD-Checkpoints-Fast-Backup\2024-05-10 - Davematthews Person (Current Day) - 10img.safetensors-step00001500.safetensors', 'None', 'None', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.8, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, True, True, True, [], [], [], [], [], [], 
[], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', False, False, False, False, False, False, False, False, '', '', '', '', '', '', '', '', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 'DemoFusion', False, 128, 64, 4, 2, False, 10, 1, 1, 64, False, True, 3, 1, 1, True, 0.85, 0.6, 4, False, False, 3072, 192, True, True, True, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, 
None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\txt2img.py", line 109, in txt2img
    processed = processing.process_images(p)
  File "D:\stable-diffusion-webui\modules\processing.py", line 839, in process_images
    res = process_images_inner(p)
  File "D:\stable-diffusion-webui\extensions\3sd-webui-controlnet\scripts\batch_hijack.py", line 66, in processing_process_images_hijack
    processed = self.process_images_cn_batch(p, *args, **kwargs)
  File "D:\stable-diffusion-webui\extensions\3sd-webui-controlnet\scripts\batch_hijack.py", line 91, in process_images_cn_batch
    processed = getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "D:\stable-diffusion-webui\modules\processing.py", line 953, in process_images_inner
    p.setup_conds()
  File "D:\stable-diffusion-webui\modules\processing.py", line 1489, in setup_conds
    super().setup_conds()
  File "D:\stable-diffusion-webui\modules\processing.py", line 500, in setup_conds
    self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
  File "D:\stable-diffusion-webui\modules\processing.py", line 486, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
  File "D:\stable-diffusion-webui\modules\prompt_parser.py", line 188, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "D:\stable-diffusion-webui\modules\sd_models_xl.py", line 32, in get_learned_conditioning
    c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 141, in forward
    emb_out = embedder(batch[embedder.input_key])
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui\repositories\generative-models\sgm\util.py", line 59, in do_autocast
    return f(*args, **kwargs)
  File "D:\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 391, in forward
    outputs = self.transformer(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 734, in forward
    causal_attention_mask = _make_causal_mask(input_shape, hidden_states.dtype, device=hidden_states.device)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 684, in _make_causal_mask
    mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
NotImplementedError: Could not run 'aten::_local_scalar_dense' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_local_scalar_dense' is only available for these backends: [CPU, CUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

CPU: registered at aten\src\ATen\RegisterCPU.cpp:31188 [kernel]
CUDA: registered at aten\src\ATen\RegisterCUDA.cpp:44143 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:153 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:290 [backend fallback]
Named: fallthrough registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:11 [kernel]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradCPU: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradCUDA: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradHIP: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradXLA: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradMPS: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradIPU: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradXPU: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradHPU: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradVE: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradLazy: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradMTIA: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradPrivateUse1: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradPrivateUse2: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradPrivateUse3: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradMeta: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
AutogradNestedTensor: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:18694 [autograd kernel]
Tracer: registered at ..\torch\csrc\autograd\generated\TraceType_2.cpp:17079 [kernel]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:382 [backend fallback]
AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:249 [backend fallback]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\BatchRulesDynamic.cpp:66 [kernel]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:161 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:165 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:157 [backend fallback]

System info:

Details

"Platform": "Windows-10-10.0.19045-SP0",
"Python": "3.10.11",
"Version": "v1.9.3",
"Commit": "1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0",
"Script path": "D:\\stable-diffusion-webui",
"Data path": "D:\\stable-diffusion-webui",
"Extensions dir": "D:\\stable-diffusion-webui\\extensions",
"Checksum": "be19a3f0de303f54ce2e759f78d9727330569187eb0992c629dedd0e157d58a8",
"Commandline": [
    "launch.py",
    "--opt-sdp-attention",
    "--no-half-vae",
    "--opt-channelslast",
    "--skip-torch-cuda-test",
    "--skip-version-check",
    "--ckpt-dir",
    "e:\\Stable Diffusion Checkpoints"
],
"Torch env info": {
    "torch_version": "2.1.2+cu121",
    "is_debug_build": "False",
    "cuda_compiled_version": "12.1",
    "gcc_version": null,
    "clang_version": null,
    "cmake_version": null,
    "os": "Microsoft Windows 10 Pro",
    "libc_version": "N/A",
    "python_version": "3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)",
    "python_platform": "Windows-10-10.0.19045-SP0",
    "is_cuda_available": "True",
    "cuda_runtime_version": "12.1.66\r",
    "cuda_module_loading": "LAZY",
    "nvidia_driver_version": "546.33",
    "nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 4090",
    "cudnn_version": null,
    "pip_version": "pip3",
    "pip_packages": [
        "numpy==1.26.2",
        "open-clip-torch==2.20.0",
        "pytorch-lightning==1.9.4",
        "torch==2.1.2+cu121",
        "torchdiffeq==0.2.3",
        "torchmetrics==1.3.2",
        "torchsde==0.2.6",
        "torchvision==0.16.2+cu121"
    ],
    "conda_packages": null,
    "hip_compiled_version": "N/A",
    "hip_runtime_version": "N/A",
    "miopen_runtime_version": "N/A",
    "caching_allocator_config": "",
    "is_xnnpack_available": "True",
    "cpu_info": [
        "Architecture=9",
        "CurrentClockSpeed=3200",
        "DeviceID=CPU0",
        "Family=207",
        "L2CacheSize=16384",
        "L2CacheSpeed=",
        "Manufacturer=GenuineIntel",
        "MaxClockSpeed=3200",
        "Name=Intel(R) Core(TM) i9-14900K",
        "ProcessorType=3",
        "Revision="
    ]
},
"Exceptions": [
    {
        "exception": "Could not run 'aten::_local_scalar_dense' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_local_scalar_dense' is only available for these backends: [CPU, CUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].\n\nCPU: registered at aten\\src\\ATen\\RegisterCPU.cpp:31188 [kernel]\nCUDA: registered at aten\\src\\ATen\\RegisterCUDA.cpp:44143 [kernel]\nBackendSelect: fallthrough registered at ..\\aten\\src\\ATen\\core\\BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ..\\aten\\src\\ATen\\core\\PythonFallbackKernel.cpp:153 [backend fallback]\nFuncTorchDynamicLayerBackMode: registered at ..\\aten\\src\\ATen\\functorch\\DynamicLayer.cpp:498 [backend fallback]\nFunctionalize: registered at ..\\aten\\src\\ATen\\FunctionalizeFallbackKernel.cpp:290 [backend fallback]\nNamed: fallthrough registered at ..\\aten\\src\\ATen\\core\\NamedRegistrations.cpp:11 [kernel]\nConjugate: registered at ..\\aten\\src\\ATen\\ConjugateFallback.cpp:17 [backend fallback]\nNegative: registered at ..\\aten\\src\\ATen\\native\\NegateFallback.cpp:19 [backend fallback]\nZeroTensor: registered at ..\\aten\\src\\ATen\\ZeroTensorFallback.cpp:86 [backend 
fallback]\nADInplaceOrView: fallthrough registered at ..\\aten\\src\\ATen\\core\\VariableFallbackKernel.cpp:86 [backend fallback]\nAutogradOther: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradCPU: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradCUDA: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradHIP: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradXLA: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradMPS: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradIPU: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradXPU: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradHPU: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradVE: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradLazy: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradMTIA: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradPrivateUse1: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradPrivateUse2: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradPrivateUse3: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradMeta: registered at ..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nAutogradNestedTensor: registered at 
..\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:18694 [autograd kernel]\nTracer: registered at ..\\torch\\csrc\\autograd\\generated\\TraceType_2.cpp:17079 [kernel]\nAutocastCPU: fallthrough registered at ..\\aten\\src\\ATen\\autocast_mode.cpp:382 [backend fallback]\nAutocastCUDA: fallthrough registered at ..\\aten\\src\\ATen\\autocast_mode.cpp:249 [backend fallback]\nFuncTorchBatched: registered at ..\\aten\\src\\ATen\\functorch\\BatchRulesDynamic.cpp:66 [kernel]\nFuncTorchVmapMode: fallthrough registered at ..\\aten\\src\\ATen\\functorch\\VmapModeRegistrations.cpp:28 [backend fallback]\nBatched: registered at ..\\aten\\src\\ATen\\LegacyBatchingRegistrations.cpp:1075 [backend fallback]\nVmapMode: fallthrough registered at ..\\aten\\src\\ATen\\VmapModeRegistrations.cpp:33 [backend fallback]\nFuncTorchGradWrapper: registered at ..\\aten\\src\\ATen\\functorch\\TensorWrapper.cpp:203 [backend fallback]\nPythonTLSSnapshot: registered at ..\\aten\\src\\ATen\\core\\PythonFallbackKernel.cpp:161 [backend fallback]\nFuncTorchDynamicLayerFrontMode: registered at ..\\aten\\src\\ATen\\functorch\\DynamicLayer.cpp:494 [backend fallback]\nPreDispatch: registered at ..\\aten\\src\\ATen\\core\\PythonFallbackKernel.cpp:165 [backend fallback]\nPythonDispatcher: registered at ..\\aten\\src\\ATen\\core\\PythonFallbackKernel.cpp:157 [backend fallback]\n",
        "traceback": [
            [
                "D:\\stable-diffusion-webui\\modules\\call_queue.py, line 57, f",
                "res = list(func(*args, **kwargs))"
            ],
            [
                "D:\\stable-diffusion-webui\\modules\\call_queue.py, line 36, f",
                "res = func(*args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\modules\\txt2img.py, line 109, txt2img",
                "processed = processing.process_images(p)"
            ],
            [
                "D:\\stable-diffusion-webui\\modules\\processing.py, line 839, process_images",
                "res = process_images_inner(p)"
            ],
            [
                "D:\\stable-diffusion-webui\\extensions\\3sd-webui-controlnet\\scripts\\batch_hijack.py, line 66, processing_process_images_hijack",
                "processed = self.process_images_cn_batch(p, *args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\extensions\\3sd-webui-controlnet\\scripts\\batch_hijack.py, line 91, process_images_cn_batch",
                "processed = getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\modules\\processing.py, line 953, process_images_inner",
                "p.setup_conds()"
            ],
            [
                "D:\\stable-diffusion-webui\\modules\\processing.py, line 1489, setup_conds",
                "super().setup_conds()"
            ],
            [
                "D:\\stable-diffusion-webui\\modules\\processing.py, line 500, setup_conds",
                "self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)"
            ],
            [
                "D:\\stable-diffusion-webui\\modules\\processing.py, line 486, get_conds_with_caching",
                "cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)"
            ],
            [
                "D:\\stable-diffusion-webui\\modules\\prompt_parser.py, line 188, get_learned_conditioning",
                "conds = model.get_learned_conditioning(texts)"
            ],
            [
                "D:\\stable-diffusion-webui\\modules\\sd_models_xl.py, line 32, get_learned_conditioning",
                "c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
                "return self._call_impl(*args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
                "return forward_call(*args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\encoders\\modules.py, line 141, forward",
                "emb_out = embedder(batch[embedder.input_key])"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
                "return self._call_impl(*args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
                "return forward_call(*args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\util.py, line 59, do_autocast",
                "return f(*args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\encoders\\modules.py, line 391, forward",
                "outputs = self.transformer("
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
                "return self._call_impl(*args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
                "return forward_call(*args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\transformers\\models\\clip\\modeling_clip.py, line 822, forward",
                "return self.text_model("
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
                "return self._call_impl(*args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
                "return forward_call(*args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\transformers\\models\\clip\\modeling_clip.py, line 734, forward",
                "causal_attention_mask = _make_causal_mask(input_shape, hidden_states.dtype, device=hidden_states.device)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\transformers\\models\\clip\\modeling_clip.py, line 684, _make_causal_mask",
                "mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)"
            ]
        ]
    },
    {
        "exception": "Cannot copy out of meta tensor; no data!",
        "traceback": [
            [
                "D:\\stable-diffusion-webui\\modules\\scripts.py, line 817, before_process",
                "script.before_process(p, *script_args)"
            ],
            [
                "D:\\stable-diffusion-webui\\extensions\\sd-webui-model-mixer\\scripts\\model_mixer.py, line 4338, before_process",
                "send_model_to_cpu(sd_models.model_data.sd_model)"
            ],
            [
                "D:\\stable-diffusion-webui\\modules\\sd_models.py, line 672, send_model_to_cpu",
                "m.to(devices.cpu)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\lightning_fabric\\utilities\\device_dtype_mixin.py, line 54, to",
                "return super().to(*args, **kwargs)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1160, to",
                "return self._apply(convert)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 810, _apply",
                "module._apply(fn)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 810, _apply",
                "module._apply(fn)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 810, _apply",
                "module._apply(fn)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 810, _apply",
                "module._apply(fn)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 833, _apply",
                "param_applied = fn(param)"
            ],
            [
                "D:\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1158, convert",
                "return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)"
            ]
        ]
    }
],
"CPU": {
    "model": "Intel64 Family 6 Model 183 Stepping 1, GenuineIntel",
    "count logical": 32,
    "count physical": 24
},
"RAM": {
    "total": "128GB",
    "used": "38GB",
    "free": "90GB"
},
"Extensions": [
    {
        "name": "1-sd-dynamic-prompts",
        "path": "D:\\stable-diffusion-webui\\extensions\\1-sd-dynamic-prompts",
        "version": "1567e787",
        "branch": "main",
        "remote": null
    },
    {
        "name": "3sd-webui-controlnet",
        "path": "D:\\stable-diffusion-webui\\extensions\\3sd-webui-controlnet",
        "version": "c91dbe5c",
        "branch": "main",
        "remote": "https://github.com/Mikubill/sd-webui-controlnet.git"
    },
    {
        "name": "a2-adetailer",
        "path": "D:\\stable-diffusion-webui\\extensions\\a2-adetailer",
        "version": "1edd5888",
        "branch": "main",
        "remote": "https://github.com/Bing-su/adetailer.git"
    },
    {
        "name": "b1111-sd-webui-tagcomplete",
        "path": "D:\\stable-diffusion-webui\\extensions\\b1111-sd-webui-tagcomplete",
        "version": "29b5bf07",
        "branch": "main",
        "remote": "https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git"
    },
    {
        "name": "sd-webui-aspect-ratio-helper-tweaked",
        "path": "D:\\stable-diffusion-webui\\extensions\\sd-webui-aspect-ratio-helper-tweaked",
        "version": "99fcf9b0",
        "branch": "main",
        "remote": "https://github.com/thomasasfk/sd-webui-aspect-ratio-helper.git"
    },
    {
        "name": "sd-webui-controlnet-evaclip",
        "path": "D:\\stable-diffusion-webui\\extensions\\sd-webui-controlnet-evaclip",
        "version": "05a0b881",
        "branch": "main",
        "remote": "https://github.com/huchenlei/sd-webui-controlnet-evaclip.git"
    },
    {
        "name": "sd-webui-model-mixer",
        "path": "D:\\stable-diffusion-webui\\extensions\\sd-webui-model-mixer",
        "version": "a5099276",
        "branch": "master",
        "remote": "https://github.com/wkpark/sd-webui-model-mixer.git"
    },
    {
        "name": "sd-webui-stablesr",
        "path": "D:\\stable-diffusion-webui\\extensions\\sd-webui-stablesr",
        "version": "e3a7c69c",
        "branch": "master",
        "remote": "https://github.com/pkuliyi2015/sd-webui-stablesr.git"
    },
    {
        "name": "sd-webui-zmultidiffusion-upscaler-for-automatic1111",
        "path": "D:\\stable-diffusion-webui\\extensions\\sd-webui-zmultidiffusion-upscaler-for-automatic1111",
        "version": "574a0963",
        "branch": "main",
        "remote": "https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111.git"
    }
],
"Inactive extensions": [],
"Environment": {
    "COMMANDLINE_ARGS": "--opt-sdp-attention --no-half-vae --opt-channelslast --skip-torch-cuda-test --skip-version-check --ckpt-dir \"e:\\Stable Diffusion Checkpoints\"",
    "GRADIO_ANALYTICS_ENABLED": "False"
},
"Config": {
    "ldsr_steps": 100,
    "ldsr_cached": false,
    "SCUNET_tile": 256,
    "SCUNET_tile_overlap": 8,
    "SWIN_tile": 192,
    "SWIN_tile_overlap": 8,
    "SWIN_torch_compile": false,
    "hypertile_enable_unet": false,
    "hypertile_enable_unet_secondpass": false,
    "hypertile_max_depth_unet": 3,
    "hypertile_max_tile_unet": 256,
    "hypertile_swap_size_unet": 3,
    "hypertile_enable_vae": false,
    "hypertile_max_depth_vae": 3,
    "hypertile_max_tile_vae": 128,
    "hypertile_swap_size_vae": 3,
    "dp_ignore_whitespace": false,
    "dp_write_raw_template": true,
    "dp_write_prompts_to_file": false,
    "dp_parser_variant_start": "{",
    "dp_parser_variant_end": "}",
    "dp_parser_wildcard_wrap": "__",
    "dp_limit_jinja_prompts": false,
    "dp_auto_purge_cache": true,
    "dp_wildcard_manager_no_dedupe": true,
    "dp_wildcard_manager_no_sort": true,
    "dp_wildcard_manager_shuffle": true,
    "dp_magicprompt_default_model": "Gustavosta/MagicPrompt-Stable-Diffusion",
    "dp_magicprompt_batch_size": 1,
    "control_net_detectedmap_dir": "detected_maps",
    "control_net_models_path": "",
    "control_net_modules_path": "",
    "control_net_unit_count": 3,
    "control_net_model_cache_size": 3,
    "control_net_inpaint_blur_sigma": 7,
    "control_net_no_detectmap": false,
    "control_net_detectmap_autosaving": false,
    "control_net_allow_script_control": true,
    "control_net_sync_field_args": true,
    "controlnet_show_batch_images_in_ui": false,
    "controlnet_increment_seed_during_batch": true,
    "controlnet_disable_openpose_edit": false,
    "controlnet_disable_photopea_edit": false,
    "controlnet_photopea_warning": true,
    "controlnet_ignore_noninpaint_mask": false,
    "controlnet_clip_detector_on_cpu": false,
    "controlnet_control_type_dropdown": false,
    "ad_max_models": 2,
    "ad_extra_models_dir": "",
    "ad_save_previews": false,
    "ad_save_images_before": false,
    "ad_only_seleted_scripts": true,
    "ad_script_names": "dynamic_prompting,dynamic_thresholding,lora_block_weight,negpip,soft_inpainting,wildcard_recursive,wildcards",
    "ad_bbox_sortby": "None",
    "ad_same_seed_for_each_tap": false,
    "tac_tagFile": "danbooru.csv",
    "tac_active": true,
    "tac_activeIn.txt2img": true,
    "tac_activeIn.img2img": true,
    "tac_activeIn.negativePrompts": true,
    "tac_activeIn.thirdParty": true,
    "tac_activeIn.modelList": "",
    "tac_activeIn.modelListMode": "Blacklist",
    "tac_slidingPopup": true,
    "tac_maxResults": 5,
    "tac_showAllResults": false,
    "tac_resultStepLength": 100,
    "tac_delayTime": 100,
    "tac_useWildcards": true,
    "tac_sortWildcardResults": true,
    "tac_wildcardExclusionList": "",
    "tac_skipWildcardRefresh": false,
    "tac_useEmbeddings": true,
    "tac_includeEmbeddingsInNormalResults": false,
    "tac_useHypernetworks": true,
    "tac_useLoras": true,
    "tac_useLycos": true,
    "tac_useLoraPrefixForLycos": true,
    "tac_showWikiLinks": false,
    "tac_showExtraNetworkPreviews": true,
    "tac_modelSortOrder": "Name",
    "tac_useStyleVars": false,
    "tac_frequencySort": true,
    "tac_frequencyFunction": "Logarithmic (weak)",
    "tac_frequencyMinCount": 3,
    "tac_frequencyMaxAge": 30,
    "tac_frequencyRecommendCap": 10,
    "tac_frequencyIncludeAlias": false,
    "tac_replaceUnderscores": true,
    "tac_escapeParentheses": true,
    "tac_appendComma": true,
    "tac_appendSpace": true,
    "tac_alwaysSpaceAtEnd": true,
    "tac_modelKeywordCompletion": "Never",
    "tac_modelKeywordLocation": "Start of prompt",
    "tac_wildcardCompletionMode": "To next folder level",
    "tac_alias.searchByAlias": true,
    "tac_alias.onlyShowAlias": false,
    "tac_translation.translationFile": "None",
    "tac_translation.oldFormat": false,
    "tac_translation.searchByTranslation": true,
    "tac_translation.liveTranslation": false,
    "tac_extra.extraFile": "extra-quality-tags.csv",
    "tac_extra.addMode": "Insert before",
    "tac_chantFile": "demo-chants.json",
    "tac_keymap": "{\n    \"MoveUp\": \"ArrowUp\",\n    \"MoveDown\": \"ArrowDown\",\n    \"JumpUp\": \"PageUp\",\n    \"JumpDown\": \"PageDown\",\n    \"JumpToStart\": \"\",\n    \"JumpToEnd\": \"\",\n    \"ChooseSelected\": \"Enter\",\n    \"ChooseFirstOrSelected\": \"Tab\",\n    \"Close\": \"Escape\"\n}",
    "tac_colormap": "{\n    \"danbooru\": {\n        \"-1\": [\"red\", \"maroon\"],\n        \"0\": [\"lightblue\", \"dodgerblue\"],\n        \"1\": [\"indianred\", \"firebrick\"],\n        \"3\": [\"violet\", \"darkorchid\"],\n        \"4\": [\"lightgreen\", \"darkgreen\"],\n        \"5\": [\"orange\", \"darkorange\"]\n    },\n    \"e621\": {\n        \"-1\": [\"red\", \"maroon\"],\n        \"0\": [\"lightblue\", \"dodgerblue\"],\n        \"1\": [\"gold\", \"goldenrod\"],\n        \"3\": [\"violet\", \"darkorchid\"],\n        \"4\": [\"lightgreen\", \"darkgreen\"],\n        \"5\": [\"tomato\", \"darksalmon\"],\n        \"6\": [\"red\", \"maroon\"],\n        \"7\": [\"whitesmoke\", \"black\"],\n        \"8\": [\"seagreen\", \"darkseagreen\"]\n    },\n    \"derpibooru\": {\n        \"-1\": [\"red\", \"maroon\"],\n        \"0\": [\"#60d160\", \"#3d9d3d\"],\n        \"1\": [\"#fff956\", \"#918e2e\"],\n        \"3\": [\"#fd9961\", \"#a14c2e\"],\n        \"4\": [\"#cf5bbe\", \"#6c1e6c\"],\n        \"5\": [\"#3c8ad9\", \"#1e5e93\"],\n        \"6\": [\"#a6a6a6\", \"#555555\"],\n        \"7\": [\"#47abc1\", \"#1f6c7c\"],\n        \"8\": [\"#7871d0\", \"#392f7d\"],\n        \"9\": [\"#df3647\", \"#8e1c2b\"],\n        \"10\": [\"#c98f2b\", \"#7b470e\"],\n        \"11\": [\"#e87ebe\", \"#a83583\"]\n    }\n}",
    "tac_refreshTempFiles": "Refresh TAC temp files",
    "style_vars_enabled": true,
    "style_vars_random": false,
    "style_vars_hires": false,
    "style_vars_linebreaks": true,
    "style_vars_info": true,
    "polotno_api_key": "bHEpG9Rp0Nq9XrLcwFNu",
    "canvas_editor_default_width": 1024,
    "canvas_editor_default_height": 1024,
    "sd_image_editor_outdir": "outputs\\sd-image-editor",
    "arh_javascript_aspect_ratio_show": true,
    "arh_javascript_aspect_ratio": "1:1, 16:9, 8:5, 4:3, 3:2, 7:5,  21:9, 19:9, 3:4, 2:3, 5:7, 9:16, 9:21, 5:8, 9:5",
    "arh_ui_javascript_selection_method": "Aspect Ratios Dropdown",
    "arh_hide_accordion_by_default": true,
    "arh_expand_by_default": false,
    "arh_ui_component_order_key": "MaxDimensionScaler, MinDimensionScaler, PredefinedAspectRatioButtons, PredefinedPercentageButtons",
    "arh_show_max_width_or_height": false,
    "arh_max_width_or_height": 1024.0,
    "arh_show_min_width_or_height": false,
    "arh_min_width_or_height": 1024.0,
    "arh_show_predefined_aspect_ratios": false,
    "arh_predefined_aspect_ratio_use_max_dim": false,
    "arh_predefined_aspect_ratios": "1:1, 4:3, 16:9, 9:16, 21:9",
    "arh_show_predefined_percentages": false,
    "arh_predefined_percentages": "25, 50, 75, 125, 150, 175, 200",
    "arh_predefined_percentages_display_key": "Incremental/decremental percentage (-50%, +50%)",
    "inpaint_anything_save_folder": "inpaint-anything",
    "inpaint_anything_sam_oncpu": false,
    "inpaint_anything_offline_inpainting": false,
    "inpaint_anything_padding_fill": 127,
    "inpain_anything_sam_models_dir": "",
    "inpaint_background_enabled": true,
    "inpaint_background_u2net_location": "",
    "inpaint_background_show_image_under_mask": true,
    "inpaint_background_mask_brush_color": "#ffffff",
    "mm_max_models": 8,
    "mm_debugs": [
        "elemental merge"
    ],
    "mm_save_model": [
        "safetensors",
        "fp16",
        "prune",
        "overwrite"
    ],
    "mm_save_model_filename": "modelmixer-[hash]",
    "mm_use_extra_elements": true,
    "mm_use_old_finetune": false,
    "mm_use_unet_partial_update": true,
    "mm_laplib": "lap",
    "mm_use_fast_weighted_sum": true,
    "mm_use_precalculate_hash": false,
    "mm_use_model_dl": false,
    "mm_default_config_lock": false,
    "mm_civitai_api_key": "",
    "mm_use_txt2img_only": false,
    "mm_use_safe_open": false,
    "model_toolkit_fix_clip": false,
    "model_toolkit_autoprune": false,
    "sd_model_checkpoint": "(SDXL_2024-03-31 - Davematthews Person - Awesome Mix (Model Mixer) - Tops) x (1 - alpha_0) + (SDXL_SD-Checkpoints-Fast-Backup_z-2023-12-08-Dave Matthews - GoodMix-Uses-Other-12-8-As-Base - This looked nice on XYZ at cfg 6 and 8) x alpha_0(0.8).safetensors [852aed88f7]",
    "sd_checkpoint_hash": "852aed88f72dd2f7330a710067478e82a1c74c7da380985c5e6df9470326a151",
    "disabled_extensions": [],
    "disable_all_extensions": "none",
    "restore_config_state_file": "",
    "outdir_samples": "E:\\Stable Diffusion Images",
    "outdir_txt2img_samples": "E:\\Stable Diffusion Images",
    "outdir_img2img_samples": "E:\\Stable Diffusion Images",
    "outdir_extras_samples": "E:\\Stable Diffusion Images",
    "outdir_grids": "E:\\Stable Diffusion Images",
    "outdir_txt2img_grids": "E:\\Stable Diffusion Images",
    "outdir_img2img_grids": "E:\\Stable Diffusion Images",
    "outdir_save": "E:\\Stable Diffusion Images",
    "outdir_init_images": "E:\\Stable Diffusion Images",
    "samples_save": true,
    "samples_format": "png",
    "samples_filename_pattern": "[number]-[prompt_spaces]",
    "save_images_add_number": true,
    "save_images_replace_action": "Replace",
    "grid_save": false,
    "grid_format": "png",
    "grid_extended_filename": false,
    "grid_only_if_multiple": true,
    "grid_prevent_empty_spots": false,
    "grid_zip_filename_pattern": "",
    "n_rows": -1,
    "font": "",
    "grid_text_active_color": "#000000",
    "grid_text_inactive_color": "#999999",
    "grid_background_color": "#ffffff",
    "save_images_before_face_restoration": false,
    "save_images_before_highres_fix": false,
    "save_images_before_color_correction": false,
    "save_mask": false,
    "save_mask_composite": false,
    "jpeg_quality": 80,
    "webp_lossless": false,
    "export_for_4chan": false,
    "img_downscale_threshold": 4.0,
    "target_side_length": 4000.0,
    "img_max_size_mp": 200.0,
    "use_original_name_batch": false,
    "use_upscaler_name_as_suffix": false,
    "save_selected_only": false,
    "save_init_img": false,
    "temp_dir": "",
    "clean_temp_dir_at_start": false,
    "save_incomplete_images": false,
    "notification_audio": false,
    "notification_volume": 100,
    "save_to_dirs": true,
    "grid_save_to_dirs": false,
    "use_save_to_dirs_for_ui": false,
    "directories_filename_pattern": "[date]",
    "directories_max_prompt_words": 8,
    "auto_backcompat": true,
    "use_old_emphasis_implementation": false,
    "use_old_karras_scheduler_sigmas": false,
    "no_dpmpp_sde_batch_determinism": false,
    "use_old_hires_fix_width_height": false,
    "dont_fix_second_order_samplers_schedule": false,
    "hires_fix_use_firstpass_conds": false,
    "use_old_scheduling": false,
    "use_downcasted_alpha_bar": false,
    "refiner_switch_by_sample_steps": false,
    "lora_functional": false,
    "extra_networks_show_hidden_directories": true,
    "extra_networks_dir_button_function": false,
    "extra_networks_hidden_models": "When searched",
    "extra_networks_default_multiplier": 1,
    "extra_networks_card_width": 0.0,
    "extra_networks_card_height": 0.0,
    "extra_networks_card_text_scale": 1,
    "extra_networks_card_show_desc": true,
    "extra_networks_card_description_is_html": false,
    "extra_networks_card_order_field": "Date Created",
    "extra_networks_card_order": "Descending",
    "extra_networks_tree_view_style": "Dirs",
    "extra_networks_tree_view_default_enabled": true,
    "extra_networks_tree_view_default_width": 180.0,
    "extra_networks_add_text_separator": " ",
    "ui_extra_networks_tab_reorder": "",
    "textual_inversion_print_at_load": false,
    "textual_inversion_add_hashes_to_infotext": true,
    "sd_hypernetwork": "None",
    "sd_lora": "None",
    "lora_preferred_name": "Alias from file",
    "lora_add_hashes_to_infotext": true,
    "lora_show_all": true,
    "lora_hide_unknown_for_versions": [],
    "lora_in_memory_limit": 0,
    "lora_not_found_warning_console": false,
    "lora_not_found_gradio_warning": false,
    "cross_attention_optimization": "sdp - scaled dot product",
    "s_min_uncond": 0,
    "token_merging_ratio": 0,
    "token_merging_ratio_img2img": 0,
    "token_merging_ratio_hr": 0,
    "pad_cond_uncond": false,
    "pad_cond_uncond_v0": false,
    "persistent_cond_cache": true,
    "batch_cond_uncond": true,
    "fp8_storage": "Disable",
    "cache_fp16_weight": false,
    "hide_samplers": [],
    "eta_ddim": 0,
    "eta_ancestral": 1,
    "ddim_discretize": "uniform",
    "s_churn": 0,
    "s_tmin": 0,
    "s_tmax": 0,
    "s_noise": 1,
    "sigma_min": 0.0,
    "sigma_max": 0.0,
    "rho": 0.0,
    "eta_noise_seed_delta": 0,
    "always_discard_next_to_last_sigma": false,
    "sgm_noise_multiplier": false,
    "uni_pc_variant": "bh1",
    "uni_pc_skip_type": "time_uniform",
    "uni_pc_order": 3,
    "uni_pc_lower_order_final": true,
    "sd_noise_schedule": "Default",
    "sd_checkpoints_limit": 1,
    "sd_checkpoints_keep_in_cpu": true,
    "sd_checkpoint_cache": 0,
    "sd_unet": "Automatic",
    "enable_quantization": false,
    "emphasis": "Original",
    "enable_batch_seeds": true,
    "comma_padding_backtrack": 20,
    "CLIP_stop_at_last_layers": 1,
    "upcast_attn": false,
    "randn_source": "GPU",
    "tiling": false,
    "hires_fix_refiner_pass": "second pass",
    "enable_prompt_comments": true,
    "sdxl_crop_top": 0.0,
    "sdxl_crop_left": 0.0,
    "sdxl_refiner_low_aesthetic_score": 2.5,
    "sdxl_refiner_high_aesthetic_score": 6.0,
    "sd_vae_checkpoint_cache": 0,
    "sd_vae": "sdxl_vae.safetensors",
    "sd_vae_overrides_per_model_preferences": true,
    "auto_vae_precision_bfloat16": false,
    "auto_vae_precision": true,
    "sd_vae_encode_method": "Full",
    "sd_vae_decode_method": "Full",
    "inpainting_mask_weight": 1,
    "initial_noise_multiplier": 1,
    "img2img_extra_noise": 0,
    "img2img_color_correction": false,
    "img2img_fix_steps": false,
    "img2img_background_color": "#ffffff",
    "img2img_editor_height": 720,
    "img2img_sketch_default_brush_color": "#ffffff",
    "img2img_inpaint_mask_brush_color": "#ffffff",
    "img2img_inpaint_sketch_default_brush_color": "#ffffff",
    "return_mask": false,
    "return_mask_composite": false,
    "img2img_batch_show_results_limit": 32,
    "overlay_inpaint": true,
    "return_grid": false,
    "do_not_show_images": false,
    "js_modal_lightbox": true,
    "js_modal_lightbox_initially_zoomed": true,
    "js_modal_lightbox_gamepad": false,
    "js_modal_lightbox_gamepad_repeat": 250.0,
    "sd_webui_modal_lightbox_icon_opacity": 1,
    "sd_webui_modal_lightbox_toolbar_opacity": 0.9,
    "gallery_height": "",
    "open_dir_button_choice": "Subdirectory",
    "enable_pnginfo": true,
    "save_txt": false,
    "add_model_name_to_info": true,
    "add_model_hash_to_info": true,
    "add_vae_name_to_info": true,
    "add_vae_hash_to_info": true,
    "add_user_name_to_info": false,
    "add_version_to_infotext": true,
    "disable_weights_auto_swap": true,
    "infotext_skip_pasting": [],
    "infotext_styles": "Apply if any",
    "show_progressbar": true,
    "live_previews_enable": true,
    "live_previews_image_format": "png",
    "show_progress_grid": true,
    "show_progress_every_n_steps": -1,
    "show_progress_type": "Full",
    "live_preview_allow_lowvram_full": false,
    "live_preview_content": "Prompt",
    "live_preview_refresh_period": 1000.0,
    "live_preview_fast_interrupt": true,
    "js_live_preview_in_modal_lightbox": true,
    "keyedit_precision_attention": 0.1,
    "keyedit_precision_extra": 0.05,
    "keyedit_delimiters": ".,\\/!?%^*;:{}=`~() ",
    "keyedit_delimiters_whitespace": [
        "Tab",
        "Carriage Return",
        "Line Feed"
    ],
    "keyedit_move": true,
    "disable_token_counters": false,
    "include_styles_into_token_counters": true,
    "extra_options_txt2img": [],
    "extra_options_img2img": [],
    "extra_options_cols": 1,
    "extra_options_accordion": false,
    "compact_prompt_box": false,
    "samplers_in_dropdown": true,
    "dimensions_and_batch_together": true,
    "sd_checkpoint_dropdown_use_short": false,
    "hires_fix_show_sampler": false,
    "hires_fix_show_prompts": false,
    "txt2img_settings_accordion": false,
    "img2img_settings_accordion": false,
    "interrupt_after_current": true,
    "localization": "None",
    "quicksettings_list": [
        "sd_model_checkpoint",
        "sd_vae",
        "CLIP_stop_at_last_layers"
    ],
    "ui_tab_order": [],
    "hidden_tabs": [
        "Train"
    ],
    "ui_reorder_list": [],
    "gradio_theme": "Default",
    "gradio_themes_cache": true,
    "show_progress_in_title": true,
    "send_seed": true,
    "send_size": true,
    "enable_reloading_ui_scripts": false,
    "api_enable_requests": true,
    "api_forbid_local_requests": true,
    "api_useragent": "",
    "prioritized_callbacks_app_started": [],
    "prioritized_callbacks_model_loaded": [],
    "prioritized_callbacks_ui_tabs": [],
    "prioritized_callbacks_ui_settings": [],
    "prioritized_callbacks_before_image_saved": [],
    "prioritized_callbacks_after_component": [],
    "prioritized_callbacks_infotext_pasted": [],
    "prioritized_callbacks_script_unloaded": [],
    "prioritized_callbacks_before_ui": [],
    "prioritized_callbacks_on_reload": [],
    "prioritized_callbacks_list_optimizers": [],
    "prioritized_callbacks_before_token_counter": [],
    "prioritized_callbacks_script_before_process": [],
    "prioritized_callbacks_script_process": [],
    "prioritized_callbacks_script_before_process_batch": [],
    "prioritized_callbacks_script_process_batch": [],
    "prioritized_callbacks_script_postprocess": [],
    "prioritized_callbacks_script_postprocess_batch": [],
    "prioritized_callbacks_script_post_sample": [],
    "prioritized_callbacks_script_on_mask_blend": [],
    "prioritized_callbacks_script_postprocess_image": [],
    "prioritized_callbacks_script_postprocess_maskoverlay": [],
    "prioritized_callbacks_script_after_component": [],
    "auto_launch_browser": "Local",
    "enable_console_prompts": false,
    "show_warnings": false,
    "show_gradio_deprecation_warnings": false,
    "memmon_poll_rate": 8,
    "samples_log_stdout": false,
    "multiple_tqdm": true,
    "enable_upscale_progressbar": true,
    "print_hypernet_extra": false,
    "list_hidden_files": true,
    "disable_mmap_load_safetensors": false,
    "hide_ldm_prints": true,
    "dump_stacks_on_signal": false,
    "face_restoration": false,
    "face_restoration_model": "CodeFormer",
    "code_former_weight": 0.5,
    "face_restoration_unload": false,
    "postprocessing_enable_in_main_ui": [],
    "postprocessing_disable_in_extras": [],
    "postprocessing_operation_order": [],
    "upscaling_max_images_in_cache": 5,
    "postprocessing_existing_caption_action": "Ignore",
    "ESRGAN_tile": 192,
    "ESRGAN_tile_overlap": 8,
    "realesrgan_enabled_models": [
        "R-ESRGAN 4x+",
        "R-ESRGAN 4x+ Anime6B"
    ],
    "dat_enabled_models": [
        "DAT x2",
        "DAT x3",
        "DAT x4"
    ],
    "DAT_tile": 192,
    "DAT_tile_overlap": 8,
    "set_scale_by_when_changing_upscaler": false,
    "unload_models_when_training": false,
    "pin_memory": false,
    "save_optimizer_state": false,
    "save_training_settings_to_txt": true,
    "dataset_filename_word_regex": "",
    "dataset_filename_join_string": " ",
    "training_image_repeats_per_epoch": 1,
    "training_write_csv_every": 500.0,
    "training_xattention_optimizations": false,
    "training_enable_tensorboard": false,
    "training_tensorboard_save_images": false,
    "training_tensorboard_flush_every": 120.0,
    "canvas_hotkey_zoom": "Alt",
    "canvas_hotkey_adjust": "Ctrl",
    "canvas_hotkey_shrink_brush": "Q",
    "canvas_hotkey_grow_brush": "W",
    "canvas_hotkey_move": "F",
    "canvas_hotkey_fullscreen": "S",
    "canvas_hotkey_reset": "R",
    "canvas_hotkey_overlap": "O",
    "canvas_show_tooltip": true,
    "canvas_auto_expand": true,
    "canvas_blur_prompt": false,
    "canvas_disabled_functions": [
        "Overlap"
    ],
    "interrogate_keep_models_in_memory": false,
    "interrogate_return_ranks": false,
    "interrogate_clip_num_beams": 1,
    "interrogate_clip_min_length": 24,
    "interrogate_clip_max_length": 48,
    "interrogate_clip_dict_limit": 1500.0,
    "interrogate_clip_skip_categories": [],
    "interrogate_deepbooru_score_threshold": 0.5,
    "deepbooru_sort_alpha": true,
    "deepbooru_use_spaces": true,
    "deepbooru_escape": true,
    "deepbooru_filter_tags": "",
    "state": [],
    "state_txt2img": [],
    "state_img2img": [],
    "state_extensions": [],
    "animatediff_model_path": "",
    "animatediff_default_save_formats": [
        "GIF",
        "PNG"
    ],
    "animatediff_save_to_custom": true,
    "animatediff_frame_extract_path": "",
    "animatediff_frame_extract_remove": false,
    "animatediff_default_frame_extract_method": "ffmpeg",
    "animatediff_optimize_gif_palette": false,
    "animatediff_optimize_gif_gifsicle": false,
    "animatediff_mp4_crf": 23,
    "animatediff_mp4_preset": "",
    "animatediff_mp4_tune": "",
    "animatediff_webp_quality": 80,
    "animatediff_webp_lossless": false,
    "animatediff_s3_enable": false,
    "animatediff_s3_host": "",
    "animatediff_s3_port": "",
    "animatediff_s3_access_key": "",
    "animatediff_s3_secret_key": "",
    "animatediff_s3_storge_bucket": "",
    "prioritized_callbacks_cfg_denoiser": [],
    "prioritized_callbacks_script_postprocess_batch_list": [],
    "state_ui": [
        "Reset Button",
        "Import Button",
        "Export Button"
    ],
    "replacer_use_first_positive_prompt_from_examples": true,
    "replacer_use_first_negative_prompt_from_examples": true,
    "replacer_hide_segment_anything_accordions": true,
    "replacer_hide_animatediff_accordions": false,
    "replacer_hide_replacer_script": false,
    "replacer_always_unload_models": "Automatic",
    "replacer_use_cpu_for_detection": false,
    "replacer_fast_dilation": true,
    "replacer_mask_color": "#84FF9A",
    "replacer_detection_prompt_examples": "",
    "replacer_avoidance_prompt_examples": "",
    "replacer_positive_prompt_examples": "",
    "replacer_negative_prompt_examples": "",
    "replacer_hf_positive_prompt_suffix_examples": "",
    "replacer_examples_per_page_for_detection_prompt": 10,
    "replacer_examples_per_page_for_avoidance_prompt": 10,
    "replacer_examples_per_page_for_positive_prompt": 10,
    "replacer_examples_per_page_for_negative_prompt": 10,
    "replacer_save_dir": "E:\\Stable Diffusion Images\\Replacer",
    "sam_use_local_groundingdino": false,
    "replacer_default_extra_includes": [
        "script"
    ],
    "mm_dare_merger_random_seed": 1324
},
"Startup": {
    "total": 9.000435829162598,
    "records": {
        "initial startup": 0.013000011444091797,
        "prepare environment/checks": 0.004000425338745117,
        "prepare environment/git version info": 0.023999929428100586,
        "prepare environment/torch GPU test": 0.0019998550415039062,
        "prepare environment/clone repositores": 0.07299971580505371,
        "prepare environment/run extensions installers/1-sd-dynamic-prompts": 0.09300017356872559,
        "prepare environment/run extensions installers/3sd-webui-controlnet": 0.0840003490447998,
        "prepare environment/run extensions installers/a2-adetailer": 0.08500027656555176,
        "prepare environment/run extensions installers/b1111-sd-webui-tagcomplete": 0.0,
        "prepare environment/run extensions installers/sd-webui-aspect-ratio-helper-tweaked": 0.0,
        "prepare environment/run extensions installers/sd-webui-controlnet-evaclip": 0.0,
        "prepare environment/run extensions installers/sd-webui-model-mixer": 0.0,
        "prepare environment/run extensions installers/sd-webui-stablesr": 0.0,
        "prepare environment/run extensions installers/sd-webui-zmultidiffusion-upscaler-for-automatic1111": 0.0,
        "prepare environment/run extensions installers": 0.26200079917907715,
        "prepare environment": 0.3820009231567383,
        "launcher": 0.0010001659393310547,
        "import torch": 2.2980003356933594,
        "import gradio": 0.43799924850463867,
        "setup paths": 0.5269994735717773,
        "import ldm": 0.0029997825622558594,
        "import sgm": 0.0,
        "initialize shared": 0.13100004196166992,
        "other imports": 0.27499985694885254,
        "opts onchange": 0.0,
        "setup SD model": 0.0,
        "setup codeformer": 0.0010001659393310547,
        "setup gfpgan": 0.09599995613098145,
        "set samplers": 0.0,
        "list extensions": 0.0010001659393310547,
        "restore config state file": 0.0,
        "list SD models": 0.38299989700317383,
        "list localizations": 0.0009999275207519531,
        "load scripts/custom_code.py": 0.0030002593994140625,
        "load scripts/img2imgalt.py": 0.0,
        "load scripts/loopback.py": 0.0,
        "load scripts/outpainting_mk_2.py": 0.0,
        "load scripts/poor_mans_outpainting.py": 0.0,
        "load scripts/postprocessing_codeformer.py": 0.0,
        "load scripts/postprocessing_gfpgan.py": 0.0,
        "load scripts/postprocessing_upscale.py": 0.0,
        "load scripts/prompt_matrix.py": 0.0009996891021728516,
        "load scripts/prompts_from_file.py": 0.0,
        "load scripts/sd_upscale.py": 0.0,
        "load scripts/xyz_grid.py": 0.0010008811950683594,
        "load scripts/ldsr_model.py": 0.2649991512298584,
        "load scripts/lora_script.py": 0.10500001907348633,
        "load scripts/scunet_model.py": 0.009999990463256836,
        "load scripts/swinir_model.py": 0.008999824523925781,
        "load scripts/hotkey_config.py": 0.0,
        "load scripts/extra_options_section.py": 0.0,
        "load scripts/hypertile_script.py": 0.02200031280517578,
        "load scripts/hypertile_xyz.py": 0.0,
        "load scripts/postprocessing_autosized_crop.py": 0.0,
        "load scripts/postprocessing_caption.py": 0.0010001659393310547,
        "load scripts/postprocessing_create_flipped_copies.py": 0.0,
        "load scripts/postprocessing_focal_crop.py": 0.0,
        "load scripts/postprocessing_split_oversized.py": 0.0010001659393310547,
        "load scripts/soft_inpainting.py": 0.0,
        "load scripts/dynamic_prompting.py": 0.018999814987182617,
        "load scripts/adapter.py": 0.0,
        "load scripts/api.py": 0.11600017547607422,
        "load scripts/batch_hijack.py": 0.00099945068359375,
        "load scripts/cldm.py": 0.0,
        "load scripts/controlnet.py": 0.08299994468688965,
        "load scripts/controlnet_diffusers.py": 0.0,
        "load scripts/controlnet_lllite.py": 0.0,
        "load scripts/controlnet_lora.py": 0.0,
        "load scripts/controlnet_model_guess.py": 0.0,
        "load scripts/controlnet_sparsectrl.py": 0.0010004043579101562,
        "load scripts/controlnet_version.py": 0.0,
        "load scripts/enums.py": 0.0,
        "load scripts/external_code.py": 0.0,
        "load scripts/global_state.py": 0.0010006427764892578,
        "load scripts/hook.py": 0.0,
        "load scripts/infotext.py": 0.0,
        "load scripts/logging.py": 0.0,
        "load scripts/lvminthin.py": 0.0009996891021728516,
        "load scripts/movie2movie.py": 0.0,
        "load scripts/supported_preprocessor.py": 0.0,
        "load scripts/utils.py": 0.0009996891021728516,
        "load scripts/xyz_grid_support.py": 0.0,
        "load scripts/!adetailer.py": 0.5944359302520752,
        "load scripts/model_keyword_support.py": 0.0010006427764892578,
        "load scripts/shared_paths.py": 0.0010008811950683594,
        "load scripts/tag_autocomplete_helper.py": 0.09199833869934082,
        "load scripts/tag_frequency_db.py": 0.0,
        "load scripts/sd_webui_aspect_ratio_helper.py": 0.03200030326843262,
        "load scripts/preprocessor_evaclip.py": 0.05300188064575195,
        "load scripts/model_mixer.py": 0.06799840927124023,
        "load scripts/patches.py": 0.0,
        "load scripts/vxa.py": 0.0,
        "load scripts/stablesr.py": 0.0010001659393310547,
        "load scripts/tilediffusion.py": 0.004000186920166016,
        "load scripts/tileglobal.py": 0.0009992122650146484,
        "load scripts/tilevae.py": 0.0,
        "load scripts/comments.py": 0.011999845504760742,
        "load scripts/refiner.py": 0.0,
        "load scripts/sampler.py": 0.0,
        "load scripts/seed.py": 0.0,
        "load scripts": 1.5004360675811768,
        "load upscalers": 0.002000570297241211,
        "refresh VAE": 0.13199996948242188,
        "refresh textual inversion templates": 0.0,
        "scripts list_optimizers": 0.0009996891021728516,
        "scripts list_unets": 0.0,
        "reload hypernetworks": 0.0,
        "initialize extra networks": 0.012999773025512695,
        "scripts before_ui_callback": 0.002000093460083008,
        "create ui": 2.2860002517700195,
        "gradio launch": 0.4310009479522705,
        "add APIs": 0.003999233245849609,
        "app_started_callback/lora_script.py": 0.0,
        "app_started_callback/api.py": 0.002001523971557617,
        "app_started_callback/!adetailer.py": 0.0,
        "app_started_callback/tag_autocomplete_helper.py": 0.0019981861114501953,
        "app_started_callback/model_mixer.py": 0.09199976921081543,
        "app_started_callback": 0.09599947929382324
    }
},
"Packages": [
    "absl-py==2.1.0",
    "accelerate==0.21.0",
    "addict==2.4.0",
    "aenum==3.1.15",
    "aiofiles==23.2.1",
    "aiohttp==3.9.5",
    "aiosignal==1.3.1",
    "albumentations==1.4.3",
    "altair==5.3.0",
    "antlr4-python3-runtime==4.9.3",
    "anyio==3.7.1",
    "async-timeout==4.0.3",
    "asyncer==0.0.7",
    "attrs==23.2.0",
    "av==12.0.0",
    "blendmodes==2022",
    "certifi==2024.2.2",
    "cffi==1.16.0",
    "chardet==5.2.0",
    "charset-normalizer==3.3.2",
    "clean-fid==0.1.35",
    "click==8.1.7",
    "clip==1.0",
    "colorama==0.4.6",
    "coloredlogs==15.0.1",
    "colorlog==6.8.2",
    "contourpy==1.2.1",
    "cssselect2==0.7.0",
    "cycler==0.12.1",
    "cython==3.0.10",
    "decorator==4.0.11",
    "defusedxml==0.7.1",
    "deprecation==2.1.0",
    "depth-anything==2024.1.22.0",
    "diffusers==0.27.0.dev0",
    "diskcache==5.6.3",
    "dsine==2024.3.23",
    "dynamicprompts==0.29.0",
    "easydict==1.13",
    "einops==0.4.1",
    "embreex==2.17.7.post4",
    "exceptiongroup==1.2.1",
    "facexlib==0.3.0",
    "fastapi==0.94.0",
    "ffmpy==0.3.2",
    "filelock==3.13.4",
    "filterpy==1.4.5",
    "flatbuffers==24.3.25",
    "fonttools==4.51.0",
    "frozenlist==1.4.1",
    "fsspec==2024.3.1",
    "ftfy==6.2.0",
    "fvcore==0.1.5.post20221221",
    "geffnet==1.0.2",
    "gitdb==4.0.11",
    "gitpython==3.1.32",
    "glob2==0.5",
    "gradio-client==0.5.0",
    "gradio==3.41.2",
    "h11==0.12.0",
    "handrefinerportable==2024.2.12.0",
    "httpcore==0.15.0",
    "httpx==0.24.1",
    "huggingface-hub==0.22.2",
    "humanfriendly==10.0",
    "idna==3.7",
    "imageio-ffmpeg==0.4.9",
    "imageio==2.34.1",
    "importlib-metadata==7.1.0",
    "importlib-resources==5.12.0",
    "inflection==0.5.1",
    "insightface==0.7.3",
    "iopath==0.1.9",
    "jax==0.4.26",
    "jinja2==3.1.3",
    "joblib==1.4.0",
    "jsonmerge==1.8.0",
    "jsonschema-specifications==2023.12.1",
    "jsonschema==4.21.1",
    "kiwisolver==1.4.5",
    "kornia==0.6.7",
    "lark==1.1.2",
    "lazy-loader==0.4",
    "lightning-utilities==0.11.2",
    "llvmlite==0.42.0",
    "loguru==0.7.2",
    "lxml==5.2.1",
    "mapbox-earcut==1.0.1",
    "markdown-it-py==3.0.0",
    "markupsafe==2.1.5",
    "matplotlib==3.8.4",
    "mdurl==0.1.2",
    "mediapipe==0.10.11",
    "ml-dtypes==0.4.0",
    "moviepy==0.2.3.2",
    "mpmath==1.3.0",
    "multidict==6.0.5",
    "natsort==8.4.0",
    "networkx==3.3",
    "numba==0.59.1",
    "numpy==1.26.2",
    "omegaconf==2.2.3",
    "onnx==1.16.0",
    "onnxruntime-gpu==1.17.1",
    "open-clip-torch==2.20.0",
    "opencv-contrib-python==4.9.0.80",
    "opencv-python-headless==4.9.0.80",
    "opencv-python==4.9.0.80",
    "openpiv==0.25.3",
    "opt-einsum==3.3.0",
    "orjson==3.10.1",
    "packaging==24.0",
    "pandas==2.2.2",
    "piexif==1.1.3",
    "pillow-avif-plugin==1.4.3",
    "pillow==9.5.0",
    "pip==23.0.1",
    "platformdirs==4.2.1",
    "pooch==1.8.1",
    "portalocker==2.8.2",
    "prettytable==3.10.0",
    "protobuf==3.20.3",
    "psutil==5.9.5",
    "py-cpuinfo==9.0.0",
    "pycocotools==2.0.7",
    "pycollada==0.8",
    "pycparser==2.22",
    "pydantic==1.10.15",
    "pydub==0.25.1",
    "pygments==2.17.2",
    "pymatting==1.1.12",
    "pyparsing==3.1.2",
    "pyreadline3==3.4.1",
    "python-dateutil==2.9.0.post0",
    "python-multipart==0.0.9",
    "pytorch-lightning==1.9.4",
    "pytz==2024.1",
    "pywavelets==1.6.0",
    "pywin32==306",
    "pyyaml==6.0.1",
    "referencing==0.35.0",
    "regex==2024.4.16",
    "rembg==2.0.38",
    "reportlab==4.2.0",
    "requests==2.31.0",
    "resize-right==0.0.2",
    "rich==13.7.1",
    "rpds-py==0.18.0",
    "rtree==1.2.0",
    "safetensors==0.4.2",
    "scikit-image==0.21.0",
    "scikit-learn==1.4.2",
    "scipy==1.13.0",
    "sdwi2iextender==0.1.3",
    "seaborn==0.13.2",
    "segment-anything==1.0",
    "semantic-version==2.10.0",
    "send2trash==1.8.3",
    "sentencepiece==0.2.0",
    "setuptools==65.5.0",
    "shapely==2.0.4",
    "six==1.16.0",
    "smmap==5.0.1",
    "sniffio==1.3.1",
    "sounddevice==0.4.6",
    "spandrel==0.1.6",
    "starlette==0.26.1",
    "supervision==0.20.0",
    "svg.path==6.3",
    "svglib==1.5.1",
    "sympy==1.12",
    "tabulate==0.9.0",
    "termcolor==2.4.0",
    "thop==0.1.1.post2209072238",
    "threadpoolctl==3.4.0",
    "tifffile==2024.4.24",
    "timm==0.9.16",
    "tinycss2==1.3.0",
    "tokenizers==0.13.3",
    "tomesd==0.1.3",
    "tomli==2.0.1",
    "toolz==0.12.1",
    "torch==2.1.2+cu121",
    "torchdiffeq==0.2.3",
    "torchmetrics==1.3.2",
    "torchsde==0.2.6",
    "torchvision==0.16.2+cu121",
    "tqdm==4.66.2",
    "trampoline==0.1.2",
    "transformers==4.30.2",
    "trimesh==4.3.1",
    "typing-extensions==4.11.0",
    "tzdata==2024.1",
    "ultralytics==8.2.4",
    "urllib3==2.2.1",
    "uvicorn==0.29.0",
    "vhacdx==0.0.6",
    "wcwidth==0.2.13",
    "webencodings==0.5.1",
    "websockets==11.0.3",
    "win32-setctime==1.1.0",
    "xatlas==0.0.9",
    "xxhash==3.4.1",
    "yacs==0.1.8",
    "yapf==0.40.2",
    "yarl==1.9.4",
    "zipp==3.18.1"
]

}

Should also note that I have manually added the A1111 PRs mentioned here, which increase performance (to Forge levels). These are by a developer of Forge and a contributor to A1111, but they aren't in the official repos (as of yet, at least). I suppose there's a chance these changes could be causing this issue, although I was running the prior commit ( 4c43307 per reflog) without any issues. Link to the PR set / discussion on the A1111 forum: AUTOMATIC1111/stable-diffusion-webui#15821

Planning to roll back now, since it was working fine prior to updating tonight to a509927.

Error when attempting to merge more than 6 models: "safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer"

Just noticed you had increased the maximum number of models you can merge to 14(!). For fun I decided to test this, and it seems that, despite the menu offering the option to pick the extra models, merging fails after stage 6, which I think was the prior limit.

I'm currently on version 1.7 (one back) of A1111 due to an unrelated problem in case that matters.
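For what it's worth, `MetadataIncompleteBuffer` from `safe_open` usually points at a truncated or damaged checkpoint file rather than a model-count limit, so it may be worth validating the headers of the checkpoints involved before assuming a stage-6 cap. Below is a minimal, hypothetical sketch (not part of the extension) that checks the safetensors header layout directly — an 8-byte little-endian length followed by that many bytes of JSON metadata; the directory path is just the one from my logs:

```python
import json
import struct
from pathlib import Path

def check_safetensors(path):
    """Return None if the header parses cleanly, else a short error string."""
    try:
        with open(path, "rb") as f:
            # First 8 bytes: unsigned 64-bit little-endian header length.
            (header_len,) = struct.unpack("<Q", f.read(8))
            header = f.read(header_len)
            if len(header) < header_len:
                # Fewer bytes than the header claims -> truncated file,
                # which is what MetadataIncompleteBuffer reports.
                return "truncated header (MetadataIncompleteBuffer)"
            json.loads(header)  # metadata must be valid JSON
    except Exception as e:
        return str(e)
    return None

if __name__ == "__main__":
    # Illustrative path taken from the session log below; adjust as needed.
    for p in Path(r"e:\Stable Diffusion Checkpoints\SDXL").glob("*.safetensors"):
        err = check_safetensors(p)
        if err:
            print(f"BAD: {p.name}: {err}")
```

Running this over the checkpoint folder should flag any file whose header is incomplete, which would explain the merge failing at the same stage each time.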

Session log from a couple of failed merges, followed by a successful run after limiting the number of models to merge:

Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 2.6s (list SD models: 0.5s, load scripts: 0.4s, refresh VAE: 0.1s, create ui: 0.9s, gradio launch: 0.3s, app_started_callback: 0.4s).
debugs =  ['elemental merge']
use_extra_elements =  True
 - mm_max_models =  14
config hash =  eff83a3f885d1dda68359f067a0fb85a8ae684680a2d53c0b2c8cb0e1563c8ec
  - mm_use [True, True, True, True, True, True, True, True, True, True, True, True, False, False]
  - model_a SDXL\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00003600.safetensors [49500a301c]
  - base_model None
  - max_models 14
  - models ['SDXL\\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00001500.safetensors', 'SDXL\\2024-03-31 - MJSpooky Artstyle - 21img - 2folders - 9e-6 - base model-step00002000.safetensors [cb05905470]', 'SDXL\\2024-03-29 - hlmt artstyle - base model - 2folders - no captions - 9e-6-step00001500.safetensors [beeb1870d5]', 'SDXL\\2024-03-19 - Great Dreamy Results - Merge-6steps-2.5cfg - Topnotch-Dreamyvibes-Lightn-4-Sdthr-DM-CB-Plshdoll-mjcrzs.safetensors', 'SDXL\\2024-03-30 - V2 - mjhelmet - mjspooky - base model - 2folders - no captions - 8e-6-step00007200.safetensors', 'SDXL\\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00010800.safetensors', 'SDXL\\z-2024-03-12 - Topnotch mix looks nice - also sdthr - dreamy - mzcrzs - dm - cb - plsh.safetensors', 'SDXL\\2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models.safetensors', 'SDXL\\2024-03-28 - PXR artstyle---- DB kohya - 20img - 1e-5 - No captions - 2b-step00003000.safetensors', 'SDXL\\2024-03-29 - MJspky - Imgs 12 - base model - 1e-5-step00003000.safetensors', 'SDXL\\2024-03-29 - MJhelmet - MJhphp - MBoard - DMBDave - base model - 1e-5-step00002100.safetensors', 'SDXL\\2024-03-16 - OT - Pxr artstyle (no captions now) - 7742-10-18.safetensors']
  - modes ['DARE', 'DARE', 'DARE', 'DARE', 'DARE', 'DARE', 'DARE', 'DARE', 'DARE', 'DARE', 'DARE', 'DARE']
  - calcmodes ['Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal']
  - usembws [[], [], [], [], [], [], [], [], [], [], [], []]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
  - adjust
  - use elemental [False, False, False, False, False, False, False, False, False, False, False, False]
  - elementals ['', '', '', '', '', '', '', '', '', '', '', '']
  - Parse elemental merge...
model_a = SDXL_2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00003600
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00003600.safetensors...
isxl = True , sd2 = False
compact_mode =  False
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00001500.safetensors...
Calculating sha256 for e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00001500.safetensors: adb89bc8b8e00e6e4fd01cc1bffcffd3cdbb4391925b862042fd64e4f67278af
mode = DARE, alpha = 0.5
Stage #1/13: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:23<00:00, 107.89it/s]
Check uninitialized #2/13: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:00<00:00, 628470.63it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-31 - MJSpooky Artstyle - 21img - 2folders - 9e-6 - base model-step00002000.safetensors...
mode = DARE, alpha = 0.5
Stage #3/13: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:21<00:00, 118.02it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-29 - hlmt artstyle - base model - 2folders - no captions - 9e-6-step00001500.safetensors...
mode = DARE, alpha = 0.5
Stage #4/13: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:23<00:00, 106.90it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-19 - Great Dreamy Results - Merge-6steps-2.5cfg - Topnotch-Dreamyvibes-Lightn-4-Sdthr-DM-CB-Plshdoll-mjcrzs.safetensors...
Calculating sha256 for e:\Stable Diffusion Checkpoints\SDXL\2024-03-19 - Great Dreamy Results - Merge-6steps-2.5cfg - Topnotch-Dreamyvibes-Lightn-4-Sdthr-DM-CB-Plshdoll-mjcrzs.safetensors: 11e4684ca4646234bcd053596a852d39de75ae813aad3b79033c50cbd191ba41
mode = DARE, alpha = 0.5
Stage #5/13: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:22<00:00, 112.60it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - V2 - mjhelmet - mjspooky - base model - 2folders - no captions - 8e-6-step00007200.safetensors...
Calculating sha256 for e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - V2 - mjhelmet - mjspooky - base model - 2folders - no captions - 8e-6-step00007200.safetensors: c3ee11ef53c497834fb9917a0ea5a0b6a3c87f87dde72b42ac6632b975b43da8
mode = DARE, alpha = 0.5
Stage #6/13: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:22<00:00, 112.63it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00010800.safetensors...
*** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\scripts.py", line 710, in before_process
        script.before_process(p, *script_args)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3828, in before_process
        theta_1f = open_state_dict(checkpointinfo1)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 5127, in open_state_dict
        f = safe_open(file_path, framework="pt", device="cpu")
    safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

---
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 23/23 [00:03<00:00,  6.92it/s]
Total progress:   8%|██████████████████▉                                                                                                                                                                                                                | 23/276 [00:04<00:48,  5.20it/s]
debugs =  ['elemental merge']██████████▉                                                                                                                                                                                                                | 23/276 [00:04<00:34,  7.44it/s]
use_extra_elements =  True
 - mm_max_models =  14
config hash =  20f645cd0457c74bf332e1655f4fb6959854170de91ca2b87cdd2df287a3774f
  - mm_use [True, True, True, True, True, True, True, True, True, True, False, False, False, False]
  - model_a SDXL\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00003600.safetensors [49500a301c]
  - base_model None
  - max_models 14
  - models ['SDXL\\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00001500.safetensors', 'SDXL\\2024-03-31 - MJSpooky Artstyle - 21img - 2folders - 9e-6 - base model-step00002000.safetensors [cb05905470]', 'SDXL\\2024-03-29 - hlmt artstyle - base model - 2folders - no captions - 9e-6-step00001500.safetensors [beeb1870d5]', 'SDXL\\2024-03-19 - Great Dreamy Results - Merge-6steps-2.5cfg - Topnotch-Dreamyvibes-Lightn-4-Sdthr-DM-CB-Plshdoll-mjcrzs.safetensors', 'SDXL\\2024-03-30 - V2 - mjhelmet - mjspooky - base model - 2folders - no captions - 8e-6-step00006000.safetensors', 'SDXL\\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00010800.safetensors', 'SDXL\\z-2024-03-12 - Topnotch mix looks nice - also sdthr - dreamy - mzcrzs - dm - cb - plsh.safetensors', 'SDXL\\2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models.safetensors', 'SDXL\\2024-03-28 - PXR artstyle---- DB kohya - 20img - 1e-5 - No captions - 2b-step00003000.safetensors', 'SDXL\\2024-03-29 - MJspky - Imgs 12 - base model - 1e-5-step00003000.safetensors']
  - modes ['DARE', 'DARE', 'DARE', 'DARE', 'DARE', 'DARE', 'DARE', 'DARE', 'DARE', 'DARE']
  - calcmodes ['Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal']
  - usembws [[], [], [], [], [], [], [], [], [], []]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
  - adjust
  - use elemental [False, False, False, False, False, False, False, False, False, False]
  - elementals ['', '', '', '', '', '', '', '', '', '']
  - Parse elemental merge...
model_a = SDXL_2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00003600
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00003600.safetensors...
isxl = True , sd2 = False
compact_mode =  False
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00001500.safetensors...
mode = DARE, alpha = 0.5
Stage #1/11: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:21<00:00, 114.71it/s]
Check uninitialized #2/11: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:00<00:00, 502764.52it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-31 - MJSpooky Artstyle - 21img - 2folders - 9e-6 - base model-step00002000.safetensors...
mode = DARE, alpha = 0.5
Stage #3/11: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:21<00:00, 117.51it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-29 - hlmt artstyle - base model - 2folders - no captions - 9e-6-step00001500.safetensors...
mode = DARE, alpha = 0.5
Stage #4/11: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:21<00:00, 115.24it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-19 - Great Dreamy Results - Merge-6steps-2.5cfg - Topnotch-Dreamyvibes-Lightn-4-Sdthr-DM-CB-Plshdoll-mjcrzs.safetensors...
mode = DARE, alpha = 0.5
Stage #5/11: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:22<00:00, 112.81it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - V2 - mjhelmet - mjspooky - base model - 2folders - no captions - 8e-6-step00006000.safetensors...
Calculating sha256 for e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - V2 - mjhelmet - mjspooky - base model - 2folders - no captions - 8e-6-step00006000.safetensors: 59b55490133bfd72bfcee4e38e1b82d2eb9266ab9155127fc563e2dd2dfb557a
mode = DARE, alpha = 0.5
Stage #6/11: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:22<00:00, 113.88it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00010800.safetensors...
*** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\scripts.py", line 710, in before_process
        script.before_process(p, *script_args)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3828, in before_process
        theta_1f = open_state_dict(checkpointinfo1)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 5127, in open_state_dict
        f = safe_open(file_path, framework="pt", device="cpu")
    safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

---
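The `MetadataIncompleteBuffer` error above almost always means the `.safetensors` file is truncated or corrupted (e.g. an interrupted download or copy). As a minimal stdlib-only sketch (not part of the extension), assuming the standard safetensors layout of an 8-byte little-endian header length followed by that many bytes of JSON metadata, a checkpoint can be pre-checked before merging:

```python
import json
import os
import struct

def safetensors_header_ok(path):
    """Return True if the file's safetensors header is complete and parseable."""
    size = os.path.getsize(path)
    if size < 8:
        return False
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        # A declared header longer than the file is exactly the
        # "MetadataIncompleteBuffer" condition: the buffer is truncated.
        if size < 8 + header_len:
            return False
        try:
            json.loads(f.read(header_len))
        except json.JSONDecodeError:
            return False
    return True
```

Re-downloading or re-copying the checkpoint being opened just before the traceback normally resolves it.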
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 23/23 [00:03<00:00,  7.49it/s]
Total progress:  10%|██████████████████████▋                                                                                                                                                                                                            | 23/230 [00:04<00:38,  5.35it/s]
debugs =  ['elemental merge']
use_extra_elements =  True
 - mm_max_models =  14
config hash =  7eb11ce0dab91782e48926351464d797743e4dff6c4e95199ed7f162bc43fd3e
  - mm_use [True, True, True, True, True, False, False, False, False, False, False, False, False, False]
  - model_a SDXL\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00003600.safetensors [49500a301c]
  - base_model None
  - max_models 14
  - models ['SDXL\\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00001500.safetensors', 'SDXL\\2024-03-31 - MJSpooky Artstyle - 21img - 2folders - 9e-6 - base model-step00002000.safetensors [cb05905470]', 'SDXL\\2024-03-29 - hlmt artstyle - base model - 2folders - no captions - 9e-6-step00001500.safetensors [beeb1870d5]', 'SDXL\\2024-03-19 - Great Dreamy Results - Merge-6steps-2.5cfg - Topnotch-Dreamyvibes-Lightn-4-Sdthr-DM-CB-Plshdoll-mjcrzs.safetensors', 'SDXL\\2024-03-30 - V2 - mjhelmet - mjspooky - base model - 2folders - no captions - 8e-6-step00006000.safetensors']
  - modes ['DARE', 'DARE', 'DARE', 'DARE', 'DARE']
  - calcmodes ['Normal', 'Normal', 'Normal', 'Normal', 'Normal']
  - usembws [[], [], [], [], []]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5, 0.5, 0.5, 0.5, 0.5]
  - adjust
  - use elemental [False, False, False, False, False]
  - elementals ['', '', '', '', '']
  - Parse elemental merge...
model_a = SDXL_2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00003600
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00003600.safetensors...
isxl = True , sd2 = False
compact_mode =  False
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - mjhelmet - mjhiphop - mjspooky - mjhlmt artstyle - mboard - base model - 2folders - no captions - 9e-6-step00001500.safetensors...
mode = DARE, alpha = 0.5
Stage #1/6: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:21<00:00, 115.12it/s]
Check uninitialized #2/6: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:00<00:00, 502812.47it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-31 - MJSpooky Artstyle - 21img - 2folders - 9e-6 - base model-step00002000.safetensors...
mode = DARE, alpha = 0.5
Stage #3/6: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:21<00:00, 117.07it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-29 - hlmt artstyle - base model - 2folders - no captions - 9e-6-step00001500.safetensors...
mode = DARE, alpha = 0.5
Stage #4/6: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:21<00:00, 116.20it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-19 - Great Dreamy Results - Merge-6steps-2.5cfg - Topnotch-Dreamyvibes-Lightn-4-Sdthr-DM-CB-Plshdoll-mjcrzs.safetensors...
mode = DARE, alpha = 0.5
Stage #5/6: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:22<00:00, 112.61it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-30 - V2 - mjhelmet - mjspooky - base model - 2folders - no captions - 8e-6-step00006000.safetensors...
mode = DARE, alpha = 0.5
Stage #6/6: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2514/2514 [00:22<00:00, 113.25it/s]
Save unchanged weights #6/6: 0it [00:00, ?it/s]
 - merge processing in 110.1s (prepare: 0.6s, merging: 109.5s).
Creating model from config: D:\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp-no-mem... done.
Model loaded in 1.8s (create model: 0.4s, apply weights to model: 1.1s, load VAE: 0.1s).
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 23/23 [00:03<00:00,  6.39it/s]
Total progress: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 23/23 [00:04<00:00,  4.84it/s]
Total progress: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 23/23 [00:04<00:00,  6.73it/s]

Continuation for #95

#95
As far as I can see, nothing has changed. I'm still getting the same error and the rendered picture doesn't change.

*** Error running before_process: E:\SD\automatic1111\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "E:\SD\automatic1111\modules\scripts.py", line 710, in before_process
        script.before_process(p, *script_args)
      File "E:\SD\automatic1111\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3487, in before_process
        theta_0 = apply_permutation(permutation_spec, first_permutation, theta_0)
      File "E:\SD\automatic1111\extensions\sd-webui-model-mixer\scripts\rebasin\weight_matching.py", line 839, in apply_permutation
        return {k: get_permuted_param(ps, perm, k, params) for k in params.keys() if _valid_key(k)}
      File "E:\SD\automatic1111\extensions\sd-webui-model-mixer\scripts\rebasin\weight_matching.py", line 839, in <dictcomp>
        return {k: get_permuted_param(ps, perm, k, params) for k in params.keys() if _valid_key(k)}
      File "E:\SD\automatic1111\extensions\sd-webui-model-mixer\scripts\rebasin\weight_matching.py", line 773, in get_permuted_param
        for axis, p in enumerate(ps.axes_to_perm[k]):
    KeyError: 'cond_stage_model.logit_scale'

Originally posted by @miasik in #95 (comment)
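The `KeyError` comes from `get_permuted_param` looking up a key the rebasin permutation spec doesn't cover (`cond_stage_model.logit_scale` is a scalar with no axes to permute). A hedged sketch of the guard pattern — the names here are illustrative, not the extension's actual code — is to pass through any key missing from `axes_to_perm` instead of raising:

```python
def apply_permutation_safe(axes_to_perm, permute_fn, params):
    """Permute only the keys the spec knows about; pass the rest through unchanged."""
    out = {}
    for k, v in params.items():
        if k in axes_to_perm:
            out[k] = permute_fn(k, v)   # normal rebasin path
        else:
            out[k] = v                  # e.g. 'cond_stage_model.logit_scale'
    return out
```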

No module named 'hyperactive', Auto Merger doesn't seem to work

Hello wkpark, your extension is fantastic, but I ran into a problem: when I tried testing the "auto merge" feature, it raised an error:
"No module named 'hyperactive'"

Here follows the log:

  • loading sd_modelmixer.hyper...
  • set search lower, upper = -0.2 0.2
    debugs = ['elemental merge']
    use_extra_elements = True
  • mm_max_models = 3
    config hash = 83d0b768e639ffda5bc85c1a9150f07d3264cd3016e4781bdf1de22673da3ed5
  • mm_use [True, False, False]
  • model_a Anything-V3.0-pruned.ckpt [543bcbc212]
  • base_model None
  • max_models 3
  • models ['ayonimix_V6.safetensors']
  • modes ['Sum']
  • calcmodes ['Normal']
  • usembws [['BASE', 'INP*', 'MID', 'OUT*']]
  • weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  • alpha [0.5]
  • adjust
  • use elemental [False]
  • elementals ['']
  • Parse elemental merge...
    model_a = Anything-V3.0-pruned
    Loading from file D:\stable-diffusion-webui\models\Stable-diffusion\Anything-V3.0-pruned.ckpt...
    isxl = False , sd2 = False
    compact_mode = True
    Loading model ayonimix_V6...
    Loading from file D:\stable-diffusion-webui\models\Stable-diffusion\ayonimix_V6.safetensors...
    Calculating sha256 for D:\stable-diffusion-webui\models\Stable-diffusion\ayonimix_V6.safetensors: f3a242fcaaf1d540a1c2d55602e83766cc2efbdf64c2b70e62518cbf516bfcd3
    mode = Sum, mbw mode, alpha = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
    Stage #1/3: 100%|████████████████████████████████████████████████████████████████████| 882/882 [02:35<00:00, 5.66it/s]
    Check uninitialized #2/3: 100%|███████████████████████████████████████████████████| 882/882 [00:00<00:00, 60660.43it/s]
    Save unchanged weights #3/3: 100%|███████████████████████████████████████████████████████████| 937/937 [00:00<?, ?it/s]
    Clip is fine
  • merge processing in 215.4s (prepare: 38.7s, merging: 176.7s).
    WARN: lowvram/medvram load_model() with minor workaround
    Creating model from config: D:\stable-diffusion-webui\configs\v1-inference.yaml
    Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.vae.pt
    Applying attention optimization: xformers... done.
    Model loaded in 10.6s (create model: 1.2s, apply weights to model: 2.2s, apply half(): 0.2s, load VAE: 4.4s, load textual inversion embeddings: 0.8s, calculate empty prompt: 1.5s).
    100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:27<00:00, 1.36s/it]
    Total progress: 20it [00:21, 1.06s/it]
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
        output = await app.get_blocks().process_api(
      File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
        result = await self.call_function(
      File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
        prediction = await anyio.to_thread.run_sync(
      File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
        return await get_asynclib().run_sync_in_worker_thread(
      File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
        return await future
      File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
        result = context.run(func, *args)
      File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
        response = f(*args, **kwargs)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 2737, in hyper_merge
        ret = hyper.hyper_optimizer(**optimizer_args)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\sd_modelmixer\hyper.py", line 351, in hyper_optimizer
        import hyperactive.optimizers
    ModuleNotFoundError: No module named 'hyperactive'
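Auto Merger depends on the optional `hyperactive` optimization package (a real PyPI library), which isn't installed here. A minimal workaround sketch, assuming the webui's venv Python is the running interpreter, to check for it and install it if missing:

```python
import importlib.util
import subprocess
import sys

def ensure_module(name):
    """Return True if `name` is importable, installing it via pip if needed."""
    if importlib.util.find_spec(name) is not None:
        return True
    # Install into the same interpreter (the webui venv) that raised the error.
    subprocess.check_call([sys.executable, "-m", "pip", "install", name])
    return importlib.util.find_spec(name) is not None

# Run once from the webui's Python before using Auto Merger:
# ensure_module("hyperactive")
```

Alternatively, activate the webui venv by hand and run `pip install hyperactive`.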

Can't merge models anymore

I get an error when trying to merge models:

use_extra_elements =  True
 - mm_max_models =  2
*** Error running before_process: L:\WebUI\webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "L:\WebUI\webui\modules\scripts.py", line 710, in before_process
        script.before_process(p, *script_args)
      File "L:\WebUI\webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 2793, in before_process
        if type(alpha) == str: alpha = float(alpha)
    ValueError: could not convert string to float: 'Sum'

This happens with any operator.
Thanks

[ENHANCEMENT] More Image Score Classifiers - A Guide

EDIT June 6th, 2024: I found a new one that I recommend over the others; the install instructions are the same. Make a folder, copy the init file, make a script, and copy the code into it. The code is in the second comment.

If you want more image score classifiers, do the following:

  1. Make 3 new folders under "sd-webui-model-mixer\sd_modelmixer\classifiers" and name them:
    artwork, shadow_v2, & shadow_v2_strict

  2. Copy the "__init__.py" file from "classifiers" into each folder you just made.

  3. Make an empty text file in each folder named similarly to said folder:
    score_artwork.py, score_shadow_v2.py, & score_shadow_v2_strict.py

Here's the code that goes in "score_artwork.py"

# based on https://github.com/WhiteWipe/sd-webui-bayesian-merger/blob/main/sd_webui_bayesian_merger/models/ShadowScore.py
import os
import safetensors.torch  # load_file lives in the torch submodule
import torch

from huggingface_hub import hf_hub_download
from modules import devices
from PIL import Image
from transformers import pipeline, AutoConfig, AutoProcessor, ConvNextV2ForImageClassification


pathname = hf_hub_download(repo_id="Muinez/artwork-scorer", filename="model.safetensors")

statedict = safetensors.torch.load_file(pathname)

config = AutoConfig.from_pretrained(pretrained_model_name_or_path="Muinez/artwork-scorer")
model = ConvNextV2ForImageClassification.from_pretrained(pretrained_model_name_or_path=None, state_dict=statedict, config=config)
processor = AutoProcessor.from_pretrained(pretrained_model_name_or_path="Muinez/artwork-scorer")


def score(image, prompt="", use_cuda=True):
    if use_cuda:
        model.to("cuda")
    else:
        model.float()
        model.to("cpu")

    # Accept either a PIL image or a path to one on disk.
    if isinstance(image, Image.Image):
        pil_image = image
    elif isinstance(image, str) and os.path.isfile(image):
        pil_image = Image.open(image)
    else:
        pil_image = image

    pipe = pipeline("image-classification", model=model, image_processor=processor, device="cpu" if not use_cuda else "cuda:0")

    score = pipe(images=[pil_image])[0]
    score = [p for p in score if p['label'] == 'score'][0]['score']

    if use_cuda:
        model.to("cpu")
    print(" > score =", score)

    devices.torch_gc()

    return score

Here's the code that goes in "score_shadow_v2.py"

# based on https://github.com/WhiteWipe/sd-webui-bayesian-merger/blob/main/sd_webui_bayesian_merger/models/ShadowScore.py
import os
import safetensors.torch  # load_file lives in the torch submodule
import torch

from huggingface_hub import hf_hub_download
from modules import devices
from PIL import Image
from transformers import pipeline, AutoConfig, AutoProcessor, ViTForImageClassification


pathname = hf_hub_download(repo_id="shadowlilac/aesthetic-shadow-v2", filename="model.safetensors")

statedict = safetensors.torch.load_file(pathname)

config = AutoConfig.from_pretrained(pretrained_model_name_or_path="shadowlilac/aesthetic-shadow-v2")
model = ViTForImageClassification.from_pretrained(pretrained_model_name_or_path=None, state_dict=statedict, config=config)
processor = AutoProcessor.from_pretrained(pretrained_model_name_or_path="shadowlilac/aesthetic-shadow-v2")


def score(image, prompt="", use_cuda=True):
    if use_cuda:
        model.to("cuda")
    else:
        model.float()
        model.to("cpu")

    # Accept either a PIL image or a path to one on disk.
    if isinstance(image, Image.Image):
        pil_image = image
    elif isinstance(image, str) and os.path.isfile(image):
        pil_image = Image.open(image)
    else:
        pil_image = image

    pipe = pipeline("image-classification", model=model, image_processor=processor, device="cpu" if not use_cuda else "cuda:0")

    score = pipe(images=[pil_image])[0]
    score = [p for p in score if p['label'] == 'hq'][0]['score']

    if use_cuda:
        model.to("cpu")
    print(" > score =", score)

    devices.torch_gc()

    return score

And here's the code that goes in "score_shadow_v2_strict.py"

# based on https://github.com/WhiteWipe/sd-webui-bayesian-merger/blob/main/sd_webui_bayesian_merger/models/ShadowScore.py
import os
import safetensors.torch  # load_file lives in the torch submodule
import torch

from huggingface_hub import hf_hub_download
from modules import devices
from PIL import Image
from transformers import pipeline, AutoConfig, AutoProcessor, ViTForImageClassification


pathname = hf_hub_download(repo_id="shadowlilac/aesthetic-shadow-v2-strict", filename="model.safetensors")

statedict = safetensors.torch.load_file(pathname)

config = AutoConfig.from_pretrained(pretrained_model_name_or_path="shadowlilac/aesthetic-shadow-v2-strict")
model = ViTForImageClassification.from_pretrained(pretrained_model_name_or_path=None, state_dict=statedict, config=config)
processor = AutoProcessor.from_pretrained(pretrained_model_name_or_path="shadowlilac/aesthetic-shadow-v2-strict")


def score(image, prompt="", use_cuda=True):
    if use_cuda:
        model.to("cuda")
    else:
        model.float()
        model.to("cpu")

    # Accept either a PIL image or a path to one on disk.
    if isinstance(image, Image.Image):
        pil_image = image
    elif isinstance(image, str) and os.path.isfile(image):
        pil_image = Image.open(image)
    else:
        pil_image = image

    pipe = pipeline("image-classification", model=model, image_processor=processor, device="cpu" if not use_cuda else "cuda:0")

    score = pipe(images=[pil_image])[0]
    score = [p for p in score if p['label'] == 'hq'][0]['score']

    if use_cuda:
        model.to("cpu")
    print(" > score =", score)

    devices.torch_gc()

    return score

None of them compares to Image Reward, but if you're having issues installing Image Reward like I was, these might help.

Models saved out of Forge give "'NoneType' object is not iterable" when used to generate in both A1111 and Comfy

Not sure when this started happening, but I noticed an issue with models saved out using Forge giving the following error. I get it when trying to use the saved model in A1111 (both main and Forge), and a similar error about the tokenizer in ComfyUI. I tried saving to CKPT, unpruned, fp16 (and not), etc. I downgraded diffusers and safetensors and rolled the extension back a few commits, but couldn't find a way to get it to work.

After a bunch of troubleshooting, I tried saving in regular A1111 with the current version, and that model works fine in other UIs.

rapped.transformer.text_model.encoder.layers.7.self_attn.out_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.7.self_attn.q_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.7.self_attn.q_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.7.self_attn.v_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.7.self_attn.v_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.layer_norm1.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.layer_norm1.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.layer_norm2.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.layer_norm2.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.mlp.fc1.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.mlp.fc1.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.mlp.fc2.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.mlp.fc2.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.k_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.k_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.out_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.out_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.q_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.q_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.v_proj.bias', 
'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.v_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.layer_norm1.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.layer_norm1.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.layer_norm2.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.layer_norm2.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.mlp.fc1.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.mlp.fc1.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.mlp.fc2.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.mlp.fc2.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.k_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.k_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.out_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.out_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.q_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.q_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.v_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.v_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.final_layer_norm.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.final_layer_norm.weight'])
Loading VAE weights specified in settings: D:\stable-diffusion-webui-forge\models\VAE\sdxl_vae.safetensors
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  22845.8193359375
[Memory Management] Model Memory (MB) =  1903.1046981811523
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  19918.714637756348
Moving model(s) has taken 0.27 seconds
Model loaded in 9.1s (unload existing model: 2.5s, calculate hash: 4.3s, load weights from disk: 0.3s, forge load real models: 1.5s, load VAE: 0.3s, calculate empty prompt: 0.3s).
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  21080.38623046875
[Memory Management] Model Memory (MB) =  4897.086494445801
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  15159.29973602295
Moving model(s) has taken 0.78 seconds
  0%|                                                                                                                                                                                                                                                             | 0/25 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "D:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "D:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "D:\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "D:\stable-diffusion-webui-forge\modules\processing.py", line 921, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\stable-diffusion-webui-forge\modules\processing.py", line 1273, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
    return func()
  File "D:\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "D:\stable-diffusion-webui-forge\modules_forge\forge_sampler.py", line 83, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "D:\stable-diffusion-webui-forge\ldm_patched\modules\samplers.py", line 289, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "D:\stable-diffusion-webui-forge\ldm_patched\modules\samplers.py", line 258, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\stable-diffusion-webui-forge\ldm_patched\modules\model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 849, in forward
    assert (y is not None) == (
AssertionError: must specify y if and only if the model is class-conditional
must specify y if and only if the model is class-conditional
*** Error completing request
*** Arguments: ('task(p4jno6xx41fihsc)', <gradio.routes.Request object at 0x000002003E399ED0>, 'Running ', '', [], 25, 'DPM++ 2M Karras', 1, 1, 6.5, 1000, 1048, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 
'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x000002003E399FF0>, False, 0.6, 0.9, 0.25, 1, True, False, False, 'sd_xl_base_0.9.safetensors', 'None', 5, '', {'save_settings': ['fp16', 'prune', 'safetensors'], 'calc_settings': ['GPU']}, True, False, False, False, False, 'z-mixer-2023-12-08-DaveMatthews 5 model merge - lots of realvision - good samples.fp16.safetensors', 'None', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', False, False, False, False, False, '', '', '', '', '', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', 
model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable
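For context on this traceback: webui's `call_queue.py` wraps every handler's return value in `list(...)`, so any handler that returns `None` (for example after an error was swallowed upstream) produces exactly this `TypeError`. A minimal sketch of the failure mode, with a hypothetical `broken_handler` standing in for the real txt2img callback:

```python
def broken_handler(*args, **kwargs):
    # Hypothetical handler that bails out and returns nothing,
    # standing in for the real generation callback.
    return None

def f(func, *args, **kwargs):
    # Same pattern as modules/call_queue.py line 57:
    #     res = list(func(*args, **kwargs))
    return list(func(*args, **kwargs))

try:
    f(broken_handler)
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable
```

So the traceback above does not point at the root cause; it only says the extension chain returned `None` instead of a result list.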

---

ComfyUI error when trying to use a model saved with Model Mixer, loaded outside of Forge:


Error occurred when executing CLIPTextEncodeSDXL:

'NoneType' object has no attribute 'tokenize'

File "D:\SDXL\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\SDXL\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\SDXL\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\SDXL\ComfyUI\comfy_extras\nodes_clip_sdxl.py", line 42, in encode
    tokens = clip.tokenize(text_g)

If you cannot reproduce this and/or need more information (system settings, etc.), I can provide it later. Heading away from my PC for a bit.
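One diagnostic worth trying: `clip` being `None` in `CLIPTextEncodeSDXL` usually means ComfyUI found no text-encoder weights in the checkpoint, i.e. the saved file may be missing the CLIP tensors. A hedged sketch for checking a state-dict key list — the prefixes below are the usual SDXL ldm-layout names, but exact key names can vary between savers, so treat them as assumptions:

```python
# Usual prefixes for the two SDXL text encoders in the original
# (ldm) checkpoint layout; assumed, not guaranteed for every saver.
CLIP_PREFIXES = (
    "conditioner.embedders.0.",  # CLIP-L
    "conditioner.embedders.1.",  # OpenCLIP-G
)

def has_clip_weights(keys):
    """Return True if any key looks like a text-encoder tensor."""
    return any(k.startswith(p) for k in keys for p in CLIP_PREFIXES)

# To inspect a real file (requires the `safetensors` package):
#   from safetensors import safe_open
#   with safe_open("model.fp16.safetensors", framework="pt") as f:
#       print(has_clip_weights(f.keys()))

# Synthetic examples:
print(has_clip_weights(["model.diffusion_model.input_blocks.0.0.weight"]))  # False
print(has_clip_weights(["conditioner.embedders.0.transformer.text_model.embeddings.token_embedding.weight"]))  # True
```

If the check comes back `False` on the merged file, the bug would be in how Model Mixer saved (or pruned) the checkpoint rather than in ComfyUI.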
