
a1111-sd-webui-locon's Introduction

Here is KohakuBlueLeaf UwU

Kohaku: A cute dragon girl

BlueLeaf: An undergraduate student in Taiwan


  • 🔭 I’m currently working on LyCORIS

  • 🤝 I’m looking for help with HyperKohaku

  • 💬 Ask me about Python, NN, web crawlers

  • 📫 How to reach me: [email protected]

  • ⚡ Fun fact: I never watched Lycoris Recoil


Sponsorship

Buy Me A Coffee

paypal.me: https://www.paypal.com/paypalme/kblueleaf
BTC: 36VHoCKxgp2u3YWQ8gNMDQR3fT49S5sRtf
ETH: 0x8023c8c0a10a4da4e6746cbd238a8bc990fbba60
LTC: MCpMKubB8eeKPZ6LsfW9A7pJP23YLoLT9T

a1111-sd-webui-locon's People

Contributors

bootsoflagrangian, breakcore2, kadah, kohakublueleaf


a1111-sd-webui-locon's Issues

Installed via Automatic1111 "Install from URL" but getting an error on LoCon

Hello, forgive me if this has a simple solution, but I installed via Automatic1111's "Install from URL", and when I try to run I get this error:

LoRA weight_unet: 1.5, weight_tenc: 1.5, model: kristenSchaal_v10(ca85f17f8f89)
dimension: {64, 32}, alpha: {64.0, 32.0}, multiplier_unet: 1.5, multiplier_tenc: 1.5
Error running process_batch: C:\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\additional_networks.py
Traceback (most recent call last):
File "C:\StableDiffusion\stable-diffusion-webui\modules\scripts.py", line 356, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "C:\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\additional_networks.py", line 243, in process_batch
network, info = lora_compvis.create_network_and_apply_compvis(
File "C:\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 149, in create_network_and_apply_compvis
# network_dim = 4
File "C:\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 332, in __init__
break
File "C:\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 293, in convert_diffusers_name_to_compvis
cv_name = (
AssertionError: conversion failed: lora_unet_time_embedding_linear_1. the model may not be trained by sd-scripts.

Can't use LoRA

I tried installing the UI from scratch time and time again, but I keep getting this error at startup:

Error loading script: lora_script.py
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 256, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/scripts/lora_script.py", line 4, in <module>
    import lora
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 238, in <module>
    def lora_apply_weights(self: torch.nn.Conv2d | torch.nn.Linear | torch.nn.MultiheadAttention):
TypeError: unsupported operand type(s) for |: 'type' and 'type'

Can someone please help?
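For context, a likely cause (an inference from the traceback, not from the maintainers): the X | Y union syntax in annotations (PEP 604) is evaluated at function-definition time and requires Python 3.10+, while this Colab runtime is older. A minimal sketch of the usual workarounds, assuming lora.py can be edited:

from __future__ import annotations  # must come before the other imports; defers annotation evaluation

import torch

def lora_apply_weights(self: torch.nn.Conv2d | torch.nn.Linear | torch.nn.MultiheadAttention):
    ...

# Equivalent spelling that also works on Python 3.9 without the future import:
# from typing import Union
# def lora_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.MultiheadAttention]):
#     ...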

Someone please push the FIX to the main branch!!!

In the first report of this issue (#40), people are saying the fix is very simple; changing two lines of code is apparently enough:

line 371: lora.mtime = os.path.getmtime(lora_on_disk)

line 373: sd = sd_models.read_state_dict(lora_on_disk)

Issue #40 (comment)

But some of us are working under certain constraints. Speaking for myself, I can't edit any files on the Colab where I run A1111, so I cannot apply this fix myself. I can either use the extension as is, or not use it at all!

So, any dev, could you please push a fix to the main branch so we can go back to things working?
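For readers who can edit files, here is a hedged sketch of the change in context (based on the two lines quoted from issue #40, not on the repository itself). Newer webui versions pass load_lora a plain filename string rather than an object with a .filename attribute, so a defensive variant can accept both:

# Inside load_lora in scripts/main.py (os and sd_models are already imported there):
filename = lora_on_disk if isinstance(lora_on_disk, str) else lora_on_disk.filename
lora.mtime = os.path.getmtime(filename)   # was: os.path.getmtime(lora_on_disk.filename)
sd = sd_models.read_state_dict(filename)  # was: sd_models.read_state_dict(lora_on_disk.filename)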

'LatentDiffusion' object has no attribute 'lora_layer_mapping' after today's git pull of stable-diffusion-webui

Hello, after a regular git pull I now get this on each generation:

Traceback (most recent call last):
  File "E:\SD\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 218, in load_loras
    lora = load_lora(name, lora_on_disk.filename)
  File "E:\SD\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 370, in load_lora
    is_sd2 = 'model_transformer_resblocks' in shared.sd_model.lora_layer_mapping
  File "E:\SD\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LatentDiffusion' object has no attribute 'lora_layer_mapping' 

Running on Windows 11 with an RTX 3080.
COMMANDLINE_ARGS= --xformers --listen
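Not an official fix, but a defensive sketch of the failing line in scripts/main.py (shared is already imported there via "from modules import shared, devices, sd_models"): lora_layer_mapping is attached to the model by the webui's built-in Lora module only after it hooks the loaded checkpoint, so guarding the attribute access avoids the crash when the hook has not run yet:

# Hedged workaround: fall back to an empty mapping if the attribute is missing.
mapping = getattr(shared.sd_model, 'lora_layer_mapping', {})
is_sd2 = 'model_transformer_resblocks' in mapping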

Error when load Lora

I trained a LoRA with Linaqruf/kohya-trainer (the kohya-dreambooth notebook).
But after that, when I use this LoRA, I get the error:
Failed to match keys when loading Lora.....["cond_stage_model.transformer.text_model.embeddings.position_embedding.weight', 'cond_stage_model.transformer.text_model.embeddings.position_ids', 'cond_stage_model.transformer.text_model.embeddings.token_embedding.weight',..........
Does anyone have a solution? Please help me.

It's impossible to know how it works

I installed Additional Networks and have the tab in A1111.
Do I need to select LyCORIS here as well?

Screenshot 2023-03-25 222943

I have it selectable in Model 1 of txt2img

Screenshot 2023-03-25 223011

But how do I enter it in the prompt?

It does not appear in the usual list of LoRAs.

Thx

Cannot Use LyCORIS (LoHa/LoCon)

I am using TheLastBen's fast-stable-diffusion webui Colab version (https://github.com/TheLastBen/fast-stable-diffusion). I found that when I try to use a LoHa or LoCon, the errors below show up:

Here is a case for a LoHa
Arguments: ('task(uletbrpp1l57wep)', '1girl, intricate details, masterpiece, best quality, original, dynamic posture, dynamic angle, , horse ears, alternate costume, white headwear, short over long sleeves, white shirt, sportswear, baseball uniform, belt, white pants, socks, shoes, black footwear lora:admireVegaUmamusume_v10:1 lora:chevalGrandUmamusume_loha:0.8', '(easynegative:0.8), solo', [], 35, 7, False, False, 5, 1, 9, -1.0, -1.0, 0, 0, 0, False, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, False, True, True, False, 2048, 128, False, '', 0, True, 7, 100, 'Constant', 0, 'Constant', 0, 4, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.external_code.ControlNetUnit object at 0x7f0a381b7af0>, <scripts.external_code.ControlNetUnit object at 0x7f0a381b7790>, <scripts.external_code.ControlNetUnit object at 0x7f0a381b7f10>, False, '', 0.5, True, False, '', 'Lerp', False, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 'black', '20', False, False, False, 3, 0, False, False, 0, False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 486, in process_images
res = process_images_inner(p)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 636, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 852, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 351, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 227, in launch_sampling
return func()
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 351, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 119, in forward
x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
h = module(h, emb, context)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
x = layer(x, context)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/attention.py", line 329, in forward
x = self.proj_in(x)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/../../../extensions-builtin/Lora/lora.py", line 317, in lora_Conv2d_forward
lora_apply_weights(self)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/../../../extensions-builtin/Lora/lora.py", line 273, in lora_apply_weights
self.weight += lora_calc_updown(lora, module, self.weight)
RuntimeError: output with shape [320, 320, 1, 1] doesn't match the broadcast shape [320, 320, 320, 320]

Here is a case for a LoCon:
Error completing request
Arguments: ('task(ciwy1odv90uih94)', '1girl, high quality photography, Canon EOS R3, by Annie Leibovitz, 80mm, hasselblad, best quality, original, dynamic posture, dynamic angle, \n\nlora:symboliRudolf_resized:0.8, symboli rudolf \(umamusume\), horse ears, green jacket, red cape, epaulettes, aiguillette, medal, long sleeves, white gloves, white ascot, buttons, double-breasted, belt, green skirt, frilled skirt, dress, zettai ryouiki, gold trim,', '(bad_prompt:0.8), (((blurry)))', [], 35, 7, True, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, False, True, True, False, 2048, 128, False, '', 0, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.external_code.ControlNetUnit object at 0x7f830238c580>, <scripts.external_code.ControlNetUnit object at 0x7f830238c550>, <scripts.external_code.ControlNetUnit object at 0x7f830238cb80>, False, '', 0.5, True, False, '', 'Lerp', False, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', False, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 'black', '20', False, False, False, 3, 0, False, False, 0, False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 486, in process_images
res = process_images_inner(p)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 625, in process_images_inner
uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 570, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_clip.py", line 229, in forward
z = self.process_tokens(tokens, multipliers)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_clip.py", line 254, in process_tokens
z = self.encode_with_transformers(tokens)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_clip.py", line 302, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
return self.text_model(
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/transformers/models/clip/modeling_clip.py", line 721, in forward
encoder_outputs = self.encoder(
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/transformers/models/clip/modeling_clip.py", line 650, in forward
layer_outputs = encoder_layer(
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/transformers/models/clip/modeling_clip.py", line 379, in forward
hidden_states, attn_weights = self.self_attn(
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/transformers/models/clip/modeling_clip.py", line 268, in forward
query_states = self.q_proj(hidden_states) * self.scale
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/../../../extensions-builtin/Lora/lora.py", line 305, in lora_Linear_forward
lora_apply_weights(self)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/../../../extensions-builtin/Lora/lora.py", line 273, in lora_apply_weights
self.weight += lora_calc_updown(lora, module, self.weight)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/main.py", line 557, in lora_calc_updown
updown = rebuild_weight(module, target)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/main.py", line 550, in rebuild_weight
if len(output_shape) == 4:
UnboundLocalError: local variable 'output_shape' referenced before assignment

On the other hand, everything works fine if I just use a conventional LoRA, so I think this may be an issue specific to this extension. Could you have a look at it?

I also have the Additional Networks extension (https://github.com/kohya-ss/sd-webui-additional-networks) installed, but I did not use it.

Unsure if LoCon models are loading correctly.

When using models with the AddNet extension, the command window shows "create LoRA" and "enable LoRA" rather than the "create LoCon" and "enable LoCon" shown in the example output in the repo README, and the resulting generated images are low quality compared to how the model should look.

LoRA weight_unet: 0.85, weight_tenc: 0.85, model: swordArtOnlineKirigaya_v30(8a69b051f4e3)
dimension: {128}, alpha: {64.0}, multiplier_unet: 0.85, multiplier_tenc: 0.85
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 278 modules.
original forward/weights is backed up.
enable LoRA for text encoder
enable LoRA for U-Net
shapes for 0 weights are converted.
LoRA model swordArtOnlineKirigaya_v30(8a69b051f4e3) loaded: <All keys matched successfully>
setting (or sd model) changed. new networks created.

`LoraUpDownModule` has no attribute `up_model`

While attempting to use the extension with a LoCon model as a LoRA, generating from a prompt produces the following stack trace:

Traceback (most recent call last):
  File "F:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "F:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 642, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 587, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "F:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
    return self.text_model(
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 379, in forward
    hidden_states, attn_weights = self.self_attn(
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 268, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 305, in lora_Linear_forward
    lora_apply_weights(self)
  File "F:\StableDiffusion\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 273, in lora_apply_weights
    self.weight += lora_calc_updown(lora, module, self.weight)
  File "F:\StableDiffusion\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 612, in lora_calc_updown
    updown = rebuild_weight(module, target)
  File "F:\StableDiffusion\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 536, in rebuild_weight
    up = module.up_model.weight.to(orig_weight.device, dtype=orig_weight.dtype)
AttributeError: 'LoraUpDownModule' object has no attribute 'up_model'

Checkpoint: anythingV3_fp16.ckpt [812cd9f9d9]
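One plausible explanation (an assumption, not confirmed by the maintainer): the webui's built-in LoraUpDownModule stores its factors as up/down, while this extension's own modules use up_model/down_model, so rebuild_weight crashes when handed a module of the built-in kind. A hedged sketch of a tolerant read:

# Read whichever attribute pair the module actually carries; this assumes
# every module has one of the two layouts.
up_layer = getattr(module, 'up_model', None) or getattr(module, 'up', None)
down_layer = getattr(module, 'down_model', None) or getattr(module, 'down', None)
up = up_layer.weight.to(orig_weight.device, dtype=orig_weight.dtype)
down = down_layer.weight.to(orig_weight.device, dtype=orig_weight.dtype)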

Issue with incorrect dtypes on Mac M1 MPS

I was using an older commit of the extension and it was working fine. I updated today and it started causing issues (identical to: AUTOMATIC1111/stable-diffusion-webui#7215):

...
  File "/sd-webui-upd/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 319, in forward
    x = self.proj_in(x)
  File "/sd-webui-upd/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/sd-webui-upd/extensions-builtin/Lora/lora.py", line 182, in lora_Conv2d_forward
    return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
  File "/sd-webui-upd/extensions/a1111-sd-webui-locon/scripts/main.py", line 358, in lora_forward
    res = res + module.inference(x) * scale
  File "/sd-webui-upd/extensions/a1111-sd-webui-locon/scripts/main.py", line 169, in inference
    return self.up_model(self.down_model(x))
  File "/sd-webui-upd/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/sd-webui-upd/extensions-builtin/Lora/lora.py", line 182, in lora_Conv2d_forward
    return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
  File "/sd-webui-upd/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/sd-webui-upd/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (MPSHalfType) and weight type (MPSFloatType) should be the same

The issue is these hardcoded float32 dtypes, which force fp32 precision while half precision is used everywhere else:

).to(device=devices.device, dtype=torch.float32)

module.to(device=devices.device, dtype=torch.float32)

weight = weight.to(device=devices.device, dtype=torch.float32)

After replacing all of them with dtype=torch.float16, it worked fine. The code should use the same precision configured in the rest of the application to avoid the issue, or, if float32 is required, cast back to the configured precision afterwards.
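A sketch of that suggestion (devices is already imported in main.py via "from modules import shared, devices, sd_models"): devices.dtype holds the precision the rest of the application is configured for, so using it instead of a hardcoded torch.float32 keeps the LoRA weights consistent with half-precision runs:

from modules import devices

module.to(device=devices.device, dtype=devices.dtype)           # was: dtype=torch.float32
weight = weight.to(device=devices.device, dtype=devices.dtype)  # was: dtype=torch.float32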

Error when using LoCon and LoRA together

Error running process_batch: D:\stable-diffusion-webui\stable-diffusion-webuiLORA\extensions\sd-webui-additional-networks\scripts\additional_networks.py
Traceback (most recent call last):
File "D:\stable-diffusion-webui\stable-diffusion-webuiLORA\modules\scripts.py", line 395, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "D:\stable-diffusion-webui\stable-diffusion-webuiLORA\extensions\sd-webui-additional-networks\scripts\additional_networks.py", line 209, in process_batch
network, info = lora_compvis.create_network_and_apply_compvis(du_state_dict, weight_tenc, weight_unet, text_encoder, unet)
File "D:\stable-diffusion-webui\stable-diffusion-webuiLORA\extensions\a1111-sd-webui-locon\locon_compvis.py", line 62, in create_network_and_apply_compvis
network = LoConNetworkCompvis(
File "D:\stable-diffusion-webui\stable-diffusion-webuiLORA\extensions\a1111-sd-webui-locon\locon_compvis.py", line 289, in __init__
self.text_encoder_loras, te_rep_modules = create_modules(
File "D:\stable-diffusion-webui\stable-diffusion-webuiLORA\extensions\a1111-sd-webui-locon\locon_compvis.py", line 266, in create_modules
alpha = comp_state_dict[f'{lora_name}.alpha'].item()
KeyError: 'lora_te_wrapped_transformer_text_model_encoder_layers_0_self_attn_k_proj.alpha'

It doesn't stop generation, but it completely ignores the other LoRAs.
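A hedged sketch of a defensive lookup around the failing line in locon_compvis.py (an assumption, not a committed fix): a plain LoRA state dict may not carry an .alpha entry for every module name the LoCon code walks, so a missing key could fall back to a default instead of raising:

alpha_key = f'{lora_name}.alpha'
# sd-scripts historically treats a missing alpha as alpha == dim (scale 1.0),
# so None can signal "use the layer dimension" downstream.
alpha = comp_state_dict[alpha_key].item() if alpha_key in comp_state_dict else None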

Fail to Use LyCORIS (LoHa/LoCon)

I am using TheLastBen's fast-stable-diffusion webui Colab version (https://github.com/TheLastBen/fast-stable-diffusion). I found that when I try to use a LoHa or LoCon, the error below shows up:

Here is a case when I'm using the fashion girl LoCon v51:

Error completing request
Arguments: ('task(9th23669lun0w0k)', '(ultra high res, photorealistic ,realistic, best quality, photo-realistic), (8k, raw photo, best quality, masterpiece), (photon mapping, radiosity, physically-based rendering, automatic white balance), (masterpiece:1.3), (realistic:1.2), (intricate details:1.2), (1girl:1.3), cinematic lighting, cinematic bloom, sidelighting, ambient light, accent Lighting, neon lights, soft light, depth of field, HDR, sharp focus, delicate pattern, intricate detail, ((solo focus)), 1girl, fair skin, perfect face, beautiful face, big eyes, puffy eyes, grey eyes, perfect eyes, eyelashes, lips, classic length braid Red hair, lora:fashionGirl_v51:0.3, fashi-g,', '(worst quality, low quality:1.4), (lowres:1.1), (monochrome:1.1), (greyscale), extra fingers, fewer fingers, strange fingers, bad hand, signature, watermark, username, blurry, bad feet, bad leg, duplicate, extra limb, ugly, disgusting, poorly drawn hands, missing limb, floating limbs, disconnected limbs, malformed hands, blurry, mutated hands and fingers, backlight, logo, bad anatomy, fat, (public hair:2), (verybadimagenegative_v1.3:0.9), ', [], 27, 15, False, False, 10, 1, 8, -1.0, -1.0, 0, 0, 0, False, 768, 512, True, 0.3, 2, 'R-ESRGAN 4x+', 15, 0, 0, [], 0, <scripts.external_code.ControlNetUnit object at 0x7fb649cb7a30>, <scripts.external_code.ControlNetUnit object at 0x7fb649d062e0>, <scripts.external_code.ControlNetUnit object at 0x7fb649d06370>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 486, in process_images
res = process_images_inner(p)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 636, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 852, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 351, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 227, in launch_sampling
return func()
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 351, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 138, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": c_crossattn, "c_concat": [image_cond_in[a:b]]})
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
h = module(h, emb, context)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 82, in forward
x = layer(x, emb)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 249, in forward
return checkpoint(
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/util.py", line 129, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 262, in _forward
h = self.in_layers(x)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/container.py", line 204, in forward
input = module(input)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 317, in lora_Conv2d_forward
lora_apply_weights(self)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 273, in lora_apply_weights
self.weight += lora_calc_updown(lora, module, self.weight)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/main.py", line 564, in lora_calc_updown
updown = rebuild_weight(module, target)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/main.py", line 558, in rebuild_weight
updown = updown.reshape(output_shape)
RuntimeError: shape '[16, 320, 3, 3]' is invalid for input of size 921600

LoCon doesn't work...?

Starting today I see this error on every locon I have:

locon load lora method
loading Lora D:\Stable-diffusion\stable-diffusion-portable-main\models\Lora\Porn\Touch&Pose\pantypullPantydropV1.safetensors: AttributeError
Traceback (most recent call last):
File "D:\Stable-diffusion\stable-diffusion-portable-main\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 253, in load_loras
lora = load_lora(name, lora_on_disk)
File "D:\Stable-diffusion\stable-diffusion-portable-main\extensions\a1111-sd-webui-locon\scripts\main.py", line 374, in load_lora
is_sd2 = 'model_transformer_resblocks' in shared.sd_model.lora_layer_mapping
File "D:\Stable-diffusion\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LatentDiffusion' object has no attribute 'lora_layer_mapping'

On the other hand, I'm getting an error for LoRAs as well (no LoCon used here):

locon load lora method
loading Lora D:\Stable-diffusion\stable-diffusion-portable-main\models\Lora\Lib\Celeb\galGadotLora_v10.safetensors: AttributeError
Traceback (most recent call last):
File "D:\Stable-diffusion\stable-diffusion-portable-main\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 253, in load_loras
lora = load_lora(name, lora_on_disk)
File "D:\Stable-diffusion\stable-diffusion-portable-main\extensions\a1111-sd-webui-locon\scripts\main.py", line 374, in load_lora
is_sd2 = 'model_transformer_resblocks' in shared.sd_model.lora_layer_mapping
File "D:\Stable-diffusion\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LatentDiffusion' object has no attribute 'lora_layer_mapping'

So I'm not sure who is responsible for it.

Why did I get this error when using LoCon and Additional Networks?

I got two errors here:

Total progress: 0it [00:00, ?it/s]
LoRA weight_unet: 1, weight_tenc: 1, model: bocchitherock_02(45f1f5900b9e)
dimension: {13, 18, 22, 23, 25, 26, 27, 28, 29, 30, 31, 32, 34, 35, 40, 41, 42, 43, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 60, 61, 62, 63, 65, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 89, 90, 94, 96, 97, 98, 99, 100, 101, 103, 105, 106, 108, 109, 111, 112, 115, 116, 117, 118, 120, 121, 123, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 140, 141, 142, 143, 144, 145, 147, 149, 150, 151, 152, 154, 155, 156, 157, 158, 160, 161, 162, 163, 164, 165, 166, 168, 169, 170, 171, 172, 174, 175, 177, 178, 179, 180, 181, 182, 183, 185, 190, 191, 192, 193, 195, 196, 197, 200, 202, 204, 205, 208, 209, 211, 212, 214, 219, 222, 223, 225, 228, 229, 235, 236, 241, 242, 246, 249, 250, 254, 255, 256, 257, 259, 261, 262, 263, 266, 269, 270, 271, 272, 275, 277, 280, 282, 285, 287, 291, 292, 294, 298, 315, 325, 331, 341, 348, 355, 358, 360, 362, 371, 374, 377, 378, 394, 397, 401, 423}, alpha: {13.0, 18.0, 22.0, 23.0, 25.0, 26.0, 27.0, 28.0, 29.0, 30.0, 31.0, 32.0, 34.0, 35.0, 40.0, 41.0, 42.0, 43.0, 45.0, 46.0, 47.0, 48.0, 49.0, 50.0, 51.0, 52.0, 53.0, 54.0, 55.0, 56.0, 57.0, 58.0, 60.0, 61.0, 62.0, 63.0, 65.0, 69.0, 70.0, 71.0, 72.0, 73.0, 74.0, 75.0, 76.0, 77.0, 78.0, 79.0, 80.0, 81.0, 82.0, 83.0, 84.0, 85.0, 86.0, 87.0, 89.0, 90.0, 94.0, 96.0, 97.0, 98.0, 99.0, 100.0, 101.0, 103.0, 105.0, 106.0, 108.0, 109.0, 111.0, 112.0, 115.0, 116.0, 117.0, 118.0, 120.0, 121.0, 123.0, 125.0, 126.0, 127.0, 128.0, 129.0, 130.0, 131.0, 132.0, 133.0, 134.0, 135.0, 136.0, 137.0, 138.0, 140.0, 141.0, 142.0, 143.0, 144.0, 145.0, 147.0, 149.0, 150.0, 151.0, 152.0, 154.0, 155.0, 156.0, 157.0, 158.0, 160.0, 161.0, 162.0, 163.0, 164.0, 165.0, 166.0, 168.0, 169.0, 170.0, 171.0, 172.0, 174.0, 175.0, 177.0, 178.0, 179.0, 180.0, 181.0, 182.0, 183.0, 185.0, 190.0, 191.0, 192.0, 193.0, 195.0, 196.0, 197.0, 200.0, 202.0, 204.0, 205.0, 208.0, 209.0, 211.0, 212.0, 214.0, 219.0, 222.0, 223.0, 225.0, 228.0, 229.0, 235.0, 236.0, 241.0, 242.0, 246.0, 249.0, 250.0, 254.0, 255.0, 256.0, 257.0, 259.0, 261.0, 262.0, 263.0, 266.0, 269.0, 270.0, 271.0, 272.0, 275.0, 277.0, 280.0, 282.0, 285.0, 287.0, 291.0, 292.0, 294.0, 298.0, 315.0, 325.0, 331.0, 341.0, 348.0, 355.0, 358.0, 360.0, 362.0, 371.0, 374.0, 377.0, 378.0, 394.0, 397.0, 401.0, 423.0}, multiplier_unet: 1, multiplier_tenc: 1
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 278 modules.
original forward/weights is backed up.
enable LoRA for text encoder
enable LoRA for U-Net
Error running process_batch: /content/stable-diffusion-webui/extensions/sd-webui-additional-networks/scripts/additional_networks.py
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/scripts.py", line 395, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "/content/stable-diffusion-webui/extensions/sd-webui-additional-networks/scripts/additional_networks.py", line 220, in process_batch
network, info = lora_compvis.create_network_and_apply_compvis(du_state_dict, weight_tenc, weight_unet, text_encoder, unet)
File "/content/stable-diffusion-webui/extensions/sd-webui-additional-networks/scripts/lora_compvis.py", line 150, in create_network_and_apply_compvis
state_dict = network.apply_lora_modules(du_state_dict) # some weights are applied to text encoder
File "/content/stable-diffusion-webui/extensions/sd-webui-additional-networks/scripts/lora_compvis.py", line 521, in apply_lora_modules
state_dict = self.convert_state_dict_shape_to_compvis(state_dict)
File "/content/stable-diffusion-webui/extensions/sd-webui-additional-networks/scripts/lora_compvis.py", line 543, in convert_state_dict_shape_to_compvis
value = value.unsqueeze(2).unsqueeze(3)
IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 2)

Error completing request
Arguments: ('task(vxylygfgf8plf9a)', 'my prompt + my neg prompt', [], 38, 7, False, False, 8, 1, 7.5, -1.0, -1.0, 0, 0, 0, False, 1200, 760, True, 0.7, 1, 'None', 0, 0, 0, [], 0, True, False, 'LoRA', 'bocchitherock_02(45f1f5900b9e)', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.external_code.ControlNetUnit object at 0x7fe398cccc40>, False, False, 'positive', 'comma', 0, False, False, '', '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, 50) {}
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/content/stable-diffusion-webui/modules/processing.py", line 486, in process_images
res = process_images_inner(p)
File "/content/stable-diffusion-webui/modules/processing.py", line 621, in process_images_inner
uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
File "/content/stable-diffusion-webui/modules/processing.py", line 570, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "/content/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/modules/sd_hijack_clip.py", line 229, in forward
z = self.process_tokens(tokens, multipliers)
File "/content/stable-diffusion-webui/modules/sd_hijack_clip.py", line 254, in process_tokens
z = self.encode_with_transformers(tokens)
File "/content/stable-diffusion-webui/modules/sd_hijack_clip.py", line 302, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
return self.text_model(
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/transformers/models/clip/modeling_clip.py", line 721, in forward
encoder_outputs = self.encoder(
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/transformers/models/clip/modeling_clip.py", line 650, in forward
layer_outputs = encoder_layer(
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/transformers/models/clip/modeling_clip.py", line 379, in forward
hidden_states, attn_weights = self.self_attn(
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/transformers/models/clip/modeling_clip.py", line 268, in forward
query_states = self.q_proj(hidden_states) * self.scale
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/extensions/sd-webui-additional-networks/scripts/lora_compvis.py", line 90, in forward
return self.org_forward(x) + self.lora_up(self.lora_down(x)) * self.multiplier * self.scale
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 178, in lora_Linear_forward
return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_mm)

I hope you can solve it (2 days ago it was still normal and able to run).

Trying to Use LoCon as LoRA - AttributeError on 'str' object

I'm getting this error every time I use a LoCon placed in the lora folder. I could be doing something wrong, but the README doesn't really say much about how to use the extension, or anything about the installation, just that you can use a LoCon as a LoRA after installing from the URL. I used your LyCORIS extension instead, and I'm pretty sure the LoCon was applied then; I didn't get an error, just output similar to what AddNet displays when you use LoRAs through it. Thanks for any response that may help me understand more. I've taught myself everything up till now but just couldn't figure this error out.

locon load lora method
activating extra network lora with arguments [<modules.extra_networks.ExtraNetworkParams object at 0x00000253FF23E560>]: AttributeError
Traceback (most recent call last):
File "C:\Users\Joe\stable-diffusion-webui\modules\extra_networks.py", line 75, in activate
extra_network.activate(p, extra_network_args)
File "C:\Users\Joe\stable-diffusion-webui\extensions-builtin\Lora\extra_networks_lora.py", line 23, in activate
lora.load_loras(names, multipliers)
File "C:\Users\Joe\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 170, in load_loras
lora = load_lora(name, lora_on_disk.filename)
File "C:\Users\Joe\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 371, in load_lora
lora.mtime = os.path.getmtime(lora_on_disk.filename)
AttributeError: 'str' object has no attribute 'filename'

Extension doesn't load if data directory changed

lora_path = os.path.join(now_dir, '..', '..', '..', 'extensions-builtin/Lora')

I'm not sure of the solution, but I just hard-coded this to the directory I was using. I'm sure there's a better way to do it, though.
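A sketch of a more robust computation (an assumption, not the committed fix; modules.paths.script_path is the webui install root): anchor the built-in Lora directory to the install root instead of walking up from this extension's own location:

import os
import sys
from modules import paths

# Resolve extensions-builtin/Lora from the webui root, so a relocated
# extensions directory no longer breaks the relative walk.
lora_path = os.path.join(paths.script_path, 'extensions-builtin', 'Lora')
sys.path.insert(0, os.path.abspath(lora_path))
import lora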

[CRITICAL] All LoRAs not working anymore

locon load lora method
loading Lora I:\GitHub\stable-diffusion-webui\models\Lora\Vehicle.safetensors: AttributeError
Traceback (most recent call last):
File "I:\GitHub\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 222, in load_loras
lora = load_lora(name, lora_on_disk.filename)
File "I:\GitHub\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 371, in load_lora
lora.mtime = os.path.getmtime(lora_on_disk.filename)
AttributeError: 'str' object has no attribute 'filename'

All LoRAs have stopped working.
This may have been caused by the latest commit.

[Bug] UnboundLocalError: local variable 'output_shape' referenced before assignment

I sometimes get this error when trying to generate an image (even with no LyCORIS LoRAs).

Commit: 8e0ebd7

Traceback (most recent call last):
File "D:\Other Applications\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "D:\Other Applications\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\Other Applications\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
processed = process_images(p)
File "D:\Other Applications\stable-diffusion-webui\modules\processing.py", line 503, in process_images
res = process_images_inner(p)
File "D:\Other Applications\stable-diffusion-webui\modules\processing.py", line 642, in process_images_inner
uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
File "D:\Other Applications\stable-diffusion-webui\modules\processing.py", line 587, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "D:\Other Applications\stable-diffusion-webui\modules\prompt_parser.py", line 140, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "D:\Other Applications\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "D:\Other Applications\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Other Applications\stable-diffusion-webui\modules\sd_hijack_clip.py", line 229, in forward
z = self.process_tokens(tokens, multipliers)
File "D:\Other Applications\stable-diffusion-webui\modules\sd_hijack_clip.py", line 254, in process_tokens
z = self.encode_with_transformers(tokens)
File "D:\Other Applications\stable-diffusion-webui\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "D:\Other Applications\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Other Applications\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
return self.text_model(
File "D:\Other Applications\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Other Applications\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
encoder_outputs = self.encoder(
File "D:\Other Applications\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Other Applications\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
layer_outputs = encoder_layer(
File "D:\Other Applications\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Other Applications\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 379, in forward
hidden_states, attn_weights = self.self_attn(
File "D:\Other Applications\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Other Applications\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 268, in forward
query_states = self.q_proj(hidden_states) * self.scale
File "D:\Other Applications\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Other Applications\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 305, in lora_Linear_forward
lora_apply_weights(self)
File "D:\Other Applications\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 273, in lora_apply_weights
self.weight += lora_calc_updown(lora, module, self.weight)
File "D:\Other Applications\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 576, in lora_calc_updown
updown = rebuild_weight(module, target)
File "D:\Other Applications\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 565, in rebuild_weight
if len(output_shape) == 4:
UnboundLocalError: local variable 'output_shape' referenced before assignment
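
For context, this error means rebuild_weight assigns output_shape only on the branches it recognizes and then reads it unconditionally. A minimal sketch of the pattern and a defensive guard, with illustrative names (not the extension's actual code):

    def rebuild_weight(module, target):
        output_shape = None  # initialize so the fall-through check below is safe
        if hasattr(module, "up_model"):  # a recognized branch sets the shape
            output_shape = module.up_model.weight.shape
        # ...other branches may fall through without assigning output_shape...
        if output_shape is None:
            raise ValueError(f"unsupported module type: {type(module).__name__}")
        if len(output_shape) == 4:  # conv weight: (out_ch, in_ch, kh, kw)
            pass  # reshape the rebuilt delta into 4D here
        return target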

Request to fix the import path for built-in lora

Here is a sample patch; using os.path.abspath when loading the built-in lora module always works for me.

--- extensions/a1111-sd-webui-locon/scripts/main.py     2023-05-03 04:48:01.944314067 +0000
+++ extensions/a1111-sd-webui-locon/scripts/main.path.py        2023-05-03 04:48:34.880357753 +0000
@@ -9,7 +9,7 @@
 from modules import shared, devices, sd_models
 now_dir = os.path.dirname(os.path.abspath(__file__))
 lora_path = os.path.join(now_dir, '..', '..', '..', 'extensions-builtin/Lora')
-sys.path.insert(0, lora_path)
+sys.path.insert(0, os.path.abspath(lora_path))
 import lora
 new_lora = 'lora_calc_updown' in dir(lora)

getting RuntimeError: shape '[1280]' is invalid for input of size 0 while applying locon model

To create a public link, set share=True in launch().
Startup time: 121.7s (import torch: 21.8s, import gradio: 3.5s, import ldm: 1.6s, other imports: 6.8s, setup codeformer: 0.6s, load scripts: 5.4s, load SD checkpoint: 77.1s, create ui: 4.1s, gradio launch: 0.7s).
locon load lora method
activating extra network lora with arguments [<modules.extra_networks.ExtraNetworkParams object at 0x000001BA616390F0>]: RuntimeError
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\extra_networks.py", line 75, in activate
extra_network.activate(p, extra_network_args)
File "D:\stable-diffusion-webui\extensions-builtin\Lora\extra_networks_lora.py", line 23, in activate
lora.load_loras(names, multipliers)
File "D:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts......\extensions-builtin/Lora\lora.py", line 214, in load_loras
lora = load_lora(name, lora_on_disk.filename)
File "D:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 369, in load_lora
sd = sd_models.read_state_dict(filename)
File "D:\stable-diffusion-webui\modules\sd_models.py", line 241, in read_state_dict
pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
File "D:\stable-diffusion-webui\venv\lib\site-packages\safetensors\torch.py", line 101, in load_file
result[k] = f.get_tensor(k)
RuntimeError: shape '[1280]' is invalid for input of size 0
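
A shape/size error like this from safetensors usually points to a truncated or corrupted download (a tensor declared in the header has no bytes behind it), not to the extension itself; re-downloading the file is the usual fix. A quick integrity check, with a hypothetical path:

    import safetensors.torch

    def check_safetensors(path: str) -> bool:
        try:
            safetensors.torch.load_file(path, device="cpu")
            return True
        except Exception as e:  # shape errors, HeaderTooLarge, etc.
            print(f"{path} failed to load: {e}")
            return False

    check_safetensors(r"D:\stable-diffusion-webui\models\Lora\suspect.safetensors")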

Additional Network extension not installed, Only hijack built-in lora / LoCon Extension hijack built-in lora successfully

Hello, I get this message when I run the Stable Diffusion webui and don't know what it means... Is it bad? Did something break?
All my LoRAs work now... well, at first all of them gave errors:
"Error running process_batch: C:\Users\me\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\additional_networks.py
Traceback (most recent call last):
File "C:\Users\me\stable-diffusion-webui\modules\scripts.py", line 427, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "C:\Users\me\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\additional_networks.py", line 209, in process_batch
network, info = lora_compvis.create_network_and_apply_compvis(du_state_dict, weight_tenc, weight_unet, text_encoder, unet)
File "C:\Users\me\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 102, in create_network_and_apply_compvis
state_dict = network.apply_lora_modules(du_state_dict) # some weights are applied to text encoder
File "C:\Users\me\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 289, in apply_lora_modules
state_dict = LoRANetworkCompvis.convert_state_dict_name_to_compvis(self.v2, du_state_dict)
File "C:\Users\me\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 193, in convert_state_dict_name_to_compvis
compvis_name = LoRANetworkCompvis.convert_diffusers_name_to_compvis(v2, tokens[0])
File "C:\Users\me\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 182, in convert_diffusers_name_to_compvis
assert cv_name is not None, f"conversion failed: {du_name}. the model may not be trained by sd-scripts."
AssertionError: conversion failed: lora_unetdown"
That was the error I was getting when trying to use LoRAs.

Afterwards I ran git pull in the extension folder, but my LoRAs still did not work.
Then I ran pip install -r requirements.txt in the SD folder.
After that install I managed to make my LoRAs work, BUT I still get that message in my command prompt when I run SD.
The LoRAs work when I enable them in the Additional Networks drop-down menu.

This happened when I updated to torch 2 and changed some requirement versions like accelerate and transformers.
Now I have xformers==0.0.17.dev476, torch: 1.13.1+cu117, transformers==4.25.1 and accelerate==0.12.0.

The LoRAs work now, but when I open the SD webui I get the message that is in the title.
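
For anyone hitting the same "conversion failed ... may not be trained by sd-scripts" assert: the converter in sd-webui-additional-networks asserts when it meets key names it does not recognize, which is exactly what happens with LoCon conv keys or LoHa keys. Before blaming a broken file, a small diagnostic like this (a heuristic sketch, assuming a .safetensors file) can tell which variant a model actually is:

    from safetensors import safe_open

    def classify_lora(path: str) -> str:
        with safe_open(path, framework="pt", device="cpu") as f:
            keys = list(f.keys())
        if any(".hada_" in k for k in keys):
            return "LoHa (needs a LyCORIS-aware loader)"
        if any("_conv." in k or ".lora_mid." in k for k in keys):
            return "LoCon (has conv-layer weights)"
        return "plain LoRA"

    print(classify_lora("my_model.safetensors"))  # hypothetical file name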

It wasn't clear

Is this extension for Additional Networks?

I looked at the readme and was left wondering.

Do I need this?

I'm a bit confused after reading the README... If my WebUI already has the built-in LoRA extension, do I still need this extension to take full advantage of LoCon models? (As I understand it, the "Additional Networks" extension was created back when the WebUI couldn't natively use Hypernetworks and LoRAs, but now it's somewhat obsolete...)

Every Lora failed with most recent version of stable-diffusion-webui

Is there an existing issue for this?

It happened with the most recent version of stable-diffusion-webui, released just about 2 hours ago, so I guess there is no existing issue.

What happened?

When I tried to use a LoRA in txt2img, the image generation pipeline failed and dumped some Python stack traces.

Steps to reproduce the problem

I was able to reproduce it with a clean install, so I write down here what I did for testing.

  1. Git clone the most recent version of stable-diffusion-webui; commit hash 4c1ad743e3baf1246db0711aa0107debf036a12b.
  2. Run the webui.sh script file.
  3. Put any checkpoint and LoRA in the corresponding folders under the models folder. Refresh checkpoints and LoRAs.
  4. Install the a1111-sd-webui-locon extension.
  5. Put any prompt words in txt2img with a LoRA and click the Generate button.

What should have happened?

A generated image using the LoRA should have been produced.

Commit where the problem happens

https://github.com/KohakuBlueleaf/a1111-sd-webui-locon/commit/52c730a0d890755947344f6a4064837afe772946
https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/4c1ad743e3baf1246db0711aa0107debf036a12b

What platforms do you use to access the UI ?

Apple Silicon M1 macos 22.3.0

What browsers do you use to access the UI ?

Firefox

Command Line Arguments

Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate

List of extensions

a1111-sd-webui-locon (enabled)

Console logs

knut@Knuts-Mac-Studio sdwu-test % ./webui.sh           

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on knut user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.10 (main, Mar  1 2023, 22:32:25) [Clang 14.0.0 (clang-1400.0.29.202)]
Commit hash: 4c1ad743e3baf1246db0711aa0107debf036a12b
Installing requirements for Web UI
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
No module 'xformers'. Proceeding without it.
==============================================================================
You are running torch 1.12.1.
The program is tested to work with torch 1.13.1.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
Additional Network extension not installed, Only hijack built-in lora
LoCon Extension hijack built-in lora successfully
Loading weights [6ce0161689] from /Users/knut/test/sdwu-test/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /Users/knut/test/sdwu-test/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (InvokeAI).
Textual inversion embeddings loaded(0): 
Model loaded in 2.1s (load weights from disk: 0.2s, create model: 0.5s, apply weights to model: 0.5s, apply half(): 0.4s, move model to device: 0.5s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 4.5s (import torch: 0.7s, import gradio: 0.5s, import ldm: 0.2s, other imports: 0.4s, load scripts: 0.3s, load SD checkpoint: 2.1s, create ui: 0.2s).
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 254, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/fastapi/applications.py", line 273, in __call__
    await super().__call__(scope, receive, send)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 149, in __call__
    await self.app(scope, receive, send)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/starlette/middleware/gzip.py", line 26, in __call__
    await self.app(scope, receive, send)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/starlette/routing.py", line 341, in handle
    await self.app(scope, receive, send)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/starlette/routing.py", line 82, in app
    await func(session)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/fastapi/routing.py", line 289, in app
    await dependant.call(**values)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/gradio/routes.py", line 517, in join_queue
    if blocks.dependencies[event.fn_index].get("every", 0):
IndexError: list index out of range
Error completing request
Arguments: ('task(95lawgd0zft3kcw)', 'vladilena milize, lena,masterpiece, absurdres, highres, cinematic, 8k, intricate detail, ultra detailed, cinematic lighting, 1girl, pov, silver hair, (((light silver eyes))), white eyebrow, white eyelashes, looking at viewer, narrow waist, (large breast), (white thighhighs),  military uniform    <lora:90sAnimeAesthetic_v10:1>', 'EasyNegative  (((tattoo))), (((animal ears))), (((odd eyes))), (((nsfw))), bare chest', [], 20, 0, False, False, 1, 1, 7, 1816663567.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/Users/knut/test/sdwu-test/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/Users/knut/test/sdwu-test/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/Users/knut/test/sdwu-test/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/Users/knut/test/sdwu-test/modules/processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "/Users/knut/test/sdwu-test/modules/processing.py", line 625, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "/Users/knut/test/sdwu-test/modules/processing.py", line 570, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "/Users/knut/test/sdwu-test/modules/prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/Users/knut/test/sdwu-test/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/knut/test/sdwu-test/modules/sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "/Users/knut/test/sdwu-test/modules/sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "/Users/knut/test/sdwu-test/modules/sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
    return self.text_model(
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 379, in forward
    hidden_states, attn_weights = self.self_attn(
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 268, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "/Users/knut/test/sdwu-test/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/knut/test/sdwu-test/extensions/a1111-sd-webui-locon/scripts/../../../extensions-builtin/Lora/lora.py", line 304, in lora_Linear_forward
    lora_apply_weights(self)
  File "/Users/knut/test/sdwu-test/extensions/a1111-sd-webui-locon/scripts/../../../extensions-builtin/Lora/lora.py", line 272, in lora_apply_weights
    self.weight += lora_calc_updown(lora, module, self.weight)
  File "/Users/knut/test/sdwu-test/extensions/a1111-sd-webui-locon/scripts/../../../extensions-builtin/Lora/lora.py", line 226, in lora_calc_updown
    down = module.down.weight.to(target.device, dtype=target.dtype)
AttributeError: 'function' object has no attribute 'weight'

Additional information

When I deleted a1111-sd-webui-locon, LoRA worked again, but then I cannot use LoCon at all.
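
The traceback shows module.down holding a plain function rather than an nn.Module, which typically means the webui's built-in lora.py and the extension's hijacked copy disagree about the module format (a version mismatch). A defensive check, as a sketch:

    import torch.nn as nn

    def safe_down_weight(module):
        down = getattr(module, "down", None)
        if isinstance(down, nn.Module) and hasattr(down, "weight"):
            return down.weight
        raise TypeError(
            f"module.down is {type(down).__name__}, not an nn.Module; "
            "update the extension and webui so both agree on the lora format"
        )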

The model might not be trained by sd-scripts error

Error running process: G:\AI\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\additional_networks.py
Traceback (most recent call last):
  File "G:\AI\stable-diffusion-webui\modules\scripts.py", line 386, in process
    script.process(p, *script_args)
  File "G:\AI\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\additional_networks.py", line 206, in process
    network, info = lora_compvis.create_network_and_apply_compvis(du_state_dict, weight_tenc, weight_unet, text_encoder, unet)
  File "G:\AI\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 102, in create_network_and_apply_compvis
    state_dict = network.apply_lora_modules(du_state_dict)  # some weights are applied to text encoder
  File "G:\AI\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 289, in apply_lora_modules
    state_dict = LoRANetworkCompvis.convert_state_dict_name_to_compvis(self.v2, du_state_dict)
  File "G:\AI\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 193, in convert_state_dict_name_to_compvis
    compvis_name = LoRANetworkCompvis.convert_diffusers_name_to_compvis(v2, tokens[0])
  File "G:\AI\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 182, in convert_diffusers_name_to_compvis
    assert cv_name is not None, f"conversion failed: {du_name}. the model may not be trained by sd-scripts."
AssertionError: conversion failed: lora_unet_down_blocks_0_downsamplers_0_conv. the model may not be trained by sd-scripts.

AttributeError: 'function' object has no attribute 'weight'

It seems to happen with all non-LoCon LoRAs; the LoCon works fine, and regular LoRAs work fine when used through the Additional Networks extension. This only happens when using them in the prompt like <lora:name:1>.

Traceback (most recent call last):
  File "F:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "F:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "F:\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 621, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "F:\stable-diffusion-webui\modules\processing.py", line 570, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "F:\stable-diffusion-webui\modules\prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
    return self.text_model(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 379, in forward
    hidden_states, attn_weights = self.self_attn(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 268, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 178, in lora_Linear_forward
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
  File "F:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 348, in lora_forward
    scale = lora_m.multiplier * (module.alpha / module.down.weight.size(0) if module.alpha else 1.0)
AttributeError: 'function' object has no attribute 'weight'

With the lora block weight extension it still outputs
File "F:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 350, in lora_forward scale = lora_m.multiplier * (module.alpha / module.dim if module.alpha else 1.0) AttributeError: 'LoraUpDownModule' object has no attribute 'dim'

Error

My locon is not applied and I get the following error:

loading Lora S:\test\stable-diffusion-webui\models\Lora\Style\huachong.ckpt: AttributeError
Traceback (most recent call last):
  File "S:\test\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 215, in load_loras
    lora = load_lora(name, lora_on_disk.filename)
  File "S:\test\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 289, in load_lora
    sd = sd_models.read_state_dict(filename)
  File "S:\test\stable-diffusion-webui\modules\sd_models.py", line 257, in read_state_dict
    sd = get_state_dict_from_checkpoint(pl_sd)
  File "S:\test\stable-diffusion-webui\modules\sd_models.py", line 206, in get_state_dict_from_checkpoint
    pl_sd = pl_sd.pop("state_dict", pl_sd)
AttributeError: 'NoneType' object has no attribute 'pop'

python: 3.10.6  •  torch: 2.0.0+cu118  •  xformers: 0.0.17  •  gradio: 3.28.1  •  commit: 5ab7f213  •  checkpoint: 2c3bbd47cb
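
Here torch.load() returned None instead of a state dict (so the pop call fails), which suggests the .ckpt file does not actually contain a checkpoint (a corrupt save or bad download); re-downloading it is the usual fix. A pre-flight check, as a sketch:

    import torch

    def load_ckpt_state_dict(path: str):
        pl_sd = torch.load(path, map_location="cpu")
        if pl_sd is None:
            raise ValueError(f"{path} loaded as None; the checkpoint is corrupt or empty")
        return pl_sd.get("state_dict", pl_sd)  # mirrors webui's pop("state_dict", ...)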

Lycoris not supported for inpainting and outpainting

  File "C:\Users\user\Documents\TestSD\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 185, in inference
    return self.op(x, self.weight, **self.extra_args)
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 9, 64, 64] to have 4 channels, but got 9 channels instead

Those are the last lines of the error message. Regular LoRAs are still supported for inpainting and outpainting.
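
The mismatch is structural: inpainting checkpoints widen the first UNet conv from 4 to 9 input channels (4 latent + 4 masked-image latent + 1 mask), so a conv delta trained on a standard model cannot be applied to that layer. A loader could detect and skip such deltas, sketched here with the shapes from the error:

    import torch

    delta = torch.zeros(320, 4, 3, 3)   # LoCon conv delta from a standard model
    target = torch.zeros(320, 9, 3, 3)  # inpainting model's first conv weight
    if delta.shape[1] != target.shape[1]:
        print("skipping incompatible conv delta:", delta.shape, "vs", target.shape)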

Error when using in colab

activating extra network lora with arguments [<modules.extra_networks.ExtraNetworkParams object at 0x7f723c0e7220>]: AssertionError
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/extra_networks.py", line 75, in activate
extra_network.activate(p, extra_network_args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/extra_networks_lora.py", line 23, in activate
lora.load_loras(names, multipliers)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 170, in load_loras
lora = load_lora(name, lora_on_disk.filename)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 141, in load_lora
assert False, f'Bad Lora layer name: {key_diffusers} - must end in lora_up.weight, lora_down.weight or alpha'
AssertionError: Bad Lora layer name: lora_te_text_model_encoder_layers_0_mlp_fc1.hada_w1_a - must end in lora_up.weight, lora_down.weight or alpha
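
This assert comes from the stock built-in loader, which only accepts the three classic key suffixes; LoHa files carry hada_* parts instead, so when the LoCon extension's hijack is not active (as in this colab), every LoHa key trips the assert. Roughly, a LyCORIS-aware loader widens the accepted set like this (a sketch; the real LyCORIS format has a few more part names):

    CLASSIC_SUFFIXES = ("lora_up.weight", "lora_down.weight", "alpha")
    LOHA_SUFFIXES = ("hada_w1_a", "hada_w1_b", "hada_w2_a", "hada_w2_b")

    def is_known_key(key: str) -> bool:
        return key.endswith(CLASSIC_SUFFIXES) or key.endswith(LOHA_SUFFIXES)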

Fails to work with A1111 v1.4.0

My repo is at b691135. It is quite annoying! After updating to stable-diffusion-webui v1.4.0 (394ffa7), the extension cannot run LyCORIS models anymore. Here are the logs:

reading lora D:\AI6\stable-diffusion-webui\models\Lora\Cat Ear Girl.safetensors: AssertionError
Traceback (most recent call last):
  File "D:\AI6\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 83, in __init__
    self.metadata = sd_models.read_metadata_from_safetensors(filename)
  File "D:\AI6\stable-diffusion-webui\modules\sd_models.py", line 230, in read_metadata_from_safetensors
    assert metadata_len > 2 and json_start in (b'{"', b"{'"), f"{filename} is not a safetensors file"
AssertionError: D:\AI6\stable-diffusion-webui\models\Lora\Cat Ear Girl.safetensors is not a safetensors file

locon load lora method
loading Lora D:\AI6\stable-diffusion-webui\models\Lora\Cat Ear Girl.safetensors: Exception
Traceback (most recent call last):
  File "D:\AI6\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 253, in load_loras
    lora = load_lora(name, lora_on_disk)
  File "D:\AI6\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 373, in load_lora
    sd = sd_models.read_state_dict(lora_on_disk.filename)
  File "D:\AI6\stable-diffusion-webui\modules\sd_models.py", line 250, in read_state_dict
    pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
  File "D:\AI6\stable-diffusion-webui\venv\lib\site-packages\safetensors\torch.py", line 98, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
Exception: Error while deserializing header: HeaderTooLarge
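
HeaderTooLarge means the first bytes of the file do not look like a safetensors header, i.e. the file is not really safetensors (a renamed pickle .ckpt, an HTML error page, or a broken download). A small format sniffer, assuming the documented layout of 8 little-endian length bytes followed by a JSON header:

    import struct

    def sniff_safetensors(path: str) -> bool:
        with open(path, "rb") as f:
            head = f.read(8)
            if len(head) < 8:
                return False
            (header_len,) = struct.unpack("<Q", head)
            if header_len > 100 * 1024 * 1024:  # absurd header size -> wrong format
                return False
            return f.read(1) == b"{"  # a JSON header must follow

    print(sniff_safetensors(r"D:\AI6\stable-diffusion-webui\models\Lora\Cat Ear Girl.safetensors"))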

VRAM leak when extension is installed

When this extension is installed, after generating images a few times with either a LoRA or a LoCon, the VRAM fills up, and there's no way to fix it without restarting. Uninstalling the extension fixes the problem.

COMMANDLINE_ARGS= --listen --medvram --xformers --api
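
To confirm whether references are really being leaked (rather than just cached), a rough check between generations on a CUDA build is to collect, flush the allocator cache, and watch whether allocated memory still climbs:

    import gc
    import torch

    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.1f} MiB, "
              f"reserved: {torch.cuda.memory_reserved() / 2**20:.1f} MiB")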

support for apple silicon

  File "/Users/koujiaxin/workspace/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/../../../extensions-builtin/Lora/lora.py", line 198, in lora_Linear_forward
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
  File "/Users/koujiaxin/workspace/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/main.py", line 487, in lora_forward
    res = res + module.inference(x) * scale
  File "/Users/koujiaxin/workspace/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/main.py", line 252, in inference
    return self.op(
  File "/Users/koujiaxin/workspace/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/Users/koujiaxin/workspace/stable-diffusion-webui/modules/sd_hijack_utils.py", line 25, in __call__
    if not self.__cond_func or self.__cond_func(self.__orig_func, *args, **kwargs):
TypeError: <lambda>() missing 1 required positional argument: 'bias'
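
On macOS the webui hijacks some torch ops with wrappers whose condition function expects an explicit bias argument, while the extension calls self.op(x, self.weight, **self.extra_args) without one. A possible patch sketch for the extension's inference method (assumed from the traceback, not the shipped fix) is to pass bias positionally, using None when the layer has none:

    def inference(self, x):
        bias = getattr(self, "bias", None)  # None when the layer has no bias term
        return self.op(x, self.weight, bias, **self.extra_args)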

Error when using LOHA with colab

The error code that shows up is:

Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/extra_networks.py", line 75, in activate
extra_network.activate(p, extra_network_args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/extra_networks_lora.py", line 23, in activate
lora.load_loras(names, multipliers)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 151, in load_loras
lora = load_lora(name, lora_on_disk.filename)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/main.py", line 161, in load_lora
assert False, f'Bad Lora layer name: {key_diffusers} - must end in lora_up.weight, lora_down.weight or alpha'
AssertionError: Bad Lora layer name: lora_te_text_model_encoder_layers_0_mlp_fc1.hada_w1_a - must end in lora_up.weight, lora_down.weight or alpha

The Colab being used for reference is this one:
https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb

LoCons work fine, but LoHa always gives this error. Any idea how to fix it?
Thank you for your time.

error: weight's shape is different, _IncompatibleKeys

I'm getting these errors with LoCons trained at 9abae9e.
Did I do something wrong?

weight's shape is different: lora_unet_input_blocks_3_0_op.lora_down.weight expected torch.Size([64, 320, 3, 3]) found torch.Size([64, 320]). SD version may be different
weight's shape is different: lora_unet_input_blocks_1_0_in_layers_2.lora_down.weight expected torch.Size([64, 320, 3, 3]) found torch.Size([64, 320]). SD version may be different
weight's shape is different: lora_unet_input_blocks_1_0_out_layers_3.lora_down.weight expected torch.Size([64, 320, 3, 3]) found torch.Size([64, 320]). SD version may be different
weight's shape is different: lora_unet_input_blocks_2_0_in_layers_2.lora_down.weight expected torch.Size([64, 320, 3, 3]) found torch.Size([64, 320]). SD version may be different
weight's shape is different: lora_unet_input_blocks_2_0_out_layers_3.lora_down.weight expected torch.Size([64, 320, 3, 3]) found torch.Size([64, 320]). SD version may be different
weight's shape is different: lora_unet_input_blocks_6_0_op.lora_down.weight expected torch.Size([64, 640, 3, 3]) found torch.Size([64, 640]). SD version may be different
weight's shape is different: lora_unet_input_blocks_4_0_in_layers_2.lora_down.weight expected torch.Size([64, 320, 3, 3]) found torch.Size([64, 320]). SD version may be different
weight's shape is different: lora_unet_input_blocks_4_0_out_layers_3.lora_down.weight expected torch.Size([64, 640, 3, 3]) found torch.Size([64, 640]). SD version may be different
weight's shape is different: lora_unet_input_blocks_5_0_in_layers_2.lora_down.weight expected torch.Size([64, 640, 3, 3]) found torch.Size([64, 640]). SD version may be different
weight's shape is different: lora_unet_input_blocks_5_0_out_layers_3.lora_down.weight expected torch.Size([64, 640, 3, 3]) found torch.Size([64, 640]). SD version may be different
weight's shape is different: lora_unet_input_blocks_9_0_op.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_input_blocks_7_0_in_layers_2.lora_down.weight expected torch.Size([64, 640, 3, 3]) found torch.Size([64, 640]). SD version may be different
weight's shape is different: lora_unet_input_blocks_7_0_out_layers_3.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_input_blocks_8_0_in_layers_2.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_input_blocks_8_0_out_layers_3.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_input_blocks_10_0_in_layers_2.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_input_blocks_10_0_out_layers_3.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_input_blocks_11_0_in_layers_2.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_input_blocks_11_0_out_layers_3.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_middle_block_0_in_layers_2.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_middle_block_0_out_layers_3.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_middle_block_2_in_layers_2.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_middle_block_2_out_layers_3.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_output_blocks_0_0_in_layers_2.lora_down.weight expected torch.Size([64, 2560, 3, 3]) found torch.Size([64, 2560]). SD version may be different
weight's shape is different: lora_unet_output_blocks_0_0_out_layers_3.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_output_blocks_1_0_in_layers_2.lora_down.weight expected torch.Size([64, 2560, 3, 3]) found torch.Size([64, 2560]). SD version may be different
weight's shape is different: lora_unet_output_blocks_1_0_out_layers_3.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_output_blocks_2_0_in_layers_2.lora_down.weight expected torch.Size([64, 2560, 3, 3]) found torch.Size([64, 2560]). SD version may be different
weight's shape is different: lora_unet_output_blocks_2_0_out_layers_3.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_output_blocks_2_1_conv.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_output_blocks_3_0_in_layers_2.lora_down.weight expected torch.Size([64, 2560, 3, 3]) found torch.Size([64, 2560]). SD version may be different
weight's shape is different: lora_unet_output_blocks_3_0_out_layers_3.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_output_blocks_4_0_in_layers_2.lora_down.weight expected torch.Size([64, 2560, 3, 3]) found torch.Size([64, 2560]). SD version may be different
weight's shape is different: lora_unet_output_blocks_4_0_out_layers_3.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_output_blocks_5_0_in_layers_2.lora_down.weight expected torch.Size([64, 1920, 3, 3]) found torch.Size([64, 1920]). SD version may be different
weight's shape is different: lora_unet_output_blocks_5_0_out_layers_3.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_output_blocks_5_2_conv.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_output_blocks_6_0_in_layers_2.lora_down.weight expected torch.Size([64, 1920, 3, 3]) found torch.Size([64, 1920]). SD version may be different
weight's shape is different: lora_unet_output_blocks_6_0_out_layers_3.lora_down.weight expected torch.Size([64, 640, 3, 3]) found torch.Size([64, 640]). SD version may be different
weight's shape is different: lora_unet_output_blocks_7_0_in_layers_2.lora_down.weight expected torch.Size([64, 1280, 3, 3]) found torch.Size([64, 1280]). SD version may be different
weight's shape is different: lora_unet_output_blocks_7_0_out_layers_3.lora_down.weight expected torch.Size([64, 640, 3, 3]) found torch.Size([64, 640]). SD version may be different
weight's shape is different: lora_unet_output_blocks_8_0_in_layers_2.lora_down.weight expected torch.Size([64, 960, 3, 3]) found torch.Size([64, 960]). SD version may be different
weight's shape is different: lora_unet_output_blocks_8_0_out_layers_3.lora_down.weight expected torch.Size([64, 640, 3, 3]) found torch.Size([64, 640]). SD version may be different
weight's shape is different: lora_unet_output_blocks_8_2_conv.lora_down.weight expected torch.Size([64, 640, 3, 3]) found torch.Size([64, 640]). SD version may be different
weight's shape is different: lora_unet_output_blocks_9_0_in_layers_2.lora_down.weight expected torch.Size([64, 960, 3, 3]) found torch.Size([64, 960]). SD version may be different
weight's shape is different: lora_unet_output_blocks_9_0_out_layers_3.lora_down.weight expected torch.Size([64, 320, 3, 3]) found torch.Size([64, 320]). SD version may be different
weight's shape is different: lora_unet_output_blocks_10_0_in_layers_2.lora_down.weight expected torch.Size([64, 640, 3, 3]) found torch.Size([64, 640]). SD version may be different
weight's shape is different: lora_unet_output_blocks_10_0_out_layers_3.lora_down.weight expected torch.Size([64, 320, 3, 3]) found torch.Size([64, 320]). SD version may be different
weight's shape is different: lora_unet_output_blocks_11_0_in_layers_2.lora_down.weight expected torch.Size([64, 640, 3, 3]) found torch.Size([64, 640]). SD version may be different
weight's shape is different: lora_unet_output_blocks_11_0_out_layers_3.lora_down.weight expected torch.Size([64, 320, 3, 3]) found torch.Size([64, 320]). SD version may be different
_IncompatibleKeys(missing_keys=['lora_unet_input_blocks_1_0_in_layers_2.lora_down.weight', 'lora_unet_input_blocks_1_0_out_layers_3.lora_down.weight', 'lora_unet_input_blocks_2_0_in_layers_2.lora_down.weight', 'lora_unet_input_blocks_2_0_out_layers_3.lora_down.weight', 'lora_unet_input_blocks_3_0_op.lora_down.weight', 'lora_unet_input_blocks_4_0_in_layers_2.lora_down.weight', 'lora_unet_input_blocks_4_0_out_layers_3.lora_down.weight', 'lora_unet_input_blocks_5_0_in_layers_2.lora_down.weight', 'lora_unet_input_blocks_5_0_out_layers_3.lora_down.weight', 'lora_unet_input_blocks_6_0_op.lora_down.weight', 'lora_unet_input_blocks_7_0_in_layers_2.lora_down.weight', 'lora_unet_input_blocks_7_0_out_layers_3.lora_down.weight', 'lora_unet_input_blocks_8_0_in_layers_2.lora_down.weight', 'lora_unet_input_blocks_8_0_out_layers_3.lora_down.weight', 'lora_unet_input_blocks_9_0_op.lora_down.weight', 'lora_unet_input_blocks_10_0_in_layers_2.lora_down.weight', 'lora_unet_input_blocks_10_0_out_layers_3.lora_down.weight', 'lora_unet_input_blocks_11_0_in_layers_2.lora_down.weight', 'lora_unet_input_blocks_11_0_out_layers_3.lora_down.weight', 'lora_unet_middle_block_0_in_layers_2.lora_down.weight', 'lora_unet_middle_block_0_out_layers_3.lora_down.weight', 'lora_unet_middle_block_2_in_layers_2.lora_down.weight', 'lora_unet_middle_block_2_out_layers_3.lora_down.weight', 'lora_unet_output_blocks_0_0_in_layers_2.lora_down.weight', 'lora_unet_output_blocks_0_0_out_layers_3.lora_down.weight', 'lora_unet_output_blocks_1_0_in_layers_2.lora_down.weight', 'lora_unet_output_blocks_1_0_out_layers_3.lora_down.weight', 'lora_unet_output_blocks_2_0_in_layers_2.lora_down.weight', 'lora_unet_output_blocks_2_0_out_layers_3.lora_down.weight', 'lora_unet_output_blocks_2_1_conv.lora_down.weight', 'lora_unet_output_blocks_3_0_in_layers_2.lora_down.weight', 'lora_unet_output_blocks_3_0_out_layers_3.lora_down.weight', 'lora_unet_output_blocks_4_0_in_layers_2.lora_down.weight', 'lora_unet_output_blocks_4_0_out_layers_3.lora_down.weight', 'lora_unet_output_blocks_5_0_in_layers_2.lora_down.weight', 'lora_unet_output_blocks_5_0_out_layers_3.lora_down.weight', 'lora_unet_output_blocks_5_2_conv.lora_down.weight', 'lora_unet_output_blocks_6_0_in_layers_2.lora_down.weight', 'lora_unet_output_blocks_6_0_out_layers_3.lora_down.weight', 'lora_unet_output_blocks_7_0_in_layers_2.lora_down.weight', 'lora_unet_output_blocks_7_0_out_layers_3.lora_down.weight', 'lora_unet_output_blocks_8_0_in_layers_2.lora_down.weight', 'lora_unet_output_blocks_8_0_out_layers_3.lora_down.weight', 'lora_unet_output_blocks_8_2_conv.lora_down.weight', 'lora_unet_output_blocks_9_0_in_layers_2.lora_down.weight', 'lora_unet_output_blocks_9_0_out_layers_3.lora_down.weight', 'lora_unet_output_blocks_10_0_in_layers_2.lora_down.weight', 'lora_unet_output_blocks_10_0_out_layers_3.lora_down.weight', 'lora_unet_output_blocks_11_0_in_layers_2.lora_down.weight', 'lora_unet_output_blocks_11_0_out_layers_3.lora_down.weight'], unexpected_keys=['lora_unet_input_blocks_1_0_in_layers_2.lora_mid.weight', 'lora_unet_input_blocks_1_0_out_layers_3.lora_mid.weight', 'lora_unet_input_blocks_2_0_in_layers_2.lora_mid.weight', 'lora_unet_input_blocks_2_0_out_layers_3.lora_mid.weight', 'lora_unet_input_blocks_3_0_op.lora_mid.weight', 'lora_unet_input_blocks_4_0_in_layers_2.lora_mid.weight', 'lora_unet_input_blocks_4_0_out_layers_3.lora_mid.weight', 'lora_unet_input_blocks_5_0_in_layers_2.lora_mid.weight', 'lora_unet_input_blocks_5_0_out_layers_3.lora_mid.weight', 
'lora_unet_input_blocks_6_0_op.lora_mid.weight', 'lora_unet_input_blocks_7_0_in_layers_2.lora_mid.weight', 'lora_unet_input_blocks_7_0_out_layers_3.lora_mid.weight', 'lora_unet_input_blocks_8_0_in_layers_2.lora_mid.weight', 'lora_unet_input_blocks_8_0_out_layers_3.lora_mid.weight', 'lora_unet_input_blocks_9_0_op.lora_mid.weight', 'lora_unet_input_blocks_10_0_in_layers_2.lora_mid.weight', 'lora_unet_input_blocks_10_0_out_layers_3.lora_mid.weight', 'lora_unet_input_blocks_11_0_in_layers_2.lora_mid.weight', 'lora_unet_input_blocks_11_0_out_layers_3.lora_mid.weight', 'lora_unet_middle_block_0_in_layers_2.lora_mid.weight', 'lora_unet_middle_block_0_out_layers_3.lora_mid.weight', 'lora_unet_middle_block_2_in_layers_2.lora_mid.weight', 'lora_unet_middle_block_2_out_layers_3.lora_mid.weight', 'lora_unet_output_blocks_0_0_in_layers_2.lora_mid.weight', 'lora_unet_output_blocks_0_0_out_layers_3.lora_mid.weight', 'lora_unet_output_blocks_1_0_in_layers_2.lora_mid.weight', 'lora_unet_output_blocks_1_0_out_layers_3.lora_mid.weight', 'lora_unet_output_blocks_2_0_in_layers_2.lora_mid.weight', 'lora_unet_output_blocks_2_0_out_layers_3.lora_mid.weight', 'lora_unet_output_blocks_2_1_conv.lora_mid.weight', 'lora_unet_output_blocks_3_0_in_layers_2.lora_mid.weight', 'lora_unet_output_blocks_3_0_out_layers_3.lora_mid.weight', 'lora_unet_output_blocks_4_0_in_layers_2.lora_mid.weight', 'lora_unet_output_blocks_4_0_out_layers_3.lora_mid.weight', 'lora_unet_output_blocks_5_0_in_layers_2.lora_mid.weight', 'lora_unet_output_blocks_5_0_out_layers_3.lora_mid.weight', 'lora_unet_output_blocks_5_2_conv.lora_mid.weight', 'lora_unet_output_blocks_6_0_in_layers_2.lora_mid.weight', 'lora_unet_output_blocks_6_0_out_layers_3.lora_mid.weight', 'lora_unet_output_blocks_7_0_in_layers_2.lora_mid.weight', 'lora_unet_output_blocks_7_0_out_layers_3.lora_mid.weight', 'lora_unet_output_blocks_8_0_in_layers_2.lora_mid.weight', 'lora_unet_output_blocks_8_0_out_layers_3.lora_mid.weight', 'lora_unet_output_blocks_8_2_conv.lora_mid.weight', 'lora_unet_output_blocks_9_0_in_layers_2.lora_mid.weight', 'lora_unet_output_blocks_9_0_out_layers_3.lora_mid.weight', 'lora_unet_output_blocks_10_0_in_layers_2.lora_mid.weight', 'lora_unet_output_blocks_10_0_out_layers_3.lora_mid.weight', 'lora_unet_output_blocks_11_0_in_layers_2.lora_mid.weight', 'lora_unet_output_blocks_11_0_out_layers_3.lora_mid.weight'])
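
These LoCons were trained with the CP/Tucker-decomposed conv option: the down weight is stored 2D ([dim, in_ch]) and the 3x3 kernel lives in the extra lora_mid.weight, which an older loader neither expects nor knows how to recombine, hence the shape complaints and the dropped keys; updating the extension (or webui) to a loader that handles lora_mid is the likely fix. A sketch of how such a conv delta is rebuilt, with illustrative dimensions matching the log:

    import torch

    dim, in_ch, out_ch = 64, 320, 320
    down = torch.randn(dim, in_ch)     # found torch.Size([64, 320]) in the log
    mid = torch.randn(dim, dim, 3, 3)  # the "unexpected" lora_mid.weight
    up = torch.randn(out_ch, dim)

    # delta_W[o, i, h, w] = sum over a, b of up[o, a] * mid[a, b, h, w] * down[b, i]
    delta_w = torch.einsum("oa,abhw,bi->oihw", up, mid, down)
    print(delta_w.shape)               # torch.Size([320, 320, 3, 3])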
