
Comments (20)

KohakuBlueleaf commented on July 23, 2024

I know which part goes wrong, but I can't find why yet.
I need more time.

from a1111-sd-webui-locon.

KohakuBlueleaf commented on July 23, 2024

@jimlin1668478052 @Stellar-Y @xuxu116 I pushed a fix for this.
Please check it!

from a1111-sd-webui-locon.

KohakuBlueleaf commented on July 23, 2024

I'm investigating.
Thanks to A1111, this problem is very hard to reproduce.

from a1111-sd-webui-locon.

KohakuBlueleaf commented on July 23, 2024

@jimlin1668478052 @Stellar-Y @xuxu116 I pushed a fix for this. Please check it!

Sorry, this bug may not be due solely to your extension; it seems to be caused by multiple bugs in sd-webui and the LoRA Block Weight extension. I have already rolled back the version. :)

Lots of other LoRA extensions don't support the new built-in LoRA system XD

from a1111-sd-webui-locon.

KohakuBlueleaf commented on July 23, 2024

@jimlin1668478052 My fault.
I've already pushed a fix for this.

from a1111-sd-webui-locon.

jimlin1668478052 commented on July 23, 2024

Now LoCon can be used, but when I use a LoHa there is another error:

Arguments: ('task(skdbm3pokfqf70v)', '1girls, intricate details, masterpiece, best quality, original, dynamic posture, dynamic angle\n\nlora:chevalGrandUmamusume_loha:0.7, cheval grand \(umamusume\), causal, denim jacket, white shirt, black pants', '(easynegative:0.8), solo', [], 35, 7, False, False, 1, 1, 7.5, -1.0, -1.0, 0, 0, 0, False, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, False, True, True, False, 2048, 128, False, '', 0, False, 8, 100, 'Constant', 0, 'Constant', 0, 4, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.external_code.ControlNetUnit object at 0x7fd9c822ad00>, <scripts.external_code.ControlNetUnit object at 0x7fd9c822a5e0>, <scripts.external_code.ControlNetUnit object at 0x7fd9c822af70>, False, 'white, black, gray, green', 0.5, True, False, '', 'Lerp', False, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 
'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 'black', '20', False, False, False, 3, 0, False, False, 0, False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 35, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 486, in process_images
res = process_images_inner(p)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 636, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 852, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 351, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 227, in launch_sampling
return func()
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 351, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 119, in forward
x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
h = module(h, emb, context)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 82, in forward
x = layer(x, emb)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 249, in forward
return checkpoint(
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/util.py", line 129, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 262, in _forward
h = self.in_layers(x)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/container.py", line 204, in forward
input = module(input)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/../../../extensions-builtin/Lora/lora.py", line 317, in lora_Conv2d_forward
lora_apply_weights(self)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/../../../extensions-builtin/Lora/lora.py", line 273, in lora_apply_weights
self.weight += lora_calc_updown(lora, module, self.weight)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/main.py", line 562, in lora_calc_updown
updown = rebuild_weight(module, target)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/a1111-sd-webui-locon/scripts/main.py", line 556, in rebuild_weight
updown = updown.reshape(output_shape)
RuntimeError: shape '[4, 320, 3, 3]' is invalid for input of size 921600

from a1111-sd-webui-locon.
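The failure at the bottom of that traceback is a size-bookkeeping mismatch: 921600 elements is exactly a 320×320×3×3 convolution weight, while the `output_shape` being applied, [4, 320, 3, 3], holds only 11520 elements, so the `reshape` cannot succeed. A minimal sketch of the same failure (using numpy, not the extension's actual code; torch raises `RuntimeError` for the equivalent mismatch):

```python
import numpy as np

# The rebuilt LoHa update has 921600 elements: exactly the size of a
# (320, 320, 3, 3) conv weight, since 320 * 320 * 3 * 3 == 921600.
updown = np.zeros(320 * 320 * 3 * 3)

# The recorded output_shape [4, 320, 3, 3] holds only 4 * 320 * 3 * 3
# == 11520 elements, so reshaping to it must fail.
try:
    updown.reshape(4, 320, 3, 3)
except ValueError as e:
    print(e)  # cannot reshape array of size 921600 into shape (4,320,3,3)
```

In other words, the update appears to have been rebuilt for one conv module but reshaped against another module's shape, which would be consistent with only some LoHa/LyCORIS files triggering the error.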

Stellar-Y commented on July 23, 2024

LoCon works normally, but I hit the same problem when using a LoHa.

from a1111-sd-webui-locon.

KohakuBlueleaf commented on July 23, 2024

Can you give me the LoHa you are using?
I cannot reproduce this error.

from a1111-sd-webui-locon.

jimlin1668478052 commented on July 23, 2024

https://civitai.com/models/21874/cheval-grand-umamusume
https://civitai.com/models/21347/yaeno-muteki-umamusume

P.S. When I launched the WebUI, the message below appeared:

"Additional Network extension not installed, Only hijack built-in lora
LoCon Extension hijack built-in lora successfully
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[AddNet] Updating model hashes...
0it [00:00, ?it/s]"

But it affected neither the running of the WebUI nor my previous use of any LoRA or LyCORIS, so I did not worry about it at the time. I do not know whether it affects my use of LoHAs now.

from a1111-sd-webui-locon.

ate214 commented on July 23, 2024

Same here. It's not just the Umamusume LoCons made by mht; some LyCORIS models by other creators also crash.
For example: https://civitai.com/models/23927/cucouroux-granblue-fantasy-or-lycoris-loha

And I got exactly the same runtime error.

from a1111-sd-webui-locon.

315deg commented on July 23, 2024

I've encountered the same error with the hako-mikan/sd-webui-lora-block-weight extension. There may be a partial workaround, but we will need to wait for sd-webui-lora-block-weight to be updated to support the webui update from a few days ago.

from a1111-sd-webui-locon.

FOBobiko commented on July 23, 2024

I can't speak English, so I am using Google Translate.
I'll leave a comment as it might be of some help.

I can also generate using 'cheval-grand-umamusume' and 'cucouroux-granblue-fantasy-or-lycoris-loha'.
(I haven't used 'yaeno-muteki-umamusume' yet, so I don't know whether it works.)

"Additional Network extension not installed, Only hijack built-in lora
LoCon Extension hijack built-in lora successfully
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[AddNet] Updating model hashes...
0it [00:00, ?it/s]"
I'm also getting this message on boot, so I think the message itself is harmless.

Is it possible that an extension is causing a conflict...?

from a1111-sd-webui-locon.

BootsofLagrangian commented on July 23, 2024

For now, some of the webui code is incompatible with the locon extension.

As a temporary workaround, I downgraded sd-webui to an older version.

The webui commit I checked out is a9eab236d7e8afa4d6205127904a385b2c43bb24.

To apply this, open PowerShell or cmd in your sd-webui folder and paste this:

git checkout a9eab236d7e8afa4d6205127904a385b2c43bb24

That will do it.

from a1111-sd-webui-locon.
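The downgrade above is just pinning the working tree to an older commit by hash (checking the default branch back out and pulling undoes the pin). The same pattern, demonstrated in a throwaway repo so nothing touches a real webui install (the hash variable stands in for a9eab236d7e8afa4d6205127904a385b2c43bb24):

```shell
# Throwaway demo of pinning a working tree to a specific commit by hash.
set -e
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=main init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "older, known-good state"
old=$(git rev-parse HEAD)          # stands in for a9eab23...
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "newer, incompatible state"
git checkout -q "$old"             # detached HEAD at the pinned commit
echo "now at $(git rev-parse --short HEAD)"
```

Note that this leaves the repo in a detached-HEAD state, which is fine for running an older version; it just means no branch moves until you check one out again.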

mbastias commented on July 23, 2024

I just made a LoHa and it works in my environment, but it doesn't work in my wife's; it's the same error. If it is any help, I made it with the Kohya_ss GUI with the LyCORIS/LoHa option.

from a1111-sd-webui-locon.

xuxu116 commented on July 23, 2024

Now LoCon can be used, but when I use a LoHa there is another error:

[Arguments and traceback identical to those quoted earlier in the thread, ending in: RuntimeError: shape '[4, 320, 3, 3]' is invalid for input of size 921600]

Exactly the same problem. A Yog-Sothoth LoHa can be loaded successfully, but a model trained with LyCORIS with dim32con4/dim8con4 fails with the same shape error.

from a1111-sd-webui-locon.

tenabraex commented on July 23, 2024

@jimlin1668478052 @Stellar-Y @xuxu116 I pushed a fix for this. Please check it!

Cheers, I've tested against a couple I had that weren't working, and both work now.

from a1111-sd-webui-locon.

Soulcrzkc commented on July 23, 2024

@jimlin1668478052 @Stellar-Y @xuxu116 I pushed a fix for this. Please check it!

I still encounter this error. It mainly occurs when I use 3 or more LoRAs and then modify the LoRA settings. My SD WebUI and locon are both up to date with the latest versions.
It happens especially when I use <lora:animeScreenshotLikeStyleMixLora_v10:1:BACKGROUND> <lora:darkAndLight_v10:1:BACKGROUND> <lora:shinku_v3.0:1:CHARACTER>; all of them can be downloaded from Civitai.

from a1111-sd-webui-locon.

Soulcrzkc commented on July 23, 2024

@jimlin1668478052 @Stellar-Y @xuxu116 I pushed a fix for this. Please check it!

Sorry, this bug may not be due solely to your extension; it seems to be caused by multiple bugs in sd-webui and the LoRA Block Weight extension. I have already rolled back the version. :)

from a1111-sd-webui-locon.

Raiden-Coder commented on July 23, 2024

The extension is not working at all for me, not even slightly. I keep getting an error no matter what kind of LoHa I use. The LoHas I have tested work through the conventional method LoRA:namexample:1, but they do not work at all through the extension. I have attached a picture of the extensions I am using; they are all up to date as well.
[screenshot: image_2023-03-29_230724036]

It is a fairly small error, so maybe the fix is easy. Here is the error:

LoRA weight_unet: 0.8, weight_tenc: 0.8, model: NIKKEsinNIKKELohaLycoris_v10(4ac77864bfe4)
dimension: None, alpha: 1.0, multiplier_unet: 0.8, multiplier_tenc: 0.8
The selected model is not LoRA or not trained by sd-scripts?
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
Error running process_batch: F:\Pictures\Personal\NovelAIFolders\1\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\additional_networks.py
Traceback (most recent call last):
File "F:\Pictures\Personal\NovelAIFolders\1\stable-diffusion-webui\modules\scripts.py", line 395, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "F:\Pictures\Personal\NovelAIFolders\1\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\additional_networks.py", line 243, in process_batch
network, info = lora_compvis.create_network_and_apply_compvis(
File "F:\Pictures\Personal\NovelAIFolders\1\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 102, in create_network_and_apply_compvis
File "F:\Pictures\Personal\NovelAIFolders\1\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 289, in apply_lora_modules
)
File "F:\Pictures\Personal\NovelAIFolders\1\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 193, in convert_state_dict_name_to_compvis
m = re.search(r"down_blocks(\d+)attentions(\d+)_(.+)", du_name)
File "F:\Pictures\Personal\NovelAIFolders\1\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 182, in convert_diffusers_name_to_compvis
AssertionError: conversion failed: lora_unet_down_blocks_0_downsamplers_0_conv. the model may not be trained by sd-scripts.

One of the many LoHas that don't work https://civitai.com/models/23266/sin-or-nikke-or-loha-lycoris

Edit: I am sorry, it is working now after the latest webui git pull.

from a1111-sd-webui-locon.

KohakuBlueleaf commented on July 23, 2024

@Raiden-Coder your bug is caused by the addnet extension, not mine, and LoHa is not supported by addnet.
If you want to use addnet rather than the built-in LoRA system,
you should submit your issue to addnet.

from a1111-sd-webui-locon.
