oedosoldier / sd-webui-image-sequence-toolkit
Extension for AUTOMATIC1111's WebUI
License: Apache License 2.0
When trying to process images with Enhanced img2img while the Loopback setting is enabled, the following error message is displayed:
RuntimeError: bad number of images passed: 2; expecting 1 or less
I expect the extension to be able to process multiple images even when the Loopback setting is enabled.
extension:
current main (f64a9a9)
stable diffusion web ui:
version: v1.3.0-RC-6-g20ae71fa
python: 3.10.6
torch: 2.0.0+cu118
xformers: N/A
gradio: 3.31.0
checkpoint: cc3a313202
Traceback (most recent call last):
File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\img2img.py", line 176, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\scripts.py", line 441, in run
processed = script.run(p, *script_args)
File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\extensions\sd-webui-image-sequence-toolkit\scripts\enhanced_img2img.py", line 587, in run
proc = process_images_with_size(
File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\extensions\sd-webui-image-sequence-toolkit\scripts\enhanced_img2img.py", line 580, in process_images_with_size
return process_images(p)
File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\processing.py", line 611, in process_images
res = process_images_inner(p)
File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\processing.py", line 671, in process_images_inner
p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\processing.py", line 1225, in init
raise RuntimeError(f"bad number of images passed: {len(imgs)}; expecting {self.batch_size} or less")
RuntimeError: bad number of images passed: 2; expecting 1 or less
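The traceback ends at the check in processing.py that raises when more init images are passed than the configured batch size allows. A minimal defensive sketch of a workaround (the helper name `clamp_init_images` is hypothetical; the real fix belongs in the extension's Loopback handling, which apparently passes two init images while batch_size is 1):

```python
# Hypothetical guard run before process_images(): trim init_images so their
# count never exceeds p.batch_size, mirroring the check that raises
# "bad number of images passed" in modules/processing.py.
def clamp_init_images(p):
    """Ensure len(p.init_images) <= p.batch_size before processing."""
    if p.init_images and len(p.init_images) > p.batch_size:
        p.init_images = p.init_images[: p.batch_size]
    return p
```

Alternatively, raising `p.batch_size` to match the number of images would also satisfy the check, at the cost of more VRAM per step.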
Hi, I am trying to reproduce the result in this video. I manually downloaded it and cropped the left side to use as input. I tried the multi-frame rendering technique with the same parameters as in the guidance video, but the result isn't as good as yours: the generated video is not consistent. Is there any post-processing or other method involved in the generation?
Thanks.
Here is my generated video:
https://user-images.githubusercontent.com/32273662/233239111-48651c56-7c1b-4284-8bc5-158b5450faf7.mp4
This extension (https://github.com/Scholar01/sd-webui-mov2mov.git) can take a .mov as input and output a .mov, without multi-frame rendering.
It just uses opencv-python and ffmpeg to extract frames from the video and synthesize a video from the frames.
Could you help to add these into the extension?
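The extract/reassemble step described above can be done with ffmpeg alone. A sketch that only builds the two commands (the helper names are hypothetical; running them requires an `ffmpeg` binary on PATH, and `%05d.png` matches the zero-padded naming used elsewhere in this thread):

```python
# Hypothetical helpers for a mov2mov-style pipeline: split a video into a
# numbered PNG sequence, then re-encode the processed frames into a video.
def extract_frames_cmd(video_path, out_dir, pattern="%05d.png"):
    """ffmpeg command that splits a video into numbered PNG frames."""
    return ["ffmpeg", "-i", video_path, f"{out_dir}/{pattern}"]

def assemble_video_cmd(in_dir, fps, out_path, pattern="%05d.png"):
    """ffmpeg command that re-encodes a PNG sequence into an H.264 video."""
    return ["ffmpeg", "-framerate", str(fps), "-i", f"{in_dir}/{pattern}",
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out_path]
```

These lists can be passed to `subprocess.run(cmd, check=True)` once the input paths exist.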
Something went wrong when I used the multi-frame render script on img2img.
Error Message:
Traceback (most recent call last):
File "E:\ai2\sd-webui-aki-v4\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "E:\ai2\sd-webui-aki-v4\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "E:\ai2\sd-webui-aki-v4\modules\img2img.py", line 170, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "E:\ai2\sd-webui-aki-v4\modules\scripts.py", line 407, in run
processed = script.run(p, *script_args)
File "E:\ai2\sd-webui-aki-v4\extensions\enhanced-img2img\scripts\multi_frame_rendering.py", line 450, in run
processed = processing.process_images(p)
File "E:\ai2\sd-webui-aki-v4\modules\processing.py", line 503, in process_images
res = process_images_inner(p)
File "E:\ai2\sd-webui-aki-v4\modules\processing.py", line 711, in process_images_inner
image_mask_composite = Image.composite(image.convert('RGBA').convert('RGBa'), Image.new('RGBa', image.size), p.mask_for_overlay.convert('L')).convert('RGBA')
File "E:\ai2\sd-webui-aki-v4\py310\lib\site-packages\PIL\Image.py", line 3341, in composite
image.paste(image1, None, mask)
File "E:\ai2\sd-webui-aki-v4\py310\lib\site-packages\PIL\Image.py", line 1731, in paste
self.im.paste(im, box, mask.im)
ValueError: images do not match
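`Image.composite` requires all three images to share the same size, so this `ValueError` suggests `p.mask_for_overlay` ended up a different size than the frame. A minimal sketch of a pre-check (assumes Pillow; `safe_composite` is a hypothetical wrapper, not the extension's actual code):

```python
# Hypothetical wrapper: align mask and overlay with the frame size before
# PIL's Image.composite, which raises "images do not match" on any mismatch.
from PIL import Image

def safe_composite(image, overlay, mask):
    if mask.size != image.size:
        mask = mask.resize(image.size)        # align mask with the frame
    if overlay.size != image.size:
        overlay = overlay.resize(image.size)  # align overlay too
    return Image.composite(overlay, image, mask.convert("L"))
```

In this particular crash the likelier root cause is a size mismatch between the inpaint mask and the multi-frame canvas, so resizing is a workaround rather than a real fix.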
How can I do this? Input 1: a directory of images to batch-process with img2img for a 2x upscale; input 2: another directory of images to use as ControlNet input for that process. The image names can be the same for the script.
Was trying to test the multi-frame rendering script, but I get this error. Does anyone know why this happens?
My settings
The whole error code:
Traceback (most recent call last):
File "/content/drive/MyDrive/sd22/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/drive/MyDrive/sd22/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/drive/MyDrive/sd22/stable-diffusion-webui/modules/img2img.py", line 170, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "/content/drive/MyDrive/sd22/stable-diffusion-webui/modules/scripts.py", line 407, in run
processed = script.run(p, *script_args)
File "/content/drive/MyDrive/sd22/stable-diffusion-webui/extensions/sd-webui-image-sequence-toolkit/scripts/multi_frame_rendering.py", line 254, in run
reference_imgs = sort_images(reference_imgs)
File "/content/drive/MyDrive/sd22/stable-diffusion-webui/extensions/sd-webui-image-sequence-toolkit/scripts/ei_utils.py", line 30, in sort_images
return sorted(lst, key=lambda x: int(re.search(pattern, x).group()))
File "/content/drive/MyDrive/sd22/stable-diffusion-webui/extensions/sd-webui-image-sequence-toolkit/scripts/ei_utils.py", line 30, in <lambda>
return sorted(lst, key=lambda x: int(re.search(pattern, x).group()))
AttributeError: 'NoneType' object has no attribute 'group'
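The lambda in ei_utils.py calls `.group()` directly on `re.search(...)`, which returns `None` for any filename with no digits (a stray non-numbered file in the input folder is enough to crash the sort). A tolerant sketch (`sort_images_safe` is a hypothetical name; it assumes, like the original, that the pattern extracts the frame number):

```python
import re

def sort_images_safe(lst, pattern=r"\d+"):
    """Sort file names by their first run of digits; names with no digits
    (the case that crashed with AttributeError) sort first, in stable order."""
    def key(name):
        m = re.search(pattern, name)
        return int(m.group()) if m else -1
    return sorted(lst, key=key)
```

Sorting numerically this way also fixes the lexicographic-order complaints reported below for sequences like 00000.png to 00584.png.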
When processing is interrupted partway through, allow resuming the task from the point of interruption, or selecting the frame position in the third column (i.e., the first frame) and processing the remaining images separately.
Although my PNG files are named sequentially from 00000.png to 00584.png, the processing is being done in a random order instead of in the correct sequence.
Anyway, thank you very much for your method of video m2m. It's great work. I had an idea of adapting the SD model, and I see that you've already done it.
I'm running WebUI from my server, so I turned the local IP access link on. In webui-user.bat I set
set COMMANDLINE_ARGS= --listen
After I open the local IP on another computer, finish all the settings, and hit Generate, it shows this error:
FileNotFoundError: [Errno 2] No such file or directory: 'E:\\stable-difusion\\extensions/enhanced-img2img/scripts/util.py'
On the computer I use locally, Stable Diffusion is not installed on the E:\ drive, but on my server it is. So the folder information it reads comes from the server's E: drive, yet for some reason it then tries to read that path from the E: drive of my local computer.
When using multi-frame rendering and attempting to read in a depth map using another image as ControlNet input, the following error occurs
Is this an error on the controlnet side?
Error running process: D:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\scripts.py", line 417, in process
script.process(p, *script_args)
File "D:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 687, in process
input_image = HWC3(np.asarray(p_input_image).astype(np.uint8))
File "D:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\util.py", line 6, in HWC3
assert x.dtype == np.uint8
AssertionError
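ControlNet's `HWC3` asserts that its input is already `np.uint8`, and the cast in the frame above suggests the depth map arrived as a float or other dtype. A small conversion guard sketch (hypothetical helper; assumes float inputs are normalized to [0, 1], which may not hold for every depth-map source):

```python
import numpy as np

def to_uint8_rgb(img_array):
    """Coerce an image array to uint8 before ControlNet's HWC3,
    which asserts x.dtype == np.uint8. Floats in [0, 1] are rescaled."""
    a = np.asarray(img_array)
    if a.dtype != np.uint8:
        if np.issubdtype(a.dtype, np.floating) and a.max() <= 1.0:
            a = a * 255.0                      # assume normalized float input
        a = np.clip(a, 0, 255).astype(np.uint8)
    return a
```

Whether the bug belongs to ControlNet or to how this extension hands over the image is hard to say from the trace alone; the failing value is ControlNet's `p_input_image`, so inspecting what the extension stores there would be the next step.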
When attempting to run the well-known V1111 fork of Automatic1111, I encountered an error with the deepdanbooru attribute in the enhanced_img2img.py module. setup.log displayed the following error message:
Module load: C:\AI\V1111\extensions\enhanced-img2img\scripts\enhanced_img2img.py: AttributeError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\AI\V1111\modules\script_loading.py:10 in load_module │
│ │
│ 9 │ try: │
│ ❱ 10 │ │ module_spec.loader.exec_module(module) │
│ 11 │ except Exception as e: │
│ in exec_module:883 │
│ in _call_with_frames_removed:241 │
│ │
│ C:\AI\V1111\extensions\enhanced-img2img\scripts\enhanced_img2img.py:22 in <module> │
│ │
│ 21 from modules.sd_hijack import model_hijack │
│ ❱ 22 if cmd_opts.deepdanbooru: │
│ 23 │ import modules.deepbooru as deepbooru │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'Namespace' object has no attribute 'deepdanbooru'
After I posted the issue on Vlad's GitHub repository, Vladmandic responded that the error was likely caused by the sd-webui-image-sequence-toolkit extension and advised updating it, since the "--deepdanbooru" command line flag it checks no longer does anything and has been removed as useless.
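Since the flag may simply be absent from `cmd_opts` on forks that removed it, a defensive read with `getattr` avoids the `AttributeError` (sketch only; `deepdanbooru_enabled` is a hypothetical helper standing in for the bare attribute access at line 22 of enhanced_img2img.py):

```python
# Hypothetical guard: read the possibly-removed --deepdanbooru flag without
# raising AttributeError on forks whose Namespace no longer defines it.
def deepdanbooru_enabled(cmd_opts):
    """True only when the flag exists on cmd_opts and is set."""
    return getattr(cmd_opts, "deepdanbooru", False)
```

The import block could then become `if deepdanbooru_enabled(cmd_opts): import modules.deepbooru as deepbooru`.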
Steps to Reproduce:
Additional Information:
24531398 Tue Apr 25 19:36:41 2023 -0400
Could you please investigate this issue further and provide a fix?
Thank you.
In the multi-frame rendering script, add an option to let the third-column image be a specific image, uploaded via Gradio.
That would make it easy to continue in the style of the previous run.
The error UnboundLocalError: local variable 'cn_images' referenced before assignment is raised.
Here the code checks both that cn_images is not None and that use_cn is true. This is unnecessary: judging from the code above, when use_cn is not true, cn_images is never assigned, so referencing it in the condition is exactly what raises the error.
Since the fix is trivial, I won't open a PR.
Full stack trace:
Will process following files: C:\Users\hbyBy\Desktop\输入\aaa.png
Processing: C:\Users\hbyBy\Desktop\输入\aaa.png
Error completing request
Arguments: ('task(a4p3ppt5wwnlzr4)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=512x512 at 0x1AD5046F7C0>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 10, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, False, 0, True, 384, 384, False, 2, True, True, False, False, 'C:\\Users\\hbyBy\\Desktop\\输入', 'C:\\Users\\hbyBy\\Desktop\\输出', 'C:\\Users\\hbyBy\\Desktop\\蒙版', False, True, False, False, False, 50, '0', False, '', False, False, False, '', False, 1 2 3
0 , False, 512, 512, 0.2, '', 'None', '', '', 1, 'FirstGen', False, False, 'Current', False, 1 2 3
0 , False, '', False, '', False, '') {}
Traceback (most recent call last):
File "D:\AI\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "D:\AI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\img2img.py", line 169, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "D:\AI\stable-diffusion-webui\modules\scripts.py", line 399, in run
processed = script.run(p, *script_args)
File "D:\AI\stable-diffusion-webui\extensions\enhanced-img2img\scripts\enhanced_img2img.py", line 492, in run
# if cn_images is not None and use_cn:
UnboundLocalError: local variable 'cn_images' referenced before assignment
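The fix the comment above describes can be sketched as: bind `cn_images` before the branch so the later condition can never hit an unbound local (the surrounding logic is simplified and `loader` is a hypothetical stand-in for the extension's real loading code):

```python
# Sketch of the reported fix: initialize cn_images unconditionally so the
# check "cn_images is not None and use_cn" can never raise UnboundLocalError.
def load_cn_images(use_cn, loader=None):
    cn_images = None                 # always bound, even when use_cn is False
    if use_cn and loader is not None:
        cn_images = loader()         # stand-in for the real ControlNet image loading
    return cn_images
```

Equivalently, since `cn_images` is only assigned when `use_cn` is true, the condition could be reduced to `if use_cn:` alone.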
First of all, the readme is clearly outdated, since the option no longer has the same name and Excel files are now supported.
Secondly, the explanations are insufficient. As users, we aren't warned that the first line is treated as a header. Are we supposed to expect the prompt to be chosen according to the file name's alphabetical rank?
I also have a suggestion: there could be an option for tags in the filenames.
Calling through the WebUI interface works fine.
Calling through the API reports a mask read failure, so ControlNet has no effect.
The API request parameters are as follows:
res = api.img2img(images=[input_img],inpainting_fill=1,mask_blur=4,mask_image=None,inpaint_full_res=0,inpaint_full_res_padding=32
,prompt=words,negative_prompt=data["negative_prompt"],seed=data["seed"],
cfg_scale=data["cfg_scale"],denoising_strength=data["denoising_strength"]
,restore_faces=True,width=w,height=h,steps=data["steps"],sampler_name=data["sampler_name"],batch_size=1,controlnet_units=[unit1,unit2]
,script_name="multi-frame rendering",script_args=[
'None',
input_path,
output_path,
'',
data["denoising_strength"],
"FirstGen",
True,
False,
"Current",
False,
'1 2 3 \r\n 0',
False,
"",
True,
'',
False,
'',
False,
*['','']
])
I compared these with the parameters sent by the web page and they matched; I then suspected the values inside the p object differed, but after comparing them they are also identical.
The error message is as follows:
image:
Error running process: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
Traceback (most recent call last):
File "/root/autodl-tmp/stable-diffusion-webui/modules/api/api.py", line 63, in decode_base64_to_image
image = Image.open(BytesIO(base64.b64decode(encoding)))
File "/root/miniconda3/lib/python3.10/site-packages/PIL/Image.py", line 3283, in open
raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7f38587d7f10>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/autodl-tmp/stable-diffusion-webui/modules/scripts.py", line 417, in process
script.process(p, *script_args)
File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 941, in process
image = image_dict_from_any(unit.image)
File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 144, in image_dict_from_any
image['mask'] = external_code.to_base64_nparray(image['mask'])
File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/external_code.py", line 111, in to_base64_nparray
return np.array(api.decode_base64_to_image(encoding)).astype('uint8')
File "/root/autodl-tmp/stable-diffusion-webui/modules/api/api.py", line 66, in decode_base64_to_image
raise HTTPException(status_code=500, detail="Invalid encoded image")
fastapi.exceptions.HTTPException
Where should I start to troubleshoot this?
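Judging from the trace, the server tries to base64-decode a mask entry that is `None` or empty (the `mask_image=None` passed above), and `decode_base64_to_image` fails. One place to start is stripping empty mask fields from the request before sending it. A sketch (`strip_empty_masks` is a hypothetical client-side helper, assuming the ControlNet units are plain dicts in the JSON payload):

```python
# Hypothetical payload cleanup: drop None/empty "mask" entries from ControlNet
# unit dicts so the server never attempts to base64-decode them, which is what
# raised UnidentifiedImageError -> HTTPException in the trace above.
def strip_empty_masks(units):
    cleaned = []
    for unit in units:
        unit = dict(unit)                 # copy so the caller's dicts are untouched
        if not unit.get("mask"):          # None or "" cannot be decoded as an image
            unit.pop("mask", None)
        cleaned.append(unit)
    return cleaned
```

If the wrapper library builds the unit dicts itself, the equivalent change is to leave the mask argument out entirely rather than passing `None`.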
Error completing request
Arguments: ('task(q82jkxax1hxqc96)', 0, 'a medieval man standing in the medieval forest with beautiful sunlight and shadow and a lot of apple trees, detailed , hyper realistic ', '(extra fingers, deformed hands, polydactyl:1.3), ugly, (worst quality, low quality, poor quality, bad quality, muted colors:1.35), artist logo, signature , EasyNegativeV2 , bad-hands-5', [], <PIL.Image.Image image mode=RGBA size=1920x1080 at 0x794AA3D3A560>, None, None, None, None, None, None, 40, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, 6969.0, -1.0, 0, 0, 0, False, 720, 1280, 0, 0, 32, 0, '', '', '', [], 12, False, True, False, 0, -1, False, '', 0, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x794aa3d3a110>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x794aa3d3aaa0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x794aa3d38df0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x794aa3d39270>, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, '
CFG Scale
should be 2 or lower.Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8
', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '', 'Will upscale the image by the selected scale factor; use width and height sliders to set tile size
', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, None, None, False, None, None, False, None, None, False, None, None, False, 50, '', '', '', False, False, False, False, False, 50, '0', False, '', False, False, False, '', False, 1 ... 3[1 rows x 3 columns], False, 512, 512, 0.2, False, '', False, '', '', '', '', '', '', 'CLIP', '/content/drive/MyDrive/Art/3D', '/content/drive/MyDrive/Art/SD', '', 0.95, 'FirstGen', False, False, 'Current', False, 1 ... 3
0 ...
[1 rows x 3 columns], False, '', False, '', False, '', False, '', '', '', '') {}
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/stable-diffusion-webui/modules/img2img.py", line 170, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "/content/stable-diffusion-webui/modules/scripts.py", line 407, in run
processed = script.run(p, *script_args)
File "/content/stable-diffusion-webui/extensions/sd-webui-image-sequence-toolkit/scripts/multi_frame_rendering.py", line 292, in run
initial_img = reference_imgs[0] # p.init_images[0]
IndexError: list index out of range
Is this using multi frame render?