Extension for AUTOMATIC1111's WebUI

License: Apache License 2.0


sd-webui-image-sequence-toolkit's Introduction

Image Sequence Toolkit

This is an extension for AUTOMATIC1111's WebUI that supports batch processing and improved inpainting.

Install

Please refer to the official wiki for installation instructions.

Usage

Enhanced img2img

To use Enhanced img2img, switch to the "img2img" tab and select "enhanced img2img" under the "script" column.

  • Input directory: The folder that contains all the images you want to process.
  • Output directory: The folder where you want to save the output images.
  • Mask directory: The folder containing all the masks. Optional.
  • Use input image's alpha channel as mask: If your original images are in PNG format with transparent backgrounds, you can use this option to create outputs with transparent backgrounds. Note: when this option is selected, the masks in the "mask directory" will not be used.
  • Use another image as mask: Use masks in the "mask directory" to inpaint images. Note: if the relevant masks are blank images or no mask is provided, the original images will not be processed.
  • Use mask as output alpha channel: Add the mask as an output alpha channel. Note: when the "use input image's alpha channel as mask" option is selected, this option is automatically activated.
  • Zoom in masked area: crop and resize the masked area to square images; this will give better results when the masked area is relatively small compared to the original images.
  • Alpha threshold: The alpha value to determine background and foreground.
  • Rotate images (clockwise): This can improve AI's performance when the original images are upside down.
  • Process given file(s) under the input folder, separated by comma: Process only the file(s) listed in the text box to the right of it. If this option is not checked, all the images under the folder will be processed.
  • Files to process: Filenames of the images you want to process. It is recommended to name your images with a digit suffix (e.g. 000233.png, 000234.png, 000235.png, ... or image_233.jpg, image_234.jpg, image_235.jpg, ...). This way, you can use 233,234,235 or simply 233-235 to assign these files. Otherwise, you need to give the full filenames like image_a.webp,image_b.webp,image_c.webp.
  • Use deepbooru prompt: Use DeepDanbooru to predict image tags. If you have entered prompts in the prompt area, the predicted tags will be appended to the end of them.
  • Using contextual information: This may improve accuracy by considering tags that appear in both the current and next frames' prediction results.
  • Loopback: Similar to the loopback script, this runs img2img on each input image twice to enhance the AI's creativity.
  • Firstpass width and firstpass height: AI tends to be more creative when the firstpass size is smaller.
  • Denoising strength: The denoising strength for the first pass. It's better to keep it no higher than 0.4.
  • Read tags from text files: This will read tags from text files with the same filename as the current input image.
  • Text files directory: Optional. It will load from the input directory if not specified.
  • Use csv prompt list and input file path: Use a .csv file as prompts for each image. One line for one image.
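The comma/range syntax described under "Files to process" can be sketched as follows. This is a minimal illustration, not the extension's actual parser; the helper name `parse_file_spec` is hypothetical, and ranges are assumed to expand inclusively:

```python
def parse_file_spec(spec: str) -> list:
    """Expand a spec like "233,234,235" or "233-235" into a
    sorted list of frame numbers (ranges are inclusive)."""
    frames = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            start, end = part.split("-", 1)
            frames.update(range(int(start), int(end) + 1))
        elif part:
            frames.add(int(part))
    return sorted(frames)
```

The resulting frame numbers would then be matched against the digit suffix of each filename (e.g. `000233.png` or `image_233.jpg`).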

Multi-frame rendering

To use Multi-frame rendering, switch to the "img2img" tab and select "multi-frame rendering" under the "script" column. This should be used with ControlNet. For more information, see the original post.

  • Input directory: The folder that contains all the images you want to process.
  • Output directory: The folder where you want to save the output images.
  • Initial denoise strength: The denoising strength of the first frame. The first frame and the remaining frames can be set separately; the denoising strength of the remaining frames is controlled through the main img2img interface.
  • Append interrogated prompt at each iteration: Use CLIP or DeepDanbooru to predict image tags. If you have input some prompts in the prompt area, it will append to the end of the prompts.
  • Third column (reference) image: The image placed in the third column.
    • None: use only two images, the previous frame and the current frame, without a third reference image.
    • FirstGen: Use the processed first frame as the reference image.
    • OriginalImg: Use the original first frame as the reference image.
    • Historical: Use the second-to-last frame before the current frame as the reference image.
  • Enable color correction: Use color correction based on the loopback image. When using a non-FirstGen reference image, turn this on to reduce color fading.
  • Unfreeze Seed: When checked, the base seed value is automatically incremented by 1 each time an image is generated.
  • Loopback Source: The images in the second column.
    • Previous: Generates the frame from the previous generated frame.
    • Current: Generates the frame from the current frame.
    • First: Generates the frame from the first generated frame.
  • Read tags from text files: This will read tags from text files with the same filename as the current input image.
  • Text files directory: Optional. It will load from the input directory if not specified.
  • Use csv prompt list and input file path: Use a .csv file as prompts for each image. One line for one image.
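The Loopback Source options above can be summarized with a small sketch; the function `pick_loopback_source` is hypothetical and simplifies the script's real logic, assuming the first frame always starts from its original input:

```python
def pick_loopback_source(source: str, frame_idx: int,
                         generated: list, originals: list):
    """Return the second-column image for frame `frame_idx`,
    mirroring the Loopback Source options above."""
    if frame_idx == 0 or source == "Current":
        return originals[frame_idx]   # the current input frame
    if source == "First":
        return generated[0]           # the first generated frame
    return generated[frame_idx - 1]   # "Previous" (the default)
```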

Tutorial video (in Chinese)

Credit

AUTOMATIC1111's WebUI - https://github.com/AUTOMATIC1111/stable-diffusion-webui

Multi-frame Rendering - https://xanthius.itch.io/multi-frame-rendering-for-stablediffusion

sd-webui-image-sequence-toolkit's People

Contributors

erjanmx, jtara1, oedosoldier, taoismdeeplake


sd-webui-image-sequence-toolkit's Issues

Getting IndexError: list index out of range

Error completing request
Arguments: ('task(q82jkxax1hxqc96)', 0, 'a medieval man standing in the medieval forest with beautiful sunlight and shadow and a lot of apple trees, detailed , hyper realistic ', '(extra fingers, deformed hands, polydactyl:1.3), ugly, (worst quality, low quality, poor quality, bad quality, muted colors:1.35), artist logo, signature , EasyNegativeV2 , bad-hands-5', [], <PIL.Image.Image image mode=RGBA size=1920x1080 at 0x794AA3D3A560>, None, None, None, None, None, None, 40, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, 6969.0, -1.0, 0, 0, 0, False, 720, 1280, 0, 0, 32, 0, '', '', '', [], 12, False, True, False, 0, -1, False, '', 0, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x794aa3d3a110>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x794aa3d3aaa0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x794aa3d38df0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x794aa3d39270>, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, '

    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, None, None, False, None, None, False, None, None, False, None, None, False, 50, '', '', '', False, False, False, False, False, 50, '0', False, '', False, False, False, '', False, 1 ... 3
0 ...

[1 rows x 3 columns], False, 512, 512, 0.2, False, '', False, '', '', '', '', '', '', 'CLIP', '/content/drive/MyDrive/Art/3D', '/content/drive/MyDrive/Art/SD', '', 0.95, 'FirstGen', False, False, 'Current', False, 1 ... 3
0 ...

[1 rows x 3 columns], False, '', False, '', False, '', False, '', '', '', '') {}
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/stable-diffusion-webui/modules/img2img.py", line 170, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "/content/stable-diffusion-webui/modules/scripts.py", line 407, in run
processed = script.run(p, *script_args)
File "/content/stable-diffusion-webui/extensions/sd-webui-image-sequence-toolkit/scripts/multi_frame_rendering.py", line 292, in run
initial_img = reference_imgs[0] # p.init_images[0]
IndexError: list index out of range
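The traceback shows `reference_imgs` was empty when indexed, which typically means no images were found where the script expected them. A hedged sketch of a pre-flight check that fails with a clearer message (the helper `collect_reference_imgs` is hypothetical, not the extension's code):

```python
import os

def collect_reference_imgs(input_dir: str) -> list:
    """Gather image paths up front and fail with a clear message
    instead of an IndexError deep inside the script."""
    exts = {".png", ".jpg", ".jpeg", ".webp"}
    imgs = sorted(
        os.path.join(input_dir, name)
        for name in os.listdir(input_dir)
        if os.path.splitext(name)[1].lower() in exts
    )
    if not imgs:
        raise ValueError(f"no input images found in {input_dir!r}")
    return imgs
```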

Error when calling the multi-frame rendering script via the API

Calling it through the WebUI interface works fine.
When calling it through the API, it reports a mask read failure, which makes ControlNet have no effect.

The API request parameters are as follows:
```
res = api.img2img(images=[input_img], inpainting_fill=1, mask_blur=4, mask_image=None,
                  inpaint_full_res=0, inpaint_full_res_padding=32,
                  prompt=words, negative_prompt=data["negative_prompt"], seed=data["seed"],
                  cfg_scale=data["cfg_scale"], denoising_strength=data["denoising_strength"],
                  restore_faces=True, width=w, height=h, steps=data["steps"],
                  sampler_name=data["sampler_name"], batch_size=1,
                  controlnet_units=[unit1, unit2],
                  script_name="multi-frame rendering",
                  script_args=[
                      'None',
                      input_path,
                      output_path,
                      '',
                      data["denoising_strength"],
                      "FirstGen",
                      True,
                      False,
                      "Current",
                      False,
                      '1 2 3 \r\n 0',
                      False,
                      "",
                      True,
                      '',
                      False,
                      '',
                      False,
                      *['', '']
                  ])
```

I compared the parameters sent from the web page and they are identical. I suspected the values inside `p` differed, but after comparing those as well, they also match.
The error message is as follows:

Error running process: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
Traceback (most recent call last):
File "/root/autodl-tmp/stable-diffusion-webui/modules/api/api.py", line 63, in decode_base64_to_image
image = Image.open(BytesIO(base64.b64decode(encoding)))
File "/root/miniconda3/lib/python3.10/site-packages/PIL/Image.py", line 3283, in open
raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7f38587d7f10>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/autodl-tmp/stable-diffusion-webui/modules/scripts.py", line 417, in process
script.process(p, *script_args)
File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 941, in process
image = image_dict_from_any(unit.image)
File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 144, in image_dict_from_any
image['mask'] = external_code.to_base64_nparray(image['mask'])
File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/external_code.py", line 111, in to_base64_nparray
return np.array(api.decode_base64_to_image(encoding)).astype('uint8')
File "/root/autodl-tmp/stable-diffusion-webui/modules/api/api.py", line 66, in decode_base64_to_image
raise HTTPException(status_code=500, detail="Invalid encoded image")
fastapi.exceptions.HTTPException

Where should I start to debug this?

Enhanced img2img missing script?

Sorry, maybe I did something wrong? I installed the extension and I can see the multi-frame script, but I don't have the enhanced img2img option like in the example image.
Screenshot_1

enhanced_img2img: error when "Use another image as ControlNet input" is not checked

It raises UnboundLocalError: local variable 'cn_images' referenced before assignment.
The code checks that cn_images is not None before checking use_cn, which is unnecessary: from the code above, when use_cn is not true, cn_images is always None (in fact, unbound).
Since the fix is trivial, I won't open a PR.

Full stack trace:

Will process following files: C:\Users\hbyBy\Desktop\输入\aaa.png
Processing: C:\Users\hbyBy\Desktop\输入\aaa.png
Error completing request
Arguments: ('task(a4p3ppt5wwnlzr4)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=512x512 at 0x1AD5046F7C0>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', '', [], 10, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, False, 0, True, 384, 384, False, 2, True, True, False, False, 'C:\\Users\\hbyBy\\Desktop\\输入', 'C:\\Users\\hbyBy\\Desktop\\输出', 'C:\\Users\\hbyBy\\Desktop\\蒙版', False, True, False, False, False, 50, '0', False, '', False, False, False, '', False,   1 2 3
0      , False, 512, 512, 0.2, '', 'None', '', '', 1, 'FirstGen', False, False, 'Current', False,   1 2 3
0      , False, '', False, '', False, '') {}
Traceback (most recent call last):
  File "D:\AI\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\AI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\AI\stable-diffusion-webui\modules\img2img.py", line 169, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "D:\AI\stable-diffusion-webui\modules\scripts.py", line 399, in run
    processed = script.run(p, *script_args)
  File "D:\AI\stable-diffusion-webui\extensions\enhanced-img2img\scripts\enhanced_img2img.py", line 492, in run
    # if cn_images is not None and use_cn:
UnboundLocalError: local variable 'cn_images' referenced before assignment
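The crash happens because `cn_images` is assigned only inside the `use_cn` branch, so the later check touches an unbound local when the option is off. A minimal sketch of the fixed pattern, with the surrounding logic simplified and `load_images` standing in for the real file scan (both names are illustrative, not the extension's):

```python
def run_step(use_cn: bool, load_images=lambda: []):
    """cn_images was previously assigned only inside the use_cn
    branch, so the later check raised UnboundLocalError when the
    option was off. Binding it up front (and testing use_cn first)
    avoids the crash."""
    cn_images = None                      # always bound
    if use_cn:
        cn_images = load_images()
    if use_cn and cn_images is not None:  # safe in either order now
        return f"processed with {len(cn_images)} ControlNet image(s)"
    return "processed without ControlNet"
```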

Cannot reproduce bilibili result

Hi, I am trying to reproduce the result in this video, I manually downloaded it and cropped the left side as input. I tried the multi-frame rendering technique and used the same parameters in the guidance video, but the result isn't as good as yours. The generated video is not consistent. Is there any post-processing or any other methods in the generation?

Thanks.

Here is my generated video:
https://user-images.githubusercontent.com/32273662/233239111-48651c56-7c1b-4284-8bc5-158b5450faf7.mp4

Error with deepdanbooru attribute in enhanced_img2img.py

Issue Description

When attempting to run the V1111 (vladmandic) fork of Automatic1111, I encountered an error with the deepdanbooru attribute in the enhanced_img2img.py module. The setup.log displayed the following error message:

Module load: C:\AI\V1111\extensions\enhanced-img2img\scripts\enhanced_img2img.py: AttributeError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\AI\V1111\modules\script_loading.py:10 in load_module                                                              │
│                                                                                                                      │
│    9 │   try:                                                                                                        │
│ ❱ 10 │   │   module_spec.loader.exec_module(module)                                                                  │
│   11 │   except Exception as e:                                                                                      │
│ in exec_module:883                                                                                                   │
│ in _call_with_frames_removed:241                                                                                     │
│                                                                                                                      │
│ C:\AI\V1111\extensions\enhanced-img2img\scripts\enhanced_img2img.py:22 in <module>                                   │
│                                                                                                                      │
│    21 from modules.sd_hijack import model_hijack                                                                     │
│ ❱  22 if cmd_opts.deepdanbooru:                                                                                      │
│    23 │   import modules.deepbooru as deepbooru                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'Namespace' object has no attribute 'deepdanbooru'

After posting my issue on Vlad's GitHub repository, vladmandic responded that the cause of the error was likely the sd-webui-image-sequence-toolkit extension. He advised updating the extension, since it referenced a command line flag "--deepdanbooru" that no longer had any function and had been removed as useless.
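A defensive attribute lookup would sidestep the crash on forks that dropped the flag; a minimal sketch of the pattern (the real check lives at line 22 of enhanced_img2img.py, shown in the traceback above; the namespace here is a stand-in):

```python
import argparse

# A cmd_opts namespace from a fork where --deepdanbooru was removed.
cmd_opts = argparse.Namespace()

# `if cmd_opts.deepdanbooru:` raises AttributeError here; a
# defensive lookup degrades gracefully instead.
use_deepdanbooru = getattr(cmd_opts, "deepdanbooru", False)
if use_deepdanbooru:
    pass  # would `import modules.deepbooru as deepbooru`
```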

Steps to Reproduce:

  1. Clone V1111 repository.
  2. Install the sd-webui-image-sequence-toolkit extension.
  3. Run the webui.bat file to start the V1111 application.

Additional Information:

  • The extension works fine with Automatic1111 version.

Version Platform Description

  • Python 3.10.9 on Windows 11
  • vlad stable diffusion Version: 24531398 Tue Apr 25 19:36:41 2023 -0400
  • sd-webui-image-sequence-toolkit version: latest

Could you please investigate this issue further and provide a fix?
Thank you.

ValueError: images do not match

Something went wrong when I used the multi-frame render script on img2img.
Error Message:
Traceback (most recent call last):
File "E:\ai2\sd-webui-aki-v4\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "E:\ai2\sd-webui-aki-v4\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "E:\ai2\sd-webui-aki-v4\modules\img2img.py", line 170, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "E:\ai2\sd-webui-aki-v4\modules\scripts.py", line 407, in run
processed = script.run(p, *script_args)
File "E:\ai2\sd-webui-aki-v4\extensions\enhanced-img2img\scripts\multi_frame_rendering.py", line 450, in run
processed = processing.process_images(p)
File "E:\ai2\sd-webui-aki-v4\modules\processing.py", line 503, in process_images
res = process_images_inner(p)
File "E:\ai2\sd-webui-aki-v4\modules\processing.py", line 711, in process_images_inner
image_mask_composite = Image.composite(image.convert('RGBA').convert('RGBa'), Image.new('RGBa', image.size), p.mask_for_overlay.convert('L')).convert('RGBA')
File "E:\ai2\sd-webui-aki-v4\py310\lib\site-packages\PIL\Image.py", line 3341, in composite
image.paste(image1, None, mask)
File "E:\ai2\sd-webui-aki-v4\py310\lib\site-packages\PIL\Image.py", line 1731, in paste
self.im.paste(im, box, mask.im)
ValueError: images do not match
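`Image.composite` raises "images do not match" when the two images and the mask differ in size, as the `paste` call in the traceback shows. A hedged sketch of guarding against a mismatched mask with Pillow (the helper name is hypothetical; this is not the extension's code):

```python
from PIL import Image

def composite_with_matched_mask(image, overlay, mask):
    """Image.composite requires all three images to share one size;
    resize the mask to the base image's size before compositing."""
    if mask.size != image.size:
        mask = mask.resize(image.size)
    return Image.composite(image, overlay, mask.convert("L"))
```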

AssertionError when using "Use another image as ControlNet input"

When using multi-frame rendering and attempting to read in a depth map using another image as ControlNet input, the following error occurs
Is this an error on the controlnet side?

Error running process: D:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\scripts.py", line 417, in process
script.process(p, *script_args)
File "D:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 687, in process
input_image = HWC3(np.asarray(p_input_image).astype(np.uint8))
File "D:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\util.py", line 6, in HWC3
assert x.dtype == np.uint8
AssertionError
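ControlNet's `HWC3` helper asserts `x.dtype == np.uint8`, so this error suggests the depth map was decoded to a float or 16-bit array. A sketch of normalizing the array before handing it over (assumes float images are in the 0–1 range; this is illustrative, not ControlNet's own code):

```python
import numpy as np

def to_uint8_rgb(arr):
    """Coerce an image array to uint8 so ControlNet's HWC3 helper
    (which asserts dtype == np.uint8) accepts it."""
    arr = np.asarray(arr)
    if arr.dtype == np.uint8:
        return arr
    if np.issubdtype(arr.dtype, np.floating):
        arr = np.clip(arr, 0.0, 1.0) * 255.0  # assumes 0-1 floats
    return arr.astype(np.uint8)
```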

If with command --listen, plug-in will show error

I'm running WebUI from my server, so I turned the local IP access link on. In webui-user.bat I set
set COMMANDLINE_ARGS= --listen

After I open the local IP on another computer, set everything up, and hit Generate, it shows this error:
FileNotFoundError: [Errno 2] No such file or directory: 'E:\\stable-difusion\\extensions/enhanced-img2img/scripts/util.py'

On the computer I use locally, Stable Diffusion is not installed on the E:\ drive, but on my server it is. So the folder path it reads comes from the server's E:\ drive, yet for some reason it tries to read that path from the E:\ drive of my local computer.

RuntimeError regarding the number of images when Loopback is enabled

Summary

When trying to process images with Enhanced img2img and the Loopback setting enabled, the following error message is displayed:

RuntimeError: bad number of images passed: 2; expecting 1 or less

Expected behavior:

I expect the extension to be able to process multiple images even when the Loopback setting is enabled.

Environment

extension:
current main (f64a9a9)

stable diffusion web ui:

version: v1.3.0-RC-6-g20ae71fa
python: 3.10.6
torch: 2.0.0+cu118
xformers: N/A
gradio: 3.31.0
checkpoint: cc3a313202

output log

Traceback (most recent call last):
  File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\img2img.py", line 176, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\scripts.py", line 441, in run
    processed = script.run(p, *script_args)
  File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\extensions\sd-webui-image-sequence-toolkit\scripts\enhanced_img2img.py", line 587, in run
    proc = process_images_with_size(
  File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\extensions\sd-webui-image-sequence-toolkit\scripts\enhanced_img2img.py", line 580, in process_images_with_size
    return process_images(p)
  File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\processing.py", line 611, in process_images
    res = process_images_inner(p)
  File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\processing.py", line 671, in process_images_inner
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
  File "C:\Users\<username>\workspace\ai\stable-diffusion-webui\modules\processing.py", line 1225, in init
    raise RuntimeError(f"bad number of images passed: {len(imgs)}; expecting {self.batch_size} or less")
RuntimeError: bad number of images passed: 2; expecting 1 or less

Reprocess the image after interruption

When processing is interrupted partway, allow continuing the task from the interrupted position, or selecting the position of the third-column frame (i.e. the first frame) so the remaining images can be processed separately.
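As a workaround until resuming is supported, frames whose outputs already exist could be skipped; a minimal sketch (hypothetical helper, matching files by name only):

```python
import os

def pending_frames(input_dir: str, output_dir: str) -> list:
    """List input images that have no same-named counterpart in
    output_dir, so an interrupted batch can resume where it stopped."""
    done = set(os.listdir(output_dir)) if os.path.isdir(output_dir) else set()
    return sorted(
        name for name in os.listdir(input_dir)
        if name.lower().endswith((".png", ".jpg", ".jpeg", ".webp"))
        and name not in done
    )
```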

Instructions unclear for the `Read tabular command` option

First of all, the readme is clearly outdated since the option doesn't even have the same name anymore and Excel files are supported.

Secondly, the explanations aren't sufficient. Users aren't warned that the first line is treated as the header. Are prompts matched to images by the alphabetical rank of the file names?

Also, I have a suggestion: there could be an option to take tags from the filenames.

AttributeError: 'NoneType' object has no attribute 'group'

Was trying to test the multi-frame rendering script, but I get this error. Does anyone know why this happens?

My settings

image

image

The whole error code:

Traceback (most recent call last):
  File "/content/drive/MyDrive/sd22/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/content/drive/MyDrive/sd22/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/content/drive/MyDrive/sd22/stable-diffusion-webui/modules/img2img.py", line 170, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "/content/drive/MyDrive/sd22/stable-diffusion-webui/modules/scripts.py", line 407, in run
    processed = script.run(p, *script_args)
  File "/content/drive/MyDrive/sd22/stable-diffusion-webui/extensions/sd-webui-image-sequence-toolkit/scripts/multi_frame_rendering.py", line 254, in run
    reference_imgs = sort_images(reference_imgs)
  File "/content/drive/MyDrive/sd22/stable-diffusion-webui/extensions/sd-webui-image-sequence-toolkit/scripts/ei_utils.py", line 30, in sort_images
    return sorted(lst, key=lambda x: int(re.search(pattern, x).group()))
  File "/content/drive/MyDrive/sd22/stable-diffusion-webui/extensions/sd-webui-image-sequence-toolkit/scripts/ei_utils.py", line 30, in <lambda>
    return sorted(lst, key=lambda x: int(re.search(pattern, x).group()))
AttributeError: 'NoneType' object has no attribute 'group'
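`sort_images` builds its sort key from `re.search(pattern, x).group()`, which returns `None` for filenames without digits, hence the AttributeError. A tolerant sort key would fall back instead of crashing; a sketch (the `\d+` pattern and fallback ordering are assumptions, not the extension's code):

```python
import re

def numeric_sort_key(name: str):
    """Sort by the first number in the name when present; fall back
    to the plain name so digit-less files don't crash the sort."""
    m = re.search(r"\d+", name)
    return (0, int(m.group()), name) if m else (1, name)

def sort_images(lst):
    return sorted(lst, key=numeric_sort_key)
```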
