huchenlei / stable-diffusion-ps-pea

Use Stable Diffusion in Photopea!

License: GNU General Public License v3.0

JavaScript 5.92% HTML 0.15% TypeScript 37.18% Vue 55.64% CSS 1.11%
stable-diffusion stable-diffusion-webui typescript photopea-plugin generative-ai photopea stable-diffusion-api vue3

stable-diffusion-ps-pea's Introduction

Stable Diffusion Photopea

Stable Diffusion plugin for Photopea based on A1111 API.

Changelog · Report Bug · Request Feature

Discord

Installation

Step 1: Set up the backend service. Set the following command line arguments in webui-user.bat:

set COMMANDLINE_ARGS=--api --cors-allow-origins https://huchenlei.github.io [Rest of ARGS...]

For SDNext (V1111) users, set the following arguments:

set COMMANDLINE_ARGS=--cors-origins https://huchenlei.github.io [Rest of ARGS...]

Step 2: Click Window > Plugin. Step2
Step 3: Search for stable-diffusion-ps-pea and click Install.

Features

🔥 [New Feature][2023-11-26] Realtime rendering powered by LCM

Recent advances in LCM (Latent Consistency Model) have significantly increased the inference speed of Stable Diffusion. Inference is now fast enough that we can render the canvas in real time.

Some preparations before you start exploring the real-time rendering tab:

  • Make sure to download the latest version of config_sharing/huchenlei_configs.json5 and upload it in the config tab. The new config file provides the required lcm_base, lcm_lora_sd15, and lcm_sd15_scribble configs.
  • Make sure you have an LCM LoRA named lcm_lora_sd15.safetensor in A1111, or change the LoRA name in the lcm_lora_sd15 config. You can download LCM LoRAs here.

After these preparations, you can now navigate to the real-time render tab (📹).

  • Select lcm_base and lcm_lora_sd15 in RealtimeConfig.
  • Start drawing on canvas and enjoy!

Other features:

  • If you have any selections on canvas, LCM will only render the selected area.
  • You can add lcm_sd15_scribble to RealtimeConfig, which will invoke the ControlNet scribble model on the canvas content. Make sure a solid black brush color is active when scribbling.
  • You can click Send to canvas to send the rendered view to canvas.

Screen capture: scribbling apples on the Photopea canvas

...More documentation work in progress...

Reference range selection

In A1111 img2img inpaint, one pain point is that the inpaint area selection is either WholeImage or OnlyMasked. This might not be an issue while the image is a reasonable size (512x512), but once the image becomes big (1024x1024+), the time and resources required for the WholeImage option grow rapidly, making it impractical; yet sometimes we do want to reference a limited range of the surroundings. In that situation, one needs to crop the image in an image editor, ask A1111 to process only the cropped image, and then put the cropped image back into the original big image.

This is a tedious process, but this behaviour is now the default in stable-diffusion-ps-pea. Every time you do an img2img, you can optionally apply a reference range (%/px), or manually specify the range by creating another selection on the canvas.

ref_area
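
To illustrate the idea, here is a rough sketch of the reference-range expansion (purely illustrative; the names and clamping details are assumptions, not the plugin's actual code):

// Illustrative sketch: expand an inpaint selection by a reference range
// (given in percent of the selection size) and clamp it to the canvas,
// so only that crop is sent to A1111 instead of the whole image.
interface Rect { left: number; top: number; width: number; height: number; }

function expandByReferenceRange(selection: Rect, canvas: Rect, rangePercent: number): Rect {
  const padX = Math.round(selection.width * rangePercent / 100);
  const padY = Math.round(selection.height * rangePercent / 100);
  const left = Math.max(canvas.left, selection.left - padX);
  const top = Math.max(canvas.top, selection.top - padY);
  const right = Math.min(canvas.left + canvas.width, selection.left + selection.width + padX);
  const bottom = Math.min(canvas.top + canvas.height, selection.top + selection.height + padY);
  return { left, top, width: right - left, height: bottom - top };
}

// Example: a 200x200 selection on a 2048x2048 canvas with a 50% reference range
// produces a 400x400 crop around the selection (clamped at the canvas edges).
const crop = expandByReferenceRange(
  { left: 900, top: 900, width: 200, height: 200 },
  { left: 0, top: 0, width: 2048, height: 2048 },
  50,
);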

Scale ratio

In whole-body generation, some body parts (hands/face) often come out low quality, because there are simply not enough pixels for the diffusion model to add detail to. The diffusion model also performs worse at resolutions other than the ones it was trained on (512x512 for SD1.5, 1024x1024 for SDXL), so inpainting a small area only helps a little. The solution here is simple: when inpainting a small area, we let A1111 target a larger area closer to the diffusion model's trained resolution, then resize the output to put the result back into the original inpaint spot. The very popular extension ADetailer does this exact process, but uses image detection models to automatically find the face/hand/body to fix.

scale_ratio
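
As a rough illustration (hypothetical names; the plugin's actual resizing logic may differ), a scale ratio of 2 means a 256x256 inpaint region is generated at 512x512 and the result is resized back into place:

// Illustrative sketch: scale the inpaint region up for generation so the
// diffusion model works closer to its trained resolution; the output is then
// resized back down to the original spot on the canvas.
interface Size { width: number; height: number; }

function generationSize(region: Size, scaleRatio: number): Size {
  // Round to a multiple of 8, as Stable Diffusion generation sizes typically are.
  const snap = (v: number) => Math.round((v * scaleRatio) / 8) * 8;
  return { width: snap(region.width), height: snap(region.height) };
}

// A 256x256 face region with scaleRatio 2 is generated at 512x512;
// the output is then resized back to 256x256 and pasted over the original area.
const target = generationSize({ width: 256, height: 256 }, 2);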

ControlNet

The majority of ControlNet models can be applied to a specific part of the image (canny, depth, openpose, etc.). However, in the normal A1111 ControlNet UI, you cannot easily visualize the spatial relationship between the ControlNet units.

One example is shown in the following video: the author uses openpose to control the body pose and softedge to control hand detail. Note that he uses an image editor to edit the softedge map so that only the hand part is kept. Basic Workflow

This type of operation is now very easy in stable-diffusion-ps-pea: the ControlNet maps can easily be overlaid on top of each other. Here I am using an openpose unit and a lineart unit.

elon cnet 5b57323c40b09034008b45e7

Initial proposal to implement layer control in ControlNet's repo: Issue #1736.

Config System

One pain point with A1111 is that it is hard to define workflows. While using A1111 there were many configurations I wished I could restore later, so I designed a configuration system that lets users easily define workflows.

There are 3 types of configs in stable-diffusion-ps-pea:

  • Base: The config representing the hardcoded default values for each generation parameter.
  • Default: The config applied each time you enter the UI or click the refresh button at the bottom right corner. Clicking the checkmark activates the currently selected config as the default. default_config
  • Toolbox: Add-on configs that are applied only temporarily to the generation triggered by clicking the corresponding toolbox button. This is where you can define your customized workflows. toolbox

Configs are defined as deltas applied on top of the current UI state. Here are some examples I wrote; download config_sharing/huchenlei_configs.json5 and upload it in the config panel to get access to them.
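
Each delta entry records a kind ("E" edits an existing value, "N" introduces a new one, "A" modifies an array), a path into the UI state, and the new value. A minimal sketch of how such a delta might be applied (illustrative only; field names follow the examples below, and the plugin's real apply logic may differ):

// Illustrative sketch of applying a config delta on top of the UI state.
type Delta = {
  kind: 'E' | 'N' | 'A';
  path: (string | number)[]; // location inside the UI state
  rhs?: unknown;             // value to set (for E/N)
  lhs?: unknown;             // previous value, informational only
  index?: number;            // array index (for A)
  item?: Delta;              // nested change applied at that index (for A)
};

function applyDelta(state: any, delta: Delta): void {
  // Walk down to the object that holds the last path segment.
  let container = state;
  for (const key of delta.path.slice(0, -1)) container = container[key];
  const leaf = delta.path[delta.path.length - 1];

  if (delta.kind === 'E' || delta.kind === 'N') {
    container[leaf] = delta.rhs;                      // overwrite or create the value
  } else if (delta.kind === 'A' && delta.item) {
    const arr = container[leaf] as any[];             // e.g. state.controlnetUnits
    arr[delta.index ?? arr.length] = delta.item.rhs;  // place the new item at the index
  }
}

// Example mirroring the LamaGenFill entry below: switch inpainting_fill to
// "latent nothing" (3) on the img2img payload.
const uiState: any = { img2imgPayload: { inpainting_fill: 1 }, controlnetUnits: [] };
applyDelta(uiState, { kind: 'E', path: ['img2imgPayload', 'inpainting_fill'], lhs: 1, rhs: 3 });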

LamaGenFill: Uses ControlNet's inpaint_only+lama to achieve an effect similar to Adobe's generative fill and magic eraser. Configs are accepted in JSON5 format, so you can add comments to the config file.

"LamaGenFill": [
        {
            "kind": "E",
            "path": [
                "img2imgPayload",
                "denoising_strength"
            ],
            "lhs": 0.75,
            "rhs": 1
        },
        {
            "kind": "E",
            "path": [
                "img2imgPayload",
                "inpainting_fill"
            ],
            "lhs": 1,
            "rhs": 3, // Inpaint fill is latent nothing.
        },
        {
            "kind": "E",
            "path": [
                "img2imgPayload",
                "inpaint_full_res"
            ],
            "lhs": 0,
            "rhs": 0, // Make sure inpaint reference range is whole image.
        },
        {
            "kind": "A",
            "path": [
                "controlnetUnits"
            ],
            "index": 0,
            "item": {
                "kind": "N",
                "rhs": {
                    "batch_images": "",
                    "control_mode": 2,
                    "enabled": true,
                    "guidance_end": 1,
                    "guidance_start": 0,
                    "input_mode": 0,
                    "low_vram": false,
                    "model": "control_v11p_sd15_inpaint [ebff9138]",
                    "module": "inpaint_only+lama",
                    "output_dir": "",
                    "pixel_perfect": false,
                    "processor_res": 512,
                    "resize_mode": 1,
                    "threshold_a": 64,
                    "threshold_b": 64,
                    "weight": 1,
                    "linkedLayerName": ""
                }
            }
        }
    ],

Generative Fill using the LamaGenFill workflow: GenFill1 GenFill2
Magic Eraser using the LamaGenFill workflow: Eraser1 Eraser2

TileUpscale2x: As previously demonstrated for scale ratio, this workflow is used to fix hands/faces and add detail to the selected region.

"TileUpscale2x": [
        {
            "kind": "E",
            "path": ["imageScale"],
            "lhs": 1,
            "rhs": 2,
        },
        {
            "kind": "A",
            "path": [
                "controlnetUnits"
            ],
            "index": 0,
            "item": {
                "kind": "N",
                "rhs": {
                    "batch_images": "",
                    "control_mode": 0,
                    "enabled": true,
                    "guidance_end": 1,
                    "guidance_start": 0,
                    "input_mode": 0,
                    "low_vram": false,
                    "model": "control_v11f1e_sd15_tile [a371b31b]",
                    "module": "tile_resample",
                    "output_dir": "",
                    "pixel_perfect": false,
                    "processor_res": 512,
                    "resize_mode": 1,
                    "threshold_a": 1,
                    "threshold_b": 64,
                    "weight": 1,
                    "linkedLayerName": ""
                }
            }
        }
    ],

Here is a video demo using it: https://www.loom.com/share/fb11c0206d7045469b82fe9d6342bd15

Overall, the config system gives users the full capability of the A1111 API. Even if the plugin has no UI support for a given extension, users can still invoke it by setting entries under alwayson_scripts.
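
For instance, the A1111 payload accepts an alwayson_scripts map; a hypothetical entry looks like the following (the script name and argument list are placeholders, so check the target extension's API documentation for the real values):

// Hypothetical example of driving an extension without dedicated UI support.
// The A1111 API accepts an `alwayson_scripts` map on txt2img/img2img payloads.
const payloadExtras = {
  alwayson_scripts: {
    'Some Extension Script': {
      // Arguments are positional and extension-specific.
      args: [true, 0.5, 'option-a'],
    },
  },
};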

Interfacing with A1111: Optionally, you can use https://github.com/yankooliveira/sd-webui-photopea-embed to send images between Photopea and A1111.

Development

Setup HTTPS

The dev server needs to run under HTTPS because the plugin runs in an iframe embedded in an HTTPS page. Using HTTP would make the browser complain about mixed HTTP/HTTPS content on the page.

Linux/Mac:

openssl req -x509 -nodes -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -subj "/CN=localhost"

Windows:

openssl req -x509 -nodes -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -subj "//CN=localhost"
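
Once key.pem and cert.pem exist, the Vite dev server can load them. A minimal vite.config.ts sketch using standard Vite server options (the repository's actual config may already handle this differently):

// Hypothetical vite.config.ts excerpt pointing the dev server at the
// certificate generated above.
import { readFileSync } from 'node:fs';
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    https: {
      key: readFileSync('key.pem'),
      cert: readFileSync('cert.pem'),
    },
    // 5173 is Vite's default port and matches the CORS origin used below.
    port: 5173,
  },
});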

Setup A1111

Add --cors-allow-origins https://localhost:5173 to WebUI command line args for local development.

Add dev plugin to photopea plugin store

I do not make my dev plugin setup public, as it might confuse users about which plugin to install. So if you are planning to develop this plugin, I kindly ask every developer to add their own dev plugin to the Photopea plugin store following these steps:

Step 1: Click Window > Plugin. Step1
Step 2: Click Add Plugin. Step2
Step 3: Click New. Step3
Step 4: Fill in the form. Step4

  • File: upload photopea_dev.json in project root directory
  • Thumbnail: Use any image link with proper size. I use https://huchenlei.github.io/stable-diffusion-ps-pea/sd.png
  • Make sure to check Make Public.

Step 5: Install the plugin
You should be able to find the newly added plugin in the plugin store.

Step 6: Make the plugin private
Go back to the Step 3 panel and click Edit on the plugin you just added. Uncheck Make Public.

Recommended IDE Setup

VSCode + Volar (and disable Vetur) + TypeScript Vue Plugin (Volar).

Type Support for .vue Imports in TS

TypeScript cannot handle type information for .vue imports by default, so we replace the tsc CLI with vue-tsc for type checking. In editors, we need TypeScript Vue Plugin (Volar) to make the TypeScript language service aware of .vue types.

If the standalone TypeScript plugin doesn't feel fast enough to you, Volar has also implemented a Take Over Mode that is more performant. You can enable it by the following steps:

  1. Disable the built-in TypeScript Extension
    1. Run Extensions: Show Built-in Extensions from VSCode's command palette
    2. Find TypeScript and JavaScript Language Features, right click and select Disable (Workspace)
  2. Reload the VSCode window by running Developer: Reload Window from the command palette.

Customize configuration

See Vite Configuration Reference.

Project Setup

npm install

Compile and Hot-Reload for Development

npm run dev

Type-Check, Compile and Minify for Production

npm run build

Run Unit Tests with Vitest

npm run test:unit

Run End-to-End Tests with Nightwatch

# When using CI, the project must be built first.
npm run build

# Runs the end-to-end tests
npm run test:e2e
# Runs the tests only on Chrome
npm run test:e2e -- --env chrome
# Runs the tests of a specific file
npm run test:e2e -- tests/e2e/example.ts
# Runs the tests in debug mode
npm run test:e2e -- --debug

Run Headed Component Tests with Nightwatch Component Testing

npm run test:unit
npm run test:unit -- --headless # for headless testing

Lint with ESLint

npm run lint


stable-diffusion-ps-pea's Issues

[Bug] When pasting the result image back, there is a visible border.


There is a very visible misalignment along the result image boundary against the original image. I have tried to crop the result image down to the inpaint bounding box (currently the bounding box is the reference range box), but this seems to be blocked on photopea/photopea#5902.

Further digging is needed to verify the root cause. For now, the best solution seems to be: after all editing is done, run a low denoising strength tile img2img generation to smooth out all boundaries.

[DevTask] Style management

Using the current config system for style management is somewhat cumbersome:

  • All negative text is crowded into the textbox, which is distracting
  • Hard to compose multiple styles together

We should have something resembling the A1111 style management system.

Error Message in Photopea When Attempting to Connect

"Connection Failed: TypeError: Failed to fetch"

I have Automatic1111 open with a live connection. I am able to create images in that application. However, when I try to connect in Photopea, I get that error message. Any suggestions to help resolve this?

I'm running the new Automatic1111 RC 1.6.0 if that makes a difference.

[DevTask] Implement segment anything support

Due to constraints in the Adobe JS API, it is not possible to get canvas click events. It is also not possible to get details from the history queue.

So the SAM support won't be live point-picking on the canvas followed by segmentation. The only feature to port is segmenting the whole picture / the selected part of the picture and outputting one or multiple segmentation maps.

[DevTask] Implement seed/subseed control

In my txt2img generation workflow, after the initial prototyping, a candidate image is selected for further tuning. The first step of that tuning is fixing the seed and using the subseed to explore images with the same overall structure but varying detail.

It would be nice if this process could be made seamless in the Photopea plugin.

[Bug] Auto generation type selection broken

The auto generation type selection has been broken since the commit where reference range was added.

The old condition checked whether the captured image and the mask were both a solid color (i.e. all pixels are the same color), but obviously the mask of the expanded area will be black while the selected area will be white, so the condition for txt2img is only satisfied when the whole canvas is selected and the whole canvas is a solid color.

After using the feature for several weeks, I found it pretty annoying and unpredictable, as a single pixel in the selection can break the logic and lead to the wrong generation type, so I propose removing the auto generation type feature.
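
For context, the old check was roughly equivalent to something like this (illustrative sketch, not the actual plugin code):

// Rough sketch of the old heuristic: txt2img was only chosen when both the
// captured image and the mask were a single solid color, which one stray
// pixel inside the selection is enough to break.
function isSolidColor(pixels: Uint8ClampedArray): boolean {
  for (let i = 4; i < pixels.length; i += 4) {
    if (pixels[i] !== pixels[0] || pixels[i + 1] !== pixels[1] ||
        pixels[i + 2] !== pixels[2] || pixels[i + 3] !== pixels[3]) {
      return false;
    }
  }
  return true;
}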

[DevTask] Link controlnet unit to current layer

It can be annoying that ControlNet unit information is lost when the plugin is reopened. It would be nice to provide an option to link an existing layer as a ControlNet unit's input instead of generating a new map.

[DevTask] ControlNet seg color picker

It is very annoying when you want to edit the seg map used by ControlNet and need to look up what each color corresponds to. Implementing a seg color picker would solve this issue.

The seg color picker should have a search filter to filter the result by meaning.

[ConfigChange] Fetch shared config if no existing config

To spare first-time users from going to GitHub, downloading huchenlei_configs.json5, and manually uploading it in ConfigView, we should automatically fetch the config when no existing configs are present.
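
A sketch of the proposed first-run behaviour (the raw URL and the JSON5 parser usage are assumptions for illustration):

// Hypothetical sketch of the proposed first-run behaviour.
// The raw GitHub URL is an assumption; adjust to wherever the shared file lives.
import JSON5 from 'json5';

const SHARED_CONFIG_URL =
  'https://raw.githubusercontent.com/huchenlei/stable-diffusion-ps-pea/main/config_sharing/huchenlei_configs.json5';

async function ensureConfigs(existing: Record<string, unknown>): Promise<Record<string, unknown>> {
  if (Object.keys(existing).length > 0) return existing; // user already has configs
  const resp = await fetch(SHARED_CONFIG_URL);
  // JSON5 parsing is required because the shared file contains comments.
  return JSON5.parse(await resp.text());
}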

[DevTask] Support SD ultimate upscale

SD ultimate upscale is a crucial part of my current workflow, bridging the initial prototyping and the editing for final details. Running the script should ideally expand the canvas to fit the upscaled image or open a new document for it.

Should making a selection on the canvas that is too big for img2img also trigger SD ultimate upscale? (Ideally?)

[Bug]: Error received in CMD window upon completion of image in Photopea

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

What happened?

Starting with a blank 512x512 canvas in Photopea, I created a new image using your plugin and the RealisticVision v5 model. The image completed successfully in Photopea, but I received a very long error in the Stable Diffusion cmd window.

Steps to reproduce the problem

Read Above

What should have happened?

I would assume I should NOT receive any error messages in the cmd window.

Commit where the problem happens

webui: v1.5.1
stable-diffusion-ps-pea: [Previous good version]

What browsers do you use to access the UI ?

Microsoft Edge

Command Line Arguments

set COMMANDLINE_ARGS= --opt-sdp-attention --xformers --autolaunch --no-half-vae --api --cors-allow-origins https://huchenlei.github.io

Console logs

100%|████████████████████████████████████████| 40/40 [00:08<00:00,  4.69it/s]
100%|████████████████████████████████████████| 40/40 [00:43<00:00,  1.10s/it]
ERROR:asyncio:Exception in callback H11Protocol.timeout_keep_alive_handler()██████████| 80/80 [00:56<00:00,  1.22it/s]
handle: <TimerHandle when=261762.453 H11Protocol.timeout_keep_alive_handler()>
Traceback (most recent call last):
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\h11\_state.py", line 249, in _fire_event_triggered_transitions
    new_state = EVENT_TRIGGERED_TRANSITIONS[role][state][event_type]
KeyError: <class 'h11._events.ConnectionClosed'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "asyncio\events.py", line 80, in _run
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 363, in timeout_keep_alive_handler
    self.conn.send(event)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\h11\_connection.py", line 468, in send
    data_list = self.send_with_data_passthrough(event)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\h11\_connection.py", line 493, in send_with_data_passthrough
    self._process_event(self.our_role, event)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\h11\_connection.py", line 242, in _process_event
    self._cstate.process_event(role, type(event), server_switch_event)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\h11\_state.py", line 238, in process_event
    self._fire_event_triggered_transitions(role, event_type)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\h11\_state.py", line 251, in _fire_event_triggered_transitions
    raise LocalProtocolError(
h11._util.LocalProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_RESPONSE
*** API error: POST: http://127.0.0.1:7860/api/predict {'error': 'LocalProtocolError', 'detail': '', 'body': '', 'errors': "Can't send data when our state is ERROR"}
    Traceback (most recent call last):
      File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
        await self.app(scope, receive, _send)
      File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\middleware\base.py", line 109, in __call__
        await response(scope, receive, send)
      File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\responses.py", line 270, in __call__
        async with anyio.create_task_group() as task_group:
      File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 597, in __aexit__
        raise exceptions[0]
      File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\responses.py", line 273, in wrap
        await func()
      File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\middleware\base.py", line 134, in stream_response
        return await super().stream_response(send)
      File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\responses.py", line 255, in stream_response
        await send(
      File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\middleware\errors.py", line 159, in _send
        await send(message)
      File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 490, in send
        output = self.conn.send(event=response)
      File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\h11\_connection.py", line 468, in send
        data_list = self.send_with_data_passthrough(event)
      File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\h11\_connection.py", line 483, in send_with_data_passthrough
        raise LocalProtocolError("Can't send data when our state is ERROR")
    h11._util.LocalProtocolError: Can't send data when our state is ERROR

---
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\fastapi\applications.py", line 273, in __call__
    await super().__call__(scope, receive, send)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\middleware\base.py", line 109, in __call__
    await response(scope, receive, send)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\responses.py", line 270, in __call__
    async with anyio.create_task_group() as task_group:
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 597, in __aexit__
    raise exceptions[0]
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\responses.py", line 273, in wrap
    await func()
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\middleware\base.py", line 134, in stream_response
    return await super().stream_response(send)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\responses.py", line 255, in stream_response
    await send(
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\starlette\middleware\errors.py", line 159, in _send
    await send(message)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 490, in send
    output = self.conn.send(event=response)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\h11\_connection.py", line 468, in send
    data_list = self.send_with_data_passthrough(event)
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\h11\_connection.py", line 483, in send_with_data_passthrough
    raise LocalProtocolError("Can't send data when our state is ERROR")
h11._util.LocalProtocolError: Can't send data when our state is ERROR
ERROR:asyncio:Task exception was never retrieved
future: <Task finished name='18gfcfeobsu_744' coro=<Queue.process_events() done, defined at H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\gradio\queueing.py:343> exception=ValueError('[<gradio.queueing.Event object at 0x000001D69DEE58D0>] is not in list')>
Traceback (most recent call last):
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\gradio\queueing.py", line 370, in process_events
    while response.json.get("is_generating", False):
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\gradio\utils.py", line 538, in json
    return self._json_response_data
AttributeError: 'AsyncRequest' object has no attribute '_json_response_data'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "H:\Stable Diffusion - Automatic1111\sd.webui\system\python\lib\site-packages\gradio\queueing.py", line 432, in process_events
    self.active_jobs[self.active_jobs.index(events)] = None
ValueError: [<gradio.queueing.Event object at 0x000001D69DEE58D0>] is not in list

Additional information

No response

[Minor Improvement] Make sure `control_net_no_detectmap` is enabled

Currently we are dropping the detected maps from the A1111 response based on how many ControlNet units are active.

control_net_no_detectmap makes A1111 not send the ControlNet detected maps at all. We should use that option instead to save some network traffic.
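
One possible way is to toggle it through the A1111 options endpoint before generation (sketch; error handling and the plugin's actual HTTP client are omitted):

// Sketch: enable control_net_no_detectmap through the A1111 options API so the
// backend stops returning detected maps entirely.
async function ensureNoDetectMap(baseUrl: string): Promise<void> {
  const options = await (await fetch(`${baseUrl}/sdapi/v1/options`)).json();
  if (!options['control_net_no_detectmap']) {
    await fetch(`${baseUrl}/sdapi/v1/options`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ control_net_no_detectmap: true }),
    });
  }
}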

Adding this repo to the SD Auto1111

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

What happened?

Hi, I want to do exactly the same process as in this image
https://user-images.githubusercontent.com/20929282/256951742-5dcb6d6f-5c3e-4cf8-abf6-c5223059a8af.png
in Auto1111, but without using external applications like Photoshop/Photopea.

Actually, I want to concatenate ControlNet pose (openpose as unit 0 of ControlNet) and CN canny (canny as unit 1 of ControlNet),
but not the whole canny image, just the hands part!

Is it possible to use this repo for this?
Any guide or insight in that regard would be much appreciated.

Thanks
Best regards

Steps to reproduce the problem

  1. Go to Auto1111's ControlNet tabs and add the openpose and canny models
  2. Edit the canny map to include just the hand parts!
  3. Auto-detect hands to be used in the canny CN unit1 for video generation

What should have happened?

autodetect hands and just use those parts for canny!

Commit where the problem happens

webui:
stable-diffusion-ps-pea: [Previous good version]

What browsers do you use to access the UI ?

No response

Command Line Arguments

no

Console logs

no

Additional information

No response

[DevTask] Move object to its own layer and fill the background

Stable Diffusion can often generate items in undesired locations, so there is often a need to move an object just slightly on the canvas.

Currently the solution would be

  • duplicate the layer
  • select the object
  • invert the selection
  • delete everything outside the selection
  • hide the object layer
  • generative fill / magic eraser the place where the object used to be

After all these operations, you end up with the object in its own layer, where it can be freely adjusted. This issue proposes adding a button to streamline those operations into a single click. The proposed workflow would be:

  • select the object
  • click the button
  • pick the desired magic eraser result

[DevTask] Support openpose editor

It would be really nice to directly edit the detected openpose map with an openpose editor, as Photopea is not well suited to editing openpose skeletons.
