sygil-dev / sygil-webui
Stable Diffusion web UI
License: GNU Affero General Public License v3.0
With the queue enabled, sharing the public URL with "share=True" makes the submit button do nothing.
When running webui.cmd, the dependencies are installed, but then it gets stuck in this loop:
Relauncher: Launching...
Traceback (most recent call last):
File "C:\Users\abc\Desktop\sd-w\scripts\webui.py", line 2, in <module>
import gradio as gr
ModuleNotFoundError: No module named 'gradio'
Relauncher: Process is ending. Relaunching in 0.5s...
(the same traceback repeats, with "Relaunch count" incrementing through 5)
Traceback (most recent call last):
File "C:\Users\abc\Desktop\sd-w\scripts\relauncher.py", line 11, in <module>
time.sleep(0.5)
This is printed every two seconds. Is this intentional or a bug?
The memory monitor calls pynvml.nvmlDeviceGetHandleByIndex(0) and so always reads the first GPU, which in a multi-GPU scenario may not be the one actually in use.
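One possible fix, sketched below under the assumption that CUDA_VISIBLE_DEVICES is the only remapping in play: translate the CUDA device ordinal the process is actually using into the physical index pynvml expects. The helper name is made up for illustration.

```python
import os

def nvml_index_for_current_device(cuda_device_ordinal=0, env=os.environ):
    """Map a CUDA device ordinal to the physical NVML index.

    CUDA renumbers GPUs according to CUDA_VISIBLE_DEVICES, while pynvml
    always uses physical indices, so the two must be reconciled before
    calling pynvml.nvmlDeviceGetHandleByIndex().
    """
    visible = env.get("CUDA_VISIBLE_DEVICES")
    if not visible:
        return cuda_device_ordinal  # no remapping in effect
    physical = [int(tok) for tok in visible.split(",") if tok.strip().isdigit()]
    return physical[cuda_device_ordinal]
```

The monitor could then call pynvml.nvmlDeviceGetHandleByIndex(nvml_index_for_current_device(torch.cuda.current_device())) instead of hardcoding 0.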
RealESRGAN causes GPU out of memory crash at the end of each batch. CPU can run RealESRGAN pretty quickly and without reloading models each batch.
I have a GTX 1060 with 6 GB VRAM. The webui claims almost the whole VRAM and keeps crashing when doing either txt2img or img2img:
!!Runtime error (img2img)!!
CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
exiting...calling os._exit(0)
Please optimize the memory usage so the webui is more stable and doesn't crash during generation.
(An example of a webui with more stable generation is basujindal/stable-diffusion; the drawback, however, is the lack of advanced features.)
Loopback would be far more useful if it also randomized the seed on each loop (or incremented it). When it is stuck on the same seed, the image rapidly degrades. Adding this option would be very useful.
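A minimal sketch of the requested behaviour; the function name and mode strings are hypothetical, not from the webui code:

```python
import random

def next_loop_seed(current_seed, mode="increment"):
    """Pick the seed for the next loopback pass.

    mode="fixed" reproduces the current behaviour (image degrades fast),
    mode="increment" gives a deterministic but varied sequence,
    mode="random" draws a fresh 32-bit seed each loop.
    """
    if mode == "fixed":
        return current_seed
    if mode == "increment":
        return current_seed + 1
    return random.randrange(2**32)
```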
When using a non-NVIDIA platform like AMD ROCm to run the webui script, the memory management code throws an exception when trying to initialize pynvml (pynvml.nvmlInit()). This would normally be harmless, but because the default value for the total memory is 0, it causes a divide-by-zero error when that value is used to build the final formatted string.
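A guard along these lines would avoid the crash; format_vram is a hypothetical helper, and the real formatting code differs:

```python
def format_vram(used_mb, total_mb):
    """Format VRAM usage, guarding against total == 0.

    On platforms where pynvml cannot be initialised (e.g. AMD ROCm),
    the total stays at its default of 0, so dividing by it must be avoided.
    """
    if total_mb <= 0:
        return "VRAM usage unavailable"
    return f"{used_mb}M/{total_mb}M ({used_mb / total_mb:.0%})"
```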
I am currently using Colab Pro (assigned a T4 while experiencing these issues) and am getting an error when attempting to generate an image while I have a textual inversion checkpoint uploaded. The error is:
Index put requires the source and destination dtypes match, got Half for the destination and Float for the source
All other functions are working perfectly as long as I haven't uploaded a checkpoint.
If 2 devices are connected to the web UI and they both submit a prompt, the first one gets interrupted and gives a memory allocation error.
Screenshots are shown above.
Masking: no matter which image I put in, it sometimes shows the wrong aspect ratio; it always resets to square instead of landscape or portrait.
Cropping: the image is always too small.
Camera aperture settings come with the "/" sign: f/22, f/2.8, and so on. For example, look here: https://promptomania.com/stable-diffusion-prompt-builder/
Just adding .replace('/', '_') in the webui.py file, at the file-saving function, can solve the problem.
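A slightly more general sketch than the single .replace, assuming we also want to cover the other characters Windows rejects in filenames; sanitize_filename is a made-up name, not the webui's:

```python
import re

def sanitize_filename(name, replacement="_"):
    """Replace characters that are invalid in Windows/Unix filenames.

    Covers '/' from prompts like "f/22" as well as the other
    characters Windows rejects (\\ : * ? " < > |).
    """
    return re.sub(r'[\\/:*?"<>|]', replacement, name)
```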
Embedding files from textual-inversion notebooks don't work when models are loaded with .half():
!!Runtime error (txt2img)!!
Index put requires the source and destination dtypes match, got Half for the destination and Float for the source.
Allowing an in-GUI toggle/slider for larger image generations at the cost of speed would be a good option to have. Alternatively, optimizing for faster generation at the normal size would be good as well.
Please check https://github.com/basujindal/stable-diffusion/tree/main/optimizedSD and combine it with the webui, unless there are better options available.
I implemented globbing to determine the indices of samples.
But I just realized that globbing is incompatible with prompts that contain square brackets because square brackets have a special meaning.
Solutions would be to either remove them from the filename or to just use os.listdir and a loop/list comprehension.
At the moment I can only figure out how to use it on localhost, or via the gradio link by adding share=True. Is there an option to just use my local IP?
Something has changed with how a public IP can work with Gradio.
With (share=True) turned on or if I turn it off and just allow anyone from outside to call upon my public IP:7860.
Sometimes you can press the submit button and SD will start sampling (you can see this in the console); other times, and most of the time, submit does absolutely nothing. I tried other browsers and devices too.
If you go locally via the LAN to your computer or just the local host, it works just fine.
This was something that just started happening with the newer version.
I've tried removing and cleaning out the environment and reinstalling, but the same issue persists.
Also, is there any way we can add a simple password challenge? I would like to give my close friends access via the public URL, but obviously keep out anyone who might try to put inappropriate images on my computer. I'm not talking about going all crazy with SSL, just something small that I could put in webui.py, to make it a little harder.
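Gradio's launch() accepts auth= as either a (username, password) tuple or a callable taking (username, password) and returning a bool, so something small is possible without SSL. A sketch, with made-up credentials; note that some gradio versions refuse to enable the queue together with auth:

```python
import hashlib
import hmac

# Hypothetical stored credentials: {username: sha256 hex digest of password}
USERS = {"friend": hashlib.sha256(b"correct horse").hexdigest()}

def check_login(username, password):
    """Callable suitable for gradio's launch(auth=...) parameter.

    Compares SHA-256 digests in constant time so the check does not
    leak timing information.
    """
    expected = USERS.get(username)
    if expected is None:
        return False
    digest = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(digest, expected)
```

It would then be wired up as demo.launch(auth=check_login, ...).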
Thanks,
In my experience CodeFormer can work a fair bit better than GFPGAN (aside from the lack of smooth blending with the background). May be worth implementing as an option.
Trying to install and getting stuck in a module not found loop.
System info: Windows 11 Pro 64 bit 21H2 22000.856
I have a fresh anaconda install from a few days ago.
I had the previous model installed. I already have the full version of Anaconda installed and made sure that the script locations match the paths in webui. I followed the instructions closely, but every time I try to run webui I get stuck in an error loop:
Traceback (most recent call last):
File "C:\Users\heart\Desktop\DEV\stable-diffusion-main\stable-diffusion-main\scripts\webui.py", line 2, in <module>
import gradio as gr
ModuleNotFoundError: No module named 'gradio'
Relauncher: Process is ending. Relaunching in 0.5s...
I have tried 'pip install gradio' before running it; this didn't help either.
Help!
Thank you so much for pointing me to the right place. I'm still very new to github.
self.demo.launch(share=True, auth=("username", "password"), show_error=True, server_name='0.0.0.0') results in an error
self.demo.launch(share=True, auth=("username", "password"), show_error=True, server_name='0.0.0.0')
File "C:\Users\Tomoose.conda\envs\ldo\lib\site-packages\gradio\blocks.py", line 921, in launch
raise ValueError(
ValueError: Cannot queue with encryption or authentication enabled.
Hi, I tried to run the webui on Google Colab. No problem with the installation, but it gives an error when trying to generate:
WebSocket connection to 'wss://27257.gradio.app/queue/join' failed:
close
Is there any way to fix this?
Google Colab.
https://colab.research.google.com/drive/1MfnpAl5BkIgegl2bImBzcASF71eN_ruS?usp=sharing
Hi
This happens when adding a picture to Textual Inversion
Traceback (most recent call last):
File "C:\Users\esfle\miniconda3\envs\ldc\lib\site-packages\gradio\routes.py", line 247, in run_predict
output = await app.blocks.process_api(
File "C:\Users\esfle\miniconda3\envs\ldc\lib\site-packages\gradio\blocks.py", line 641, in process_api
predictions, duration = await self.call_function(fn_index, processed_input)
File "C:\Users\esfle\miniconda3\envs\ldc\lib\site-packages\gradio\blocks.py", line 556, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\esfle\miniconda3\envs\ldc\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\esfle\miniconda3\envs\ldc\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\esfle\miniconda3\envs\ldc\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "webui.py", line 702, in txt2img
output_images, seed, info, stats = process_images(
File "webui.py", line 440, in process_images
load_embeddings(fp)
File "webui.py", line 261, in load_embeddings
model.embedding_manager.load(fp.name)
File "C:\Users\esfle\Downloads\stable-diffusion-main\stable3\ldm\modules\embedding_manager.py", line 136, in load
ckpt = torch.load(ckpt_path, map_location='cpu')
File "C:\Users\esfle\miniconda3\envs\ldc\lib\site-packages\torch\serialization.py", line 713, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "C:\Users\esfle\miniconda3\envs\ldc\lib\site-packages\torch\serialization.py", line 920, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
[MemMon] Recording max memory usage...
_pickle.UnpicklingError: A load persistent id instruction was encountered,
but no persistent_load function was specified.
Currently all individual samples are saved to the same directory.
I prefer to sort individual samples by prompt.
With previous scripts I simply hardcoded this in.
Is this functionality desirable to more people than just me?
If yes I will do a proper implementation.
To reproduce:
This results in the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/gradio/routes.py", line 247, in run_predict
output = await app.blocks.process_api(
File "/usr/lib/python3.10/site-packages/gradio/blocks.py", line 639, in process_api
processed_input = self.preprocess_data(fn_index, raw_input, state)
File "/usr/lib/python3.10/site-packages/gradio/blocks.py", line 543, in preprocess_data
processed_input.append(block.preprocess(raw_input[i]))
File "/usr/lib/python3.10/site-packages/gradio/components.py", line 1546, in preprocess
x, mask = x["image"], x["mask"]
TypeError: string indices must be integers
The manifestation is different when using PR #29, but the effect and steps to reproduce are the same.
Someone requested that tileable textures be implemented. This code is for the diffusers codebase and needs implementing for the stable-diffusion codebase.
operating system:
version (10, 11 etc), build (21h2 etc)
language
do you have git4windows installed?
conda
did you install as all users?
if you answered yes, are you sure?
stable-diffusion repo
directory location
directory listing (dir /b from cmd or cmd /c dir /b from powershell)
first line of environment.yaml
did the stable-diffusion repo exist previously? or was it a fresh clone (as in the stable-diffusion directory did not exist at all until you did git clone or downloaded the zip)
did you git clone or did you download the zip
launching
run webui.cmd from cmd/powershell
post the output from the line where you ran it to the first gradio error (do not post the repeated error messages... just once is fine)
The requested information could change; if you are asked to, please post it again with the extra information.
Stable Diffusion is now supported in Hugging Face diffusers, including img2img and inpainting.
You can use it easily like this:
# make sure you're logged in with `huggingface-cli login` and have run `pip install diffusers`
from torch import autocast
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
lms = LMSDiscreteScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear"
)
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-3",
    scheduler=lms,
    use_auth_token=True
).to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt)["sample"][0]
image.save("astronaut_rides_horse.png")
and here is a colab showing how to use it in a gradio demo: https://colab.research.google.com/drive/1NfgqublyT_MWtR5CsmrgmdnkWiijF3P3?usp=sharing
Once the specification of settings via YAML is on master I plan to write CLI scripts for automatically processing large amounts of prompts/settings.
If other people are also interested in these scripts I'll write Python scripts and document them.
If there is no interest I will simply hack something together for my personal use.
The readme seems to imply that I can generate and compare three different sampling methods with txt2img.py at once. Is that the case? If so, where is the option to enable that?
Referring to this: https://github.com/hlky/stable-diffusion-webui#sampling-method-selection
It loads the generator but does not send the command to the server. It used to work fine. It does work locally with localhost.
Hi,
Depending on the system language, the default CSV separator can be different from a comma (for example, in French, it is ";" by default). So when we open the log.csv file with Excel on a French system, everything is put into the first column. There is a way to fix this though, by specifying the separator on the first line of the file, like this:
sep=;
So the fix to make the log compatible with all languages is to modify the following code:
writer = csv.writer(file)
if at_start:
    writer.writerow(["sep=,"])
    writer.writerow(["prompt", "seed", "width", "height", "sampler", "toggles", "n_iter", "n_samples", "cfg_scale", "steps", "filename"])
Thank you for your awesome work!
Hi :)
Is there a button that cancels the current generation that I'm too blind to find? Sometimes I accidentally generate too many images and have to wait forever for it to finish or quit and restart the whole script. It would be really great if there were a button to just straight up cancel the current generation (and maybe - to be extra fancy - display the results that finished generating before cancelling).
Thanks!
Integration with RealESRGAN, for "simpler" upscaling jobs.
I've already got an implementation here.
It's based on henk's fork, so I need to move it over to this fork, along with some minor improvements.
There is an interesting feature in another fork; maybe get in contact with them.
Still looking for a fix for the GTX 16** series. The problem is that no matter what settings I use, I always get a green picture instead of the result.
It only happens when I use the web version; if I generate pictures via cmd with "--precision full" at the end of the command, everything works fine and I get the pictures.
If I try to run webui.py with --precision full, I get the error "expected scalar type Half but found Float".
Is it possible to fix this bug so the web version can be used with the GTX 16**?
It's not a big problem, because I can generate pictures with cmd, but the web version would be a little more convenient :)
I can't completely reproduce it, but when using img2img right now I sometimes/often get this error:
File "C:\tools\Miniconda\envs\ldo\lib\site-packages\gradio\routes.py", line 247, in run_predict
output = await app.blocks.process_api(
File "C:\tools\Miniconda\envs\ldo\lib\site-packages\gradio\blocks.py", line 639, in process_api
processed_input = self.preprocess_data(fn_index, raw_input, state)
File "C:\tools\Miniconda\envs\ldo\lib\site-packages\gradio\blocks.py", line 543, in preprocess_data
processed_input.append(block.preprocess(raw_input[i]))
File "C:\tools\Miniconda\envs\ldo\lib\site-packages\gradio\components.py", line 1546, in preprocess
x, mask = x["image"], x["mask"]
TypeError: string indices must be integers
urllib3.exceptions.SSLError: [SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC] decryption failed or bad record mac (_ssl.c:2622)
Can't fully load the Stable Diffusion model on first launch; everything was done correctly.
GoBIG is an upscaling technique, where a starting image is cut up into sections, and then each of those sections is re-rendered at a higher resolution. Once each section is done, they're all gathered and composited together, resulting in a new image that is 2x the size of the original.
https://github.com/lowfuel/progrock-stable
The main issue is probably adding one more requirement (realesrgan-ncnn-vulkan) and having people complain it doesn't work for them.
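The slicing step GoBIG needs can be sketched as computing overlapping tile boxes; tile_boxes is a hypothetical helper, and it assumes tile <= width and tile <= height:

```python
def tile_boxes(width, height, tile, overlap):
    """Compute (left, top, right, bottom) boxes covering an image with
    overlapping square tiles, as used by GoBIG-style section upscaling.

    Assumes tile <= width and tile <= height.
    """
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # make sure the final row/column reaches the image edge
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y, x + tile, y + tile) for y in ys for x in xs]
```

Each box would be cropped, re-rendered at higher resolution, and then composited back with the overlap blended.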
When using the image-to-image tab in the webui, if you try to use the Editor mode "Crop" (which is the default), this is the error you get:
Traceback (most recent call last):
File "Miniconda3\envs\ldm\lib\site-packages\gradio\routes.py", line 248, in run_predict
output = await app.blocks.process_api(
File "Miniconda3\envs\ldm\lib\site-packages\gradio\blocks.py", line 641, in process_api
processed_input = self.preprocess_data(fn_index, raw_input, state)
File "Miniconda3\envs\ldm\lib\site-packages\gradio\blocks.py", line 545, in preprocess_data
processed_input.append(block.preprocess(raw_input[i]))
File "Miniconda3\envs\ldm\lib\site-packages\gradio\components.py", line 1546, in preprocess
x, mask = x["image"], x["mask"]
TypeError: string indices must be integers
Looking a bit deeper, it seems one of the image editors from gradio is in "sketch" tool mode, but the data it has is just the name of the file, which is part of "select" mode.
Gradio sends telemetry by default; adding analytics_enabled=False to the Interface definition disables this.
However, I am not sure whether it also works in the current way the WebUI is set up with "gradio.Blocks".
demo = gr.Interface(fn=image_classifier, inputs="image", outputs="label", analytics_enabled=False)
I also saw one of the older gradio interfaces connect to an Amazon EC2 instance; possibly there's more telemetry to be disabled.
When you git clone RealESRGAN, the directory is Real-ESRGAN.
You must rename the directory or modify line 59 of webui.py
parser.add_argument("--realesrgan-dir", type=str, help="RealESRGAN directory", default=('./src/realesrgan' if os.path.exists('./src/realesrgan') else './RealESRGAN'))
with
parser.add_argument("--realesrgan-dir", type=str, help="RealESRGAN directory", default=('./src/realesrgan' if os.path.exists('./src/realesrgan') else './Real-ESRGAN'))
Trying to flag outputs right now won't work and results in a ValueError: too many values to unpack.
UI option for saving samples as JPG.
I'm posting the series of images shown below to demonstrate this problem; hopefully the dev can understand what I am facing, lol.
It always occurs on my end: the second and the fourth image always flip the mask and generate the wrong result, even if I lower the batch size to 1.
There might be a bug, just letting you know.
Thank you