
hentai-diffusion's Introduction

We Moved (Downloads Hosted On Huggingface)

The move gives us better management, more detailed descriptions and guides, direct contact for technical questions, and faster releases.

Read about our new and upcoming art projects, both AI art and otherwise. We hope everyone enjoys our models, and they will stay free forever.

Addressing the issue involving the fork: we were made aware of a supply-chain issue on the first of January and have been working to address it. One step is moving to an official website so that false or malicious files can't be added. We'll be adding a forum to the website over the next couple of months. You can contact us directly on the site.

hentai-diffusion's People

Contributors

delcos, hopto-dot


hentai-diffusion's Issues

Training HD-22 with LoRA/Dreambooth and contributing to project

I want to understand how this works in more depth and hopefully be able to contribute to the project, so I'm trying to start training the model. I'm following a guide, but I'm not sure about some specifics, so I was hoping you could help me out.

  • Is HD-22 based on Stable Diffusion 1 or 2? Do I train on 512x512 or 768x768?
  • Should the negative prompt be included in the training?
  • Is it best to merge the resulting model back with HD-22, or just use the newly trained model outright?
  • If I were to train a model that includes a character that isn't currently available in HD, how could I contribute it to HD?

I have more than a decade of software engineering experience, so if there's anything I can help with outside of training (even the website), please let me know. HD has provided me with a lot of entertainment, so I really want to contribute.

Thanks!
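For the first question, one rough way to check which base a checkpoint uses is to inspect its cross-attention weights. This is a heuristic sketch of mine, not anything from this repo; the file name is illustrative and the key name assumes the standard latent-diffusion UNet layout.

# Heuristic sketch (not project code): SD 1.x cross-attention expects 768-dim CLIP
# embeddings and is trained at 512x512; SD 2.x expects 1024-dim OpenCLIP embeddings
# (the 768-v variant is trained at 768x768). One UNet weight shape reveals which it is.
import torch

ckpt = torch.load("HD-22.ckpt", map_location="cpu")  # file name is illustrative
sd = ckpt.get("state_dict", ckpt)
key = "model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight"
context_dim = sd[key].shape[1]
if context_dim == 768:
    print("Looks SD 1.x based: train at 512x512")
elif context_dim == 1024:
    print("Looks SD 2.x based: 768x768 for the v-prediction variant")
else:
    print(f"Unrecognized context dimension: {context_dim}")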

can't find "launch()"

I want to turn on sharing, but I can't find the file or line of code with launch() where I can set share=True. Basically, I want to run the UI on my phone while my computer does the processing.
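If this is the AUTOMATIC1111 webui, the usual way to get a share link is to add --share to COMMANDLINE_ARGS in webui-user.bat rather than editing launch() directly; the public URL is created by Gradio's launch(). Below is a minimal, illustrative Gradio sketch of what that flag ultimately does, not the webui's own code.

# Minimal Gradio example (illustrative): share=True makes Gradio create a public
# *.gradio.live URL, which is what lets a phone reach a UI running on another machine.
import gradio as gr

def echo(text: str) -> str:
    return text

demo = gr.Interface(fn=echo, inputs="text", outputs="text")
demo.launch(share=True)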

Torch Not Downloading

When running the webui-user.bat file, the following error occurs:

Traceback (most recent call last):
  File "C:\Users\Aaryan\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\launch.py", line 250, in <module>
    prepare_enviroment()
  File "C:\Users\Aaryan\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\launch.py", line 171, in prepare_enviroment
    run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch")
  File "C:\Users\Aaryan\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\launch.py", line 34, in run
    raise RuntimeError(message)
RuntimeError: Couldn't install torch.
Command: "C:\Users\Aaryan\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\Scripts\python.exe" -m pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
Error code: 1
stdout: Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu113

stderr: ERROR: Could not find a version that satisfies the requirement torch==1.12.1+cu113 (from versions: none)
ERROR: No matching distribution found for torch==1.12.1+cu113
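One cause worth ruling out (an assumption, not something confirmed by this log): the cu113 wheels for torch 1.12.1 were only published for Python 3.7-3.10, so "no matching distribution found" often means the venv was created with a newer interpreter. A quick diagnostic sketch, meant to be run with the webui's venv\Scripts\python.exe:

# Diagnostic sketch: report which interpreter the launcher is actually using.
import platform
import sys

print("Interpreter:", sys.executable)
print("Python version:", platform.python_version())
if sys.version_info >= (3, 11):
    print("torch==1.12.1+cu113 has no wheels for this Python; recreate the venv with Python 3.10.x")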

Getting started point not clear for newbies (like me)

Hello to all!

I'm having a problem with step 3:


_Step 3:
Return to the main directory.

stable-diffusion-webui-master\

On windows run the webui-user.sh file. For linux use webui.sh.

Let it run and download all the needed requirements. It may seem frozen at times but it is doing fine, just don't close the terminal._


How am I supposed to run the webui-user.sh file? I tried looking elsewhere, but mostly found instructions for Linux rather than Windows.

Thanks and sorry for asking something that will be stupidly easy,

JM

KeyError: 'state_dict' with HD-18

Not sure if this is an issue with the code for my UI (I use Sygil-Dev) or an issue with the model, but I can't run HD-18. Attempting to run it gives me the following error message. If anyone has any insight, it would be greatly appreciated. HD-16 and 17 worked fine, so I assume it's a change in how models are handled that my UI hasn't been updated for.

 File "C:\Users\QSDFR\.conda\envs\ldm\lib\threading.py", line 890, in _bootstrap
    self._bootstrap_inner()
    │    └ <function Thread._bootstrap_inner at 0x00000210B3402F70>
    └ <Thread(ScriptRunner.scriptThread, started 16904)>

  File "C:\Users\QSDFR\.conda\envs\ldm\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
    │    └ <function Thread.run at 0x00000210B3402CA0>
    └ <Thread(ScriptRunner.scriptThread, started 16904)>

  File "C:\Users\QSDFR\.conda\envs\ldm\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
    │    │        │    │        │    └ {}
    │    │        │    │        └ <Thread(ScriptRunner.scriptThread, started 16904)>
    │    │        │    └ ()
    │    │        └ <Thread(ScriptRunner.scriptThread, started 16904)>
    │    └ <bound method ScriptRunner._run_script_thread of ScriptRunner(_session_id='e090a7d7-971c-4361-bd43-c2bbabe5b345', _main_scrip...
    └ <Thread(ScriptRunner.scriptThread, started 16904)>

  File "C:\Users\QSDFR\.conda\envs\ldm\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 298, in _run_script_thread
    self._run_script(request.rerun_data)
    │    │           │       └ <property object at 0x00000210BB519F40>
    │    │           └ ScriptRequest(type=<ScriptRequestType.RERUN: 'RERUN'>, _rerun_data=RerunData(query_string='', widget_states=widgets {
    │    │               id: "...
    │    └ <function ScriptRunner._run_script at 0x00000210BB5C4DC0>
    └ ScriptRunner(_session_id='e090a7d7-971c-4361-bd43-c2bbabe5b345', _main_script_path='D:\\StabDiff\\SD-20062022\\stable-diffusi...

  File "C:\Users\QSDFR\.conda\envs\ldm\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 563, in _run_script
    exec(code, module.__dict__)
         │     │      └ <member '__dict__' of 'module' objects>
         │     └ <module '__main__' from 'D:\\StabDiff\\SD-20062022\\stable-diffusion-webui\\scripts\\webui_streamlit.py'>
         └ <code object <module> at 0x000002128BAEC500, file "D:\StabDiff\SD-20062022\stable-diffusion-webui\scripts\webui_streamlit.py"...

  File "D:\StabDiff\SD-20062022\stable-diffusion-webui\scripts\webui_streamlit.py", line 203, in <module>
    layout()
    └ <function layout at 0x00000210C67565E0>

> File "D:\StabDiff\SD-20062022\stable-diffusion-webui\scripts\webui_streamlit.py", line 136, in layout
    layout()
    └ <function layout at 0x00000210D200E670>

  File "scripts\txt2img.py", line 664, in layout
    load_models(use_LDSR=st.session_state["use_LDSR"], LDSR_model=st.session_state["LDSR_model"],
    │                    │  │                                     │  └ <streamlit.runtime.state.session_state_proxy.SessionStateProxy object at 0x00000210BCFBDDF0>
    │                    │  │                                     └ <module 'hydralit' from 'C:\\Users\\QSDFR\\.conda\\envs\\ldm\\lib\\site-packages\\hydralit\\__init__.py'>
    │                    │  └ <streamlit.runtime.state.session_state_proxy.SessionStateProxy object at 0x00000210BCFBDDF0>
    │                    └ <module 'hydralit' from 'C:\\Users\\QSDFR\\.conda\\envs\\ldm\\lib\\site-packages\\hydralit\\__init__.py'>
    └ <function load_models at 0x00000210D1CBCF70>

  File "scripts\sd_utils\__init__.py", line 458, in load_models
    config, device, model, modelCS, modelFS = load_sd_model(custom_model)
                                              │             └ 'HD-18'
                                              └ <function load_sd_model at 0x00000210D1DACF70>

  File "C:\Users\QSDFR\.conda\envs\ldm\lib\site-packages\decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
           │      │       │        │        └ {}
           │      │       │        └ ('HD-18',)
           │      │       └ ()
           │      └ <function load_sd_model at 0x00000210D1DACEE0>
           └ <function retry.<locals>.retry_decorator at 0x00000210D1DACCA0>

  File "C:\Users\QSDFR\.conda\envs\ldm\lib\site-packages\retry\api.py", line 73, in retry_decorator
    return __retry_internal(partial(f, *args, **kwargs), exceptions, tries, delay, max_delay, backoff, jitter,
           │                │       │   │       │        │           │      │      │          │        └ 0
           │                │       │   │       │        │           │      │      │          └ 1
           │                │       │   │       │        │           │      │      └ None
           │                │       │   │       │        │           │      └ 0
           │                │       │   │       │        │           └ 5
           │                │       │   │       │        └ <class 'Exception'>
           │                │       │   │       └ {}
           │                │       │   └ ('HD-18',)
           │                │       └ <function load_sd_model at 0x00000210D1DACEE0>
           │                └ <class 'functools.partial'>
           └ <function __retry_internal at 0x00000210D0578EE0>

  File "C:\Users\QSDFR\.conda\envs\ldm\lib\site-packages\retry\api.py", line 33, in __retry_internal
    return f()
           └ functools.partial(<function load_sd_model at 0x00000210D1DACEE0>, 'HD-18')

  File "scripts\sd_utils\__init__.py", line 1570, in load_sd_model
    model = load_model_from_config(config, ckpt_path)
            │                      │       └ 'models\\custom\\HD-18.ckpt'
            │                      └ {'model': {'base_learning_rate': 0.0001, 'target': 'ldm.models.diffusion.ddpm.LatentDiffusion', 'params': {'linear_start': 0....
            └ <function load_model_from_config at 0x00000210D1DB5040>

  File "scripts\sd_utils\__init__.py", line 498, in load_model_from_config
    sd = pl_sd["state_dict"]
         └ {'betas': tensor([0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009, 0.0009,
                   0.0009, 0.0009, 0.0009, 0.0...

KeyError: 'state_dict'
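One way to narrow this down (a diagnostic sketch, not part of Sygil's code) is to look at what the downloaded file actually contains before the UI reads pl_sd["state_dict"]; from the values shown above, HD-18 appears to be a bare state dict with the weights at the top level.

# Diagnostic sketch: if "state_dict" is missing from the top-level keys, the checkpoint is
# either a bare state dict (weights stored at the top level) or an incomplete download.
import torch

pl_sd = torch.load("models/custom/HD-18.ckpt", map_location="cpu")  # path from the traceback
print(type(pl_sd))
print(sorted(pl_sd.keys())[:10])
print("has state_dict:", "state_dict" in pl_sd)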

Negative Prompt not working

The pictures it generates aren't based on the tags. If I copy the default tags from you or from other people who posted them here in the issues, it just generates random stuff:

[attached image: test]

HD 18 does not install on Diffusionbee

HD-17 can be installed with no issue, but HD-18 fails to be added and gives the following error:

Traceback (most recent call last):
  File "convert_model.py", line 28, in <module>
KeyError: 'state_dict'
[3923] Failed to execute script 'convert_model' due to unhandled exception!

Unable to create venv in directory venv

In Step 3, when I run the webui-user.bat file I get the following:


Creating venv in directory venv using python "C:\Users\XXXXXX\AppData\Local\Programs\Python\Python311\python.exe"
Unable to create venv in directory venv

exit code: 3

stderr:
The system cannot find the specified path.

Launch unsuccessful. Exiting.
Press any key to continue . . .

What can I do?
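One thing worth checking (an assumption based on the error, not a confirmed cause): exit code 3 with "the system cannot find the specified path" during venv creation often points at a broken or partially removed Python install, or at a PYTHON path in webui-user.bat that no longer exists. A small sketch to test venv creation with the same interpreter directly:

# Sketch: run this with the python.exe the launcher prints. If this also fails, the
# Python install itself (not the webui) is the problem. Note as well that Python 3.11
# was newer than what the webui's pinned torch supported at the time.
import sys
import venv

print(sys.executable, sys.version)
venv.EnvBuilder(with_pip=True).create("venv-test")  # creates .\venv-test if the install is healthy
print("venv created OK")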

Struggling to have certain details generate

Heya! First off, this is a -fantastic- resource and I appreciate it immensely! I'm having a lot of trouble getting pussy/no panties to show. Even if I specify, it'll either warp to show nothing, show other articles of clothing covering, or will create panties regardless. Any tips to avoid this problem?

Error verifying pickled file

I am super new to using Python etc. This is the error I am getting:
Error verifying pickled file from C:\Users\xxdau/.cache\huggingface\transformers\c506559a5367a918bab46c39c79af91ab88846b49c8abd9d09e699ae067505c6.6365d436cc844f2f2b4885629b559d8ff0938ac484c01a6796538b2665de96c7:
Traceback (most recent call last):
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\modules\safe.py", line 81, in check_pt
    with zipfile.ZipFile(filename) as z:
  File "C:\Users\xxdau\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1267, in __init__
    self._RealGetContents()
  File "C:\Users\xxdau\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1334, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\modules\safe.py", line 135, in load_with_extra
    check_pt(filename, extra_handler)
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\modules\safe.py", line 102, in check_pt
    unpickler.load()
_pickle.UnpicklingError: persistent IDs in protocol 0 must be ASCII strings

-----> !!!! The file is most likely corrupted !!!! <-----
You can skip this check with --disable-safe-unpickle commandline argument, but that is not going to help you.

Traceback (most recent call last):
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\launch.py", line 295, in <module>
    start()
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\launch.py", line 290, in start
    webui.webui()
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\webui.py", line 132, in webui
    initialize()
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\webui.py", line 62, in initialize
    modules.sd_models.load_model()
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\modules\sd_models.py", line 260, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 100, in __init__
    self.transformer = CLIPTextModel.from_pretrained(version)
  File "C:\Users\xxdau\OneDrive\Desktop\SUPER SD\stable-diffusion-webui\venv\lib\site-packages\transformers\modeling_utils.py", line 2006, in from_pretrained
    loaded_state_dict_keys = [k for k in state_dict.keys()]
AttributeError: 'NoneType' object has no attribute 'keys'
Press any key to continue . . .
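The file named at the top of the log is a cached Hugging Face transformers download (the CLIP text encoder), not the HD checkpoint itself, so one plausible fix (an assumption on my part) is simply to delete the corrupt cache entry and let it re-download on the next launch:

# Sketch: remove the cached file named in the error message so transformers fetches it
# again. The path below is the one from the log above.
import os

cached = os.path.expanduser(
    r"~/.cache/huggingface/transformers/"
    "c506559a5367a918bab46c39c79af91ab88846b49c8abd9d09e699ae067505c6."
    "6365d436cc844f2f2b4885629b559d8ff0938ac484c01a6796538b2665de96c7"
)
print("size on disk:", os.path.getsize(cached), "bytes")  # a tiny size usually means a failed download
os.remove(cached)  # forces a clean re-download on the next start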

Torch is not able to use GPU

I'm attempting to run webui.sh on an Arch Linux install (Linux Arch 6.0.9-arch1-1).
When launching, I get this error:

################################################################
Launching launch.py...
################################################################
Python 3.10.8 (main, Nov  1 2022, 14:18:21) [GCC 12.2.0]
Commit hash: 828438b4a190759807f9054932cae3a8b880ddf1
Traceback (most recent call last):
  File "/home/foxmaccloud/stable-diffusion-webui/launch.py", line 250, in <module>
    prepare_enviroment()
  File "/home/foxmaccloud/stable-diffusion-webui/launch.py", line 174, in prepare_enviroment
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "/home/foxmaccloud/stable-diffusion-webui/launch.py", line 58, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "/home/foxmaccloud/stable-diffusion-webui/launch.py", line 34, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "/home/foxmaccloud/stable-diffusion-webui/venv/bin/python3" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout: <empty>
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

I know that CUDA is Nvidia technology, so I was thinking it could be an issue with me having an AMD GPU?
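That is the likely explanation: the stock torch build the launcher installs is CUDA-only, and CUDA needs an Nvidia GPU. On an AMD card you would need a ROCm build of torch, or run CPU-only with --skip-torch-cuda-test (very slow). A small diagnostic sketch, just to see what the installed torch can use:

# Diagnostic sketch: reports whether this torch build can see a CUDA device and whether
# it is a ROCm (HIP) build; on an AMD GPU with a CUDA-only build, both come up empty.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("HIP/ROCm version:", getattr(torch.version, "hip", None))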

animations

Anyone have any ideas on how to do animations like the one shown in the post? Do we need specific settings or prompts?

AttributeError: 'NoneType'

I was having a lot of trouble getting this started, as I'm pretty new to handling Python and this stuff in general.
I'm now running into this error when starting webui-user.bat (Windows) and can't seem to find a solution:

loaded_state_dict_keys = [k for k in state_dict.keys()]
AttributeError: 'NoneType' object has no attribute 'keys'

Also, do I need everything you put in that "requirements" document?
If so, it seems you left a hell of a lot of stuff out of the described installation process.
And as a newbie I have no idea how to install all of that.

KeyError: 'state_dict' Cannot load model

I'm having an issue with the current model, HD-18. The error message I get is this:

KeyError                                  Traceback (most recent call last)
<ipython-input-4-127af4bf47ad> in <module>
    110 opt = config()
    111 config = OmegaConf.load(f"{opt.config}")
--> 112 model = load_model_from_config(config, f"{opt.ckpt}")
    113 model = model.to(device)
    114 batch_idx = 0

<ipython-input-4-127af4bf47ad> in load_model_from_config(config, ckpt, verbose)
     79     if "global_step" in pl_sd:
     80         print(f"Global Step: {pl_sd['global_step']}")
---> 81     sd = pl_sd["state_dict"]
     82     model = instantiate_from_config(config.model)
     83     m, u = model.load_state_dict(sd, strict=False)

KeyError: 'state_dict'

16 and 17 both worked for me, so it's confusing why 18 is showing such an error. I tried redownloading the ckpt and even loaded the model into a Google Colab. The model failed to work in all of these instances due to this error.
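For reference, here is a hedged sketch of the kind of fallback that avoids this particular KeyError (an illustration of the workaround, not the project's or any UI's actual code): some checkpoints wrap their weights in a "state_dict" key, while others are saved as the bare state dict.

# Illustrative fallback, assuming the .ckpt file itself is intact: treat the whole
# file as the state dict whenever the "state_dict" wrapper key is absent.
import torch

def load_weights(ckpt_path: str) -> dict:
    pl_sd = torch.load(ckpt_path, map_location="cpu")
    if "global_step" in pl_sd:
        print(f"Global Step: {pl_sd['global_step']}")
    return pl_sd.get("state_dict", pl_sd)  # fall back to the top-level dict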

oval purple blob

I find a lot of my images have oval purple blobs, especially where ladies have bare shoulders and where leather straps etc. would create shadow on the skin.

Is this a flaw in the model, or is it a consequence of certain artists' work or styles that I can just neg out?

Pip version 23.1.2 Not being Recognized

During the initial installation I got an error message that said my pip version needed to be updated to version 23.1.2. I installed the new pip version via the command prompt and it said it was all successful. I tried running the install script again, but it still said that I had an old version installed and didn't recognize that I had the new one.

Any help would be nice XD
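A possible explanation (an assumption, since the full log isn't shown): the webui installs everything into its own venv, so upgrading pip system-wide doesn't change the pip the launcher actually checks. A sketch for upgrading pip inside the venv, run with venv\Scripts\python.exe:

# Sketch: upgrades pip for whichever interpreter runs this script; run it with the
# webui's venv\Scripts\python.exe so the venv's own pip gets upgraded.
import subprocess
import sys

print("Upgrading pip for:", sys.executable)
subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", "pip"], check=True)
subprocess.run([sys.executable, "-m", "pip", "--version"], check=True)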

How the F do you make this run?!

I've been wasting like an hour of my life trying to get this to run. I followed the instructions on the front page, ran webui-user.bat and it immediately gives the error message that it can't install torch and torchvision.

I saw the thread saying the same thing, and the only real help was being told to install anaconda (which shouldn't be necessary) and pip (which IS necessary).

The error still came up anyway, so I opened launch.py in Notepad to read the commands themselves and try to find where it's trying to download and install these packages. I managed to easily find the URL where torch and torchvision are being downloaded from, so I got those .whl files, but I've no idea how to install them, so I just put them in a Python folder that had other .whl files. Ran Python, but no idea how to get it to install the packages anyway...

Still, I edited launch.py again to ignore torch and just try to install the next thing. Still ran into another error message. Guess what. Can't install gfpgan. The error message says you need git installed. So I downloaded and installed that.
Ran the bat again. ANOTHER ERROR MESSAGE. Can't install gfpgan, followed by a shitload of text explaining the error.

How can ANYONE get this crap to work at all when you have to jump through so many hoops? Why isn't there just a normal installer??!?!? Why is the TUTORIAL so grossly incomplete? How the fuck is anyone supposed to troubleshoot this when your "installer" doesn't even work when you have the required software? FIX YOUR TUTORIAL OR MAKE AN INSTALLER THAT DOESN'T MAKE ME WANT TO BASH MY BRAINS AGAINST THE WALL

Congratulations for the model!

I came here just to thank you for the model. I've tested several famous models and this was the one that pleased me the most; it's unbelievable that it isn't better known. When the negative prompts are correct it looks amazing. I mixed it with 20% of Anything 3.0 and it turned out even better. I definitely can't live without this model anymore... keep improving it, please!

no xformers, do I need it? (HD-22)

During installation it skipped over installing the 'xformers' module, and it reminds me of that every time I boot the program. What does it do, and would it help the quality of the pictures?
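For context: xformers provides memory-efficient attention kernels, so it mainly speeds up generation and reduces VRAM use rather than changing image quality; it is optional. A tiny sketch to check whether it is present in the environment:

# Sketch: reports whether xformers is importable; without it the webui falls back to
# its standard attention implementation.
try:
    import xformers
    print("xformers", xformers.__version__, "is available")
except ImportError:
    print("xformers is not installed; standard attention will be used")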

Guide for mastering the prompt

Hello,

I would like to ask whether you can suggest material on how to write advanced prompts.

I'm probably blind, but I can't find any material on how much effect "()", "{}", and ":number" have on the prompt.

For example, you recommended doing a "Sanity Test" in which you used advanced parameters.

If I play with the parameters, I can see the difference, but I don't understand the technical mechanism behind them.

If I break down my question into points:

  1. I don't understand by how much the result is multiplied when using "()"
  2. I don't know what the difference is between "()" and "{}"
  3. If you give a weight to a parameter, how much does it affect the result? For example, are "forest" and "(forest:1)" parameters of the same weight? How will values above (:1.5) and below (:0.5) be interpreted?

Maybe the prompt can do a lot more things that I don't know about. I don't want to waste your time having you explain how it all works; if you can point me towards an article or book that describes these details, I would be grateful.
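For what it's worth, here is a rough sketch of how the emphasis syntax is commonly documented for the AUTOMATIC1111-style UI (the numbers are the usual documented approximations, not something taken from this repo): each "( )" multiplies attention by about 1.1, each "[ ]" divides by about 1.1, "(word:1.5)" sets the multiplier explicitly, and "{ }" is NovelAI syntax (about 1.05 per brace) that this UI does not treat specially.

# Rough sketch of the commonly documented weighting rules (approximate values).
def emphasis(parens: int = 0, brackets: int = 0, explicit: float = None) -> float:
    if explicit is not None:          # "(word:1.5)" style overrides nesting
        return explicit
    return round(1.1 ** parens / 1.1 ** brackets, 3)

print(emphasis())                     # 1.0   -> "forest" is the same as "(forest:1)"
print(emphasis(parens=2))             # 1.21  -> "((forest))"
print(emphasis(brackets=1))           # 0.909 -> "[forest]"
print(emphasis(explicit=0.5))         # 0.5   -> "(forest:0.5)"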

I have one question about your model:
Do you have a library of the words that were used to train the model? I understand that you used materials from "34" and "Gelbooru" and most of the tags will do, but what should I do with little-known tags or non-obvious cases?
For example:

  • Can I be sure that character X or tag Y is in your model? Can I find out without testing it in a prompt?
  • I would not have known that you can use tags such as "masterpiece" or "best quality" in the prompt.

How am I supposed to know what the prompt can accept, without guessing at words the model may not understand?

I apologize in advance if there are any very obvious points here. I'm still new to working with Stable Diffusion and its branches.

I have already worked with your model. In some places I am shocked by how well hands, details, and objects are drawn. You have done an excellent job. I will look forward to your updates :)

I keep getting this error "LayerNormKernelImpl"

Hello! I got it running, eventually, after a grueling forever... and now there's just one obstacle: the error 'RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'' whenever I press generate with my test prompt "1girl, anime". Don't know if I am going to receive any aid, so I am bracing myself for the long wait.
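A hedged explanation of what usually triggers this (an assumption about this particular setup): the model is running on the CPU in half precision, and older PyTorch CPU builds have no float16 LayerNorm kernel, which raises exactly this error. The usual webui workaround is adding --no-half (or --precision full) to COMMANDLINE_ARGS, or running on a CUDA GPU. The sketch below just reproduces the failure in isolation:

# Sketch: a half-precision LayerNorm on the CPU, which is what raises
# '"LayerNormKernelImpl" not implemented for 'Half'' on older PyTorch builds.
import torch

x = torch.randn(4, 8, dtype=torch.float16)             # fp16 tensor on the CPU
layer_norm = torch.nn.LayerNorm(8).to(torch.float16)
try:
    layer_norm(x)
    print("this PyTorch build supports fp16 LayerNorm on CPU")
except RuntimeError as err:
    print(err)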

error starting up

I am getting an error saying it can't find the command "git" and asking if I have it in my PATH. Does anyone know how to fix this?

Tutorial sucks

The guide is not up to date and doesn't address the possible errors we might get, or even novice questions like why I can't run that damn webui-user.sh file. I love these kinds of open source projects, but when they lack proper guidance they become torture...

Help needed

Couldn't launch python

exit code: 9009

stderr:
Nie mo
Launch unsuccessful. Exiting.
Press any key to continue . . .

This appears when I try to run webui-user.bat

Can't get it to generate what I want

I'm using the universal negative and the following prompt:
1girl, anime, monika_(doki_doki_literature_club)

Generated four images; these are the results:

[attached image: 00001-371883470-1girl, anime,_]

Log:

prompt,seed,width,height,sampler,cfgs,steps,filename,negative_prompt
"1girl, anime, monika_(doki_doki_literature_club)",371883470,512,512,Euler a,16,20,"00001-371883470-1girl, anime, monika_(doki_doki_literature_club).png","(((deformed))), blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), fused fingers, messy drawing, broken legs censor, censored, censor_bar, multiple breasts, (mutated hands and fingers:1.5), (long body :1.3), (mutation, poorly drawn :1.2), black-white, bad anatomy, liquid body, liquidtongue, disfigured, malformed, mutated, anatomical nonsense, text font ui, error, malformed hands, long neck, blurred, lowers, low res, bad anatomy, bad proportions, bad shadow, uncoordinated body, unnatural body, fused breasts, bad breasts, huge breasts, poorly drawn breasts, extra breasts, liquid breasts, heavy breasts, missingbreasts, huge haunch, huge thighs, huge calf, bad hands, fused hand, missing hand, disappearing arms, disappearing thigh, disappearing calf, disappearing legs, fusedears, bad ears, poorly drawn ears, extra ears, liquid ears, heavy ears, missing ears, fused animal ears, bad animal ears, poorly drawn animal ears, extra animal ears, liquidanimal ears, heavy animal ears, missing animal ears, text, ui, error, missing fingers, missing limb, fused fingers, one hand with more than 5 fingers, one hand with less than5 fingers, one hand with more than 5 digit, one hand with less than 5 digit, extra digit, fewer digits, fused digit, missing digit, bad digit, liquid digit, colorful tongue, blacktongue, cropped, watermark, username, blurry, JPEG artifacts, signature, 3D, 3D game, 3D game scene, 3D character, malformed feet, extra feet, bad feet, poorly drawnfeet, fused feet, missing feet, extra shoes, bad shoes, fused shoes, more than two shoes, poorly drawn shoes, bad gloves, poorly drawn gloves, fused gloves, bad cum, poorly drawn cum, fused cum, bad hairs, poorly drawn hairs, fused hairs, big muscles, ugly, bad face, fused face, poorly drawn face, cloned face, big face, long face, badeyes, fused eyes poorly drawn eyes, extra eyes, malformed limbs, more than 2 nipples, missing nipples, different nipples, fused nipples, bad nipples, poorly drawnnipples, black nipples, colorful nipples, gross proportions. 
short arm, (((missing arms))), missing thighs, missing calf, missing legs, mutation, duplicate, morbid, mutilated, poorly drawn hands, more than 1 left hand, more than 1 right hand, deformed, (blurry), disfigured, missing legs, extra arms, extra thighs, more than 2 thighs, extra calf,fused calf, extra legs, bad knee, extra knee, more than 2 legs, bad tails, bad mouth, fused mouth, poorly drawn mouth, bad tongue, tongue within mouth, too longtongue, black tongue, big mouth, cracked mouth, bad mouth, dirty face, dirty teeth, dirty pantie, fused pantie, poorly drawn pantie, fused cloth, poorly drawn cloth, badpantie, yellow teeth, thick lips, bad camel toe, colorful camel toe, bad asshole, poorly drawn asshole, fused asshole, missing asshole, bad anus, bad pussy, bad crotch, badcrotch seam, fused anus, fused pussy, fused anus, fused crotch, poorly drawn crotch, fused seam, poorly drawn anus, poorly drawn pussy, poorly drawn crotch, poorlydrawn crotch seam, bad thigh gap, missing thigh gap, fused thigh gap, liquid thigh gap, poorly drawn thigh gap, poorly drawn anus, bad collarbone, fused collarbone, missing collarbone, liquid collarbone, strong girl, obesity, worst quality, low quality, normal quality, liquid tentacles, bad tentacles, poorly drawn tentacles, split tentacles, fused tentacles, missing clit, bad clit, fused clit, colorful clit, black clit, liquid clit, QR code, bar code, censored, safety panties, safety knickers, beard, furry, pony, pubic hair, mosaic, futa, testis, (((deformed))), blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), fused fingers, messy drawing, broken legs censor, censored, censor_bar, multiple breasts, (mutated hands and fingers:1.5), (long body :1.3), (mutation, poorly drawn :1.2), black-white, bad anatomy, liquid body, liquidtongue, disfigured, malformed, mutated, anatomical nonsense, text font ui, error, malformed hands, long neck, blurred, lowers, low res, bad anatomy, bad proportions, bad shadow, uncoordinated body, unnatural body, fused breasts, bad breasts, huge breasts, poorly drawn breasts, extra breasts, liquid breasts, heavy breasts, missingbreasts, huge haunch, huge thighs, huge calf, bad hands, fused hand, missing hand, disappearing arms, disappearing thigh, disappearing calf, disappearing legs, fusedears, bad ears, poorly drawn ears, extra ears, liquid ears, heavy ears, missing ears, fused animal ears, bad animal ears, poorly drawn animal ears, extra animal ears, liquidanimal ears, heavy animal ears, missing animal ears, text, ui, error, missing fingers, missing limb, fused fingers, one hand with more than 5 fingers, one hand with less than5 fingers, one hand with more than 5 digit, one hand with less than 5 digit, extra digit, fewer digits, fused digit, missing digit, bad digit, liquid digit, colorful tongue, blacktongue, cropped, watermark, username, blurry, JPEG artifacts, signature, 3D, 3D game, 3D game scene, 3D character, malformed feet, extra feet, bad feet, poorly drawnfeet, fused feet, missing feet, extra shoes, bad shoes, fused shoes, more than two shoes, poorly drawn shoes, bad gloves, poorly drawn gloves, fused gloves, bad cum, poorly drawn cum, fused cum, bad hairs, poorly drawn hairs, fused hairs, big muscles, ugly, bad face, fused face, poorly drawn face, cloned face, big face, long face, badeyes, fused eyes poorly drawn eyes, extra eyes, malformed limbs, more than 2 nipples, missing nipples, different nipples, fused nipples, bad nipples, poorly drawnnipples, black 
nipples, colorful nipples, gross proportions. short arm, (((missing arms))), missing thighs, missing calf, missing legs, mutation, duplicate, morbid, mutilated, poorly drawn hands, more than 1 left hand, more than 1 right hand, deformed, (blurry), disfigured, missing legs, extra arms, extra thighs, more than 2 thighs, extra calf,fused calf, extra legs, bad knee, extra knee, more than 2 legs, bad tails, bad mouth, fused mouth, poorly drawn mouth, bad tongue, tongue within mouth, too longtongue, black tongue, big mouth, cracked mouth, bad mouth, dirty face, dirty teeth, dirty pantie, fused pantie, poorly drawn pantie, fused cloth, poorly drawn cloth, badpantie, yellow teeth, thick lips, bad camel toe, colorful camel toe, bad asshole, poorly drawn asshole, fused asshole, missing asshole, bad anus, bad pussy, bad crotch, badcrotch seam, fused anus, fused pussy, fused anus, fused crotch, poorly drawn crotch, fused seam, poorly drawn anus, poorly drawn pussy, poorly drawn crotch, poorlydrawn crotch seam, bad thigh gap, missing thigh gap, fused thigh gap, liquid thigh gap, poorly drawn thigh gap, poorly drawn anus, bad collarbone, fused collarbone, missing collarbone, liquid collarbone, strong girl, obesity, worst quality, low quality, normal quality, liquid tentacles, bad tentacles, poorly drawn tentacles, split tentacles, fused tentacles, missing clit, bad clit, fused clit, colorful clit, black clit, liquid clit, QR code, bar code, censored, safety panties, safety knickers, beard, furry, pony, pubic hair, mosaic, futa, testis"

What am I doing wrong?

--Suggestion-- Certain parts are inaccurately drawn, possible to train AI to draw them better?

Oftentimes body parts are drawn incorrectly by the AI when the prompt is NSFW.

  • Feet often have multiple heels and almost never have 5 toes
  • The body is often contorted (e.g. the upper body is drawn lying on its back but the butt is depicted as if they were on their hands and knees)
  • If not specified by the prompt, nipples are usually not drawn at all
  • Breasts, if held by some sort of string, will have two sets of nipples, one above the string and one below
  • The anus is almost never drawn correctly (usually just a red hole with no seam, if it is present at all)
  • Arms often consist of only the shoulder and disappear into the body or clothing. Even when drawn, they sometimes turn into other anatomy, like breasts or legs
  • Limbs often bend backwards or sideways

In addition, it is quite difficult to get the AI to draw a dick correctly. Even after being very literal and pushy with the prompt just to get one drawn in order to show the deed, the 'dick' usually ends up not being human at all. They come in all shapes and colors, oftentimes very thin and white, red like an animal's, or just made up of whatever was randomly in the environment. As for getting the AI to depict services with said anatomy such as hotdogs, titjobs, footjobs, handjobs, etc., forget it; when it is drawn in, it always goes in the mouth, ass, or pussy.

Overall, the AI works great for depicting a single girl in various states of nudity. Once sex of any kind is introduced, however, it quickly fails to perform properly. That would be my biggest suggestion for a direction to work towards, given the moniker 'Hentai Diffusion'.

PS - something was misspelled on the readme.
Readme: Dispite the name there is no adult content on this page --> Despite the name there is no adult content on this page

HD 18 Out of memory error on webui, safetensors, pruning

On Auto111 running in the cloud, loading HD-18 caused an out-of-memory failure, but HD-17 works well without issue.

Can I suggest converting the .ckpt to .safetensors? This is more memory efficient and lets the model load faster in Auto111.
https://github.com/huggingface/safetensors

Also, have you tried pruning the model? URPM reduced its size from 8 GB to 1.6 GB using the script below.

https://github.com/saftle/stable-diffusion-prune
(-eca command).
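For reference, a hedged sketch of a .ckpt-to-.safetensors conversion (file names are illustrative, and this is not an official script from this repo): safetensors stores the tensors without pickle, which is what makes loading faster and safer.

# Conversion sketch: keep only the tensors (safetensors cannot store arbitrary Python
# objects) and write them out; requires `pip install safetensors`.
import torch
from safetensors.torch import save_file

ckpt = torch.load("HD-18.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)
tensors = {k: v.contiguous() for k, v in state_dict.items() if isinstance(v, torch.Tensor)}
save_file(tensors, "HD-18.safetensors")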

Image not generating/stuck

So I finished setting everything up, and now when I put in a prompt ('1girl, anime,' included) it starts but gets stuck with almost no progress. I was wondering if it's supposed to take a while or something, because it just seems to be stuck.
[attached screenshot: obraz_2022-12-22_211707777]

Edit: It's been 3 hours and there is still no progress
