
Stable Diffusion AI client app for Android

Home Page: https://sdai.moroz.cc

License: GNU Affero General Public License v3.0

Kotlin 99.99% Shell 0.01%
android automatic1111 clean-architecture compose compose-ui kotlin multimodule-android-app mvi stable-diffusion stable-diffusion-webui

stable-diffusion-android's Introduction


Stable-Diffusion-Android


Google Play F-Droid 4pda

Stable Diffusion AI is an easy-to-use app that lets you quickly generate images from text or other images with just a few clicks. With this app, you can communicate with your own server and generate high-quality images in seconds.

Features

  • Can use a server environment powered by AI Horde (a crowdsourced distributed cluster of Stable Diffusion workers)
  • Can use a server environment powered by Stable-Diffusion-WebUI (AUTOMATIC1111)
  • Can use a server environment powered by the Hugging Face Inference API
  • Can use a server environment powered by OpenAI (DALL-E 2, DALL-E 3)
  • Can use a server environment powered by Stability AI
  • Can use a local environment powered by Local Diffusion (Beta)
  • Supports the original Txt2Img and Img2Img modes
    • Positive and negative prompt support
    • Supports dynamic sizes from 64 to 2048 px (for width and height)
    • Selection of different sampling methods (available samplers are loaded from the server)
    • Unique seed input
    • Dynamic sampling steps in the range 1 to 150
    • Dynamic CFG scale in the range 1.0 to 30.0
    • Restore faces option
    • ( Img2Img ONLY ) : Image selection from device gallery (requires user permission)
    • ( Img2Img ONLY ) : Capture input image from camera (requires user permission)
    • ( Img2Img ONLY ) : Fetching random image for the input
    • ( Img2Img ONLY ) : Inpaint (for A1111)
      • Mask blur (1 to 64)
      • Mask mode (Masked, not masked)
      • Masked content (Fill, Original, Latent noise, Latent nothing)
      • Inpaint area (Whole picture, Only masked)
      • Only masked padding (0 to 256 px)
    • Batch generation with a maximum of 20 images (for A1111 and Horde)
    • Lora picker (for A1111)
    • Textual inversion picker (for A1111)
    • Hypernetworks picker (for A1111)
    • SD Model picker (for A1111)
  • In-app Gallery, stored locally, containing all AI-generated images
    • Displays a grid of generated images
    • Image detail view: zoom, pinch, generation info
    • Export the whole gallery to a .zip file
    • Export a single photo to a .zip file
  • Settings
    • WebUI server URL
    • Active SD Model selection
    • Server availability monitoring (http-ping method)
    • Enable/Disable auto-saving of generated images
    • Enable/Disable saving generated images to the Download/SDAI Android MediaStore folder
    • Clear gallery / app cache
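For anyone wiring the inpaint settings above into their own scripts, here is a hedged sketch of how they roughly map onto fields of AUTOMATIC1111's /sdapi/v1/img2img payload. The field names and value encodings are assumptions based on the A1111 API and should be checked against your WebUI version:

```python
# Hypothetical mapping of the app's inpaint settings onto A1111's
# /sdapi/v1/img2img payload fields (field names assume the A1111 API).
def build_inpaint_payload(mask_blur=4, mask_mode="masked",
                          masked_content="original",
                          inpaint_area="whole picture", padding=32):
    fill_modes = {"fill": 0, "original": 1,
                  "latent noise": 2, "latent nothing": 3}
    return {
        "mask_blur": mask_blur,                               # 1..64
        "inpainting_mask_invert": 0 if mask_mode == "masked" else 1,
        "inpainting_fill": fill_modes[masked_content],
        "inpaint_full_res": inpaint_area != "whole picture",  # True = only masked
        "inpaint_full_res_padding": padding,                  # 0..256 px
    }

payload = build_inpaint_payload(masked_content="latent noise")
print(payload["inpainting_fill"])  # 2
```

These fields would be merged into the regular img2img request body alongside the prompt, the input image, and the mask.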

Setup instruction

Option 1: Use your own Automatic1111 instance

This requires the AUTOMATIC1111 WebUI running in server mode.

You can run it either on your own hardware with a modern Nvidia or AMD GPU, or on Google Colab.

  1. Follow the setup instructions in the Stable-Diffusion-WebUI repository.
  2. Add the arguments --api --listen to the command line arguments of the WebUI launch script.
  3. After starting the server, note the IP address or URL of your WebUI server.
  4. On first launch, the app will ask you for the server URL; enter it and press the Connect button. If you want to change the server URL later, go to the Settings tab, choose the Configure option, and repeat the setup flow.

If for some reason you cannot run your own server instance, you can toggle the Demo mode switch on the server setup page: it lets you test the app and get familiar with it, but it returns mock images instead of AI-generated ones.
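As a quick smoke test that the --api flag took effect, a minimal sketch of a txt2img call is shown below. The base URL is a placeholder for your own server; the endpoint path and payload fields follow the standard A1111 API but should be verified against your WebUI version. The request is built but not sent:

```python
# Minimal sketch of a txt2img request against a running A1111 instance
# started with "--api --listen". BASE_URL is a placeholder.
import json
import urllib.request

BASE_URL = "http://192.168.1.10:7860"  # placeholder: your WebUI address

payload = {
    "prompt": "a lighthouse at sunset",
    "negative_prompt": "blurry",
    "width": 512, "height": 512,
    "steps": 30,          # 1..150, as exposed in the app
    "cfg_scale": 7.0,     # 1.0..30.0
    "sampler_name": "Euler a",
}
req = urllib.request.Request(
    f"{BASE_URL}/sdapi/v1/txt2img",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment with a live server
print(req.full_url)
```

A successful response contains the generated images as base64-encoded strings.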

Option 2: Use AI Horde

AI Horde is a crowdsourced distributed cluster of image generation and text generation workers.

AI Horde requires an API key. The app lets you use either the default anonymous key ("0000000000") or your own. You can sign up and get your own AI Horde API key here.
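For illustration, here is a minimal sketch of submitting an async generation request to AI Horde using the anonymous key. The endpoint and the apikey header follow the AI Horde v2 API as I understand it; treat them as assumptions to verify against the official docs. The request is built but not sent:

```python
# Hedged sketch of an AI Horde async generation request; the anonymous
# key "0000000000" matches the app's default.
import json
import urllib.request

API_KEY = "0000000000"  # anonymous key; replace with your own
payload = {"prompt": "a watercolor fox", "params": {"n": 1}}
req = urllib.request.Request(
    "https://aihorde.net/api/v2/generate/async",
    data=json.dumps(payload).encode(),
    headers={"apikey": API_KEY, "Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment to actually submit the job
print(req.full_url)
```

Generation is asynchronous: the response returns a job id that is then polled for the finished images.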

Option 3: Hugging Face Inference

The Hugging Face Inference API lets you test and evaluate over 150,000 publicly accessible machine learning models, or your own private models, via simple HTTP requests, with fast inference hosted on Hugging Face's shared infrastructure. The service is free but rate-limited.

Hugging Face Inference requires an API key, which can be created in your Hugging Face account settings.
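A hedged sketch of what a raw Inference API call looks like. The model id is just an example, the token value is a placeholder, and the Bearer-token header is the standard Hugging Face auth scheme:

```python
# Hedged sketch of a Hugging Face Inference API text-to-image call;
# MODEL is an example id, HF_TOKEN is a placeholder.
import json
import urllib.request

HF_TOKEN = "hf_xxx"  # placeholder: created in Hugging Face account settings
MODEL = "runwayml/stable-diffusion-v1-5"  # example model id

req = urllib.request.Request(
    f"https://api-inference.huggingface.co/models/{MODEL}",
    data=json.dumps({"inputs": "a castle in the clouds"}).encode(),
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
)
# urllib.request.urlopen(req).read()  # returns raw image bytes on success
print(req.full_url)
```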

Option 4: OpenAI

OpenAI provides a service for text-to-image generation using the DALL-E 2 or DALL-E 3 models. This service is paid.

OpenAI requires an API key, which can be created in the OpenAI API key settings.
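For reference, a minimal sketch of a DALL-E 3 request against the OpenAI Images API; the key value is a placeholder, and the endpoint and fields follow OpenAI's documented Images API. The request is built but not sent:

```python
# Hedged sketch of an OpenAI image generation request (DALL-E 3);
# OPENAI_API_KEY is a placeholder.
import json
import urllib.request

OPENAI_API_KEY = "sk-xxx"  # placeholder: created in the OpenAI API key settings
payload = {
    "model": "dall-e-3",
    "prompt": "an isometric city at night",
    "n": 1,
    "size": "1024x1024",
}
req = urllib.request.Request(
    "https://api.openai.com/v1/images/generations",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {OPENAI_API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req)  # uncomment with a valid key (billed per image)
print(req.full_url)
```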

Option 5: StabilityAI

StabilityAI is the image generation service that powers DreamStudio.

StabilityAI requires an API key, which can be created on the API Keys page.

Option 6: Local Diffusion (Beta)

Only txt2img mode is supported.

Allows the app to use the phone's own resources to generate images.

Supported languages

The app uses the language provided by the OS default settings.

User interface of the app is translated for languages listed in this table:

| Language  | Since version | Status     |
|-----------|---------------|------------|
| English   | 0.1.0         | Translated |
| Ukrainian | 0.1.0         | Translated |
| Turkish   | 0.4.1         | Translated |
| Russian   | 0.5.5         | Translated |

Any contributions to the translations are welcome.

Difference between builds from Google Play and F-Droid/GitHub releases

Because Google Play has policies the app must comply with in order to be published there, there are some differences between the builds distributed via Google Play and via F-Droid/GitHub releases, listed in the table below.

| Feature | Google Play build | F-Droid/GitHub build | Reason |
|---------|-------------------|----------------------|--------|
| Sideloading a custom LocalDiffusion model | Not available | Available | Google Play does not allow publishing apps with the android.permission.MANAGE_EXTERNAL_STORAGE permission, which is required to read custom model files directly from external storage. |

Donate

This software is open source, provided with no warranty, and you are welcome to use it for free.

In case you find this software valuable, and you'd like to say thanks and show a little support, here is the button:

"Buy Me A Coffee"

stable-diffusion-android's People

Contributors

dependabot[bot], maxxxel, shifthackz, umut-ore


stable-diffusion-android's Issues

.safetensors

Loading a custom model is somewhat hard to implement when using safetensors or ckpt files.
I hope there will be a way to use safetensors or ckpt files in the future.
For now, if there is another way to convert safetensors to ONNX, that would be good.

Feature Request: Local Img2Img

Is your feature request related to a problem? Please describe.
Add/enable local img2img feature, perhaps labeled as risky/experimental. I have stable diffusion setup inside chroot'ed debian in termux and it functions reasonably well but feels cluttered. Some devices these days can handle it!

Hide it in a developer menu with a disclaimer attached if you have to 💯

Describe alternatives you've considered
Chroot'ed SD server + webui in termux using ->
https://github.com/leejet/stable-diffusion.cpp
https://github.com/DaniAndTheWeb/sd.cpp-webui

Add popup before model deletion

Is your feature request related to a problem? Please describe.
I tested a few of the local diffusion models and accidentally clicked on delete, which instantly deleted the model.

Describe the solution you'd like
Add a popup to make sure the model isn't deleted by accident.

Describe alternatives you've considered
None

Allow nsfw

Is your feature request related to a problem? Please describe.
I would like the nsfw flag not to be hardcoded to false in both the Img2Img and Txt2Img payloads for AI Horde (i.e. you can send nsfw: true; false is just hardcoded for some reason).

Describe the solution you'd like
Add a simple config option (radio button) for nsfw on/off.


Can't click on local diffusion

I'm on Android 13 (Galaxy M52 5G), latest version 0.5.4, both google play and FOSS version.

Screen_Recording_20240207_120251_SDAI.mp4

Request: Saving prompts as templates / styles

As a user, I would like to be able to save the prompts I use as a template so that I don't have to enter them manually each time.

  • I can use different templates
  • Templates can be combined with each other
  • When selecting a template, the text in the prompt is prefilled
  • A template can contain positive & negative prompts
  • Saving is done via a "Save as template" button

Mesa driver

Hello, the application is really good, but it is really slow, maybe it could be better with the Mesa driver.

Issue when using Image To Image

I got this "java.lang.IllegalStateException: Expected BEGIN_OBJECT but was STRING at line 1 column 1 path $" when using Image to Image.


• What I did:

(my original image)

Prompt: animated city, ultra HD, extreme details, 8K, masterpiece, best prompt, best artist, best image, best AI

Negative prompt: blur, ugly, worst image, worst prompt, worst animated city, worst artist, worst quality

Width/Height: 512

Sampling Method: DPM++ 2M Karras

Seed: 1

Variation seed: (empty)

Variation strength: 0

Sampling steps: 30

CFG Scale: 6

Denoising Strength: 0.75

Restore faces: (no check)


Please help me.

HTTP Error 503

I understand it's in beta, but local diffusion just returns the error "ERROR: HTTP 503".

To Reproduce
Steps to reproduce the behavior:

  1. Open app
  2. Select local diffusion
  3. Select download
  4. See error

Expected Behavior
The AI model would download.


Desktop (please complete the following information):

  • OS: Android 14

Smartphone (please complete the following information):

  • Device: Note 10+ 5g
  • OS: Android 14
  • Browser: Chrome

App not working

I'm trying to tap things but the UI isn't working :( I was so ready for this new update! I reinstalled twice but nothing works; I can't tap Local Diffusion, Horde AI, or Automatic1111, and even the demo switch doesn't work.

error code 500

Describe the bug
Error: HTTP 500 on generation attempt

To Reproduce
Steps to reproduce the behavior:

  1. hit generate
  2. Error: HTTP 500 pops up

Expected behavior
Image should generate


Desktop (please complete the following information):

  • Android
  • 0.3.1


Additional context
I'm pretty sure this is because I'm using cloudflare. Is there any way to bypass this, or should I be using localtunnel instead? I'm running this against my colab...

Can't go over 1024 px

Describe the bug
As mentioned in the readme, up to 2048 px should be supported, but I can't set it. If I do, I see the message 'Minimum Size is 1024'.

To Reproduce
Steps to reproduce the behavior:

  1. Go to text2image
  2. Click on width/height and enter 1025 px
  3. Enter
  4. See error

Expected behavior
Up to 2048 px in size

Screenshots
Screenshot_20230924_115110_SDAI FOSS.jpg

Smartphone (please complete the following information):

  • Device: Samsung Galaxy S22 Ultra
  • OS: Android
  • Version: 0.5.2 foss

F-Droid reproducible build failed


Request to adjust settings, have defaults, and saved profiles for local models.

Is your feature request related to a problem? Please describe.
When a local model is selected, show text underneath that says "Advanced options".

Describe the solution you'd like
This option would allow adjusting the model's weights and the number of steps.

Describe alternatives you've considered
As there is no alternative SD that runs locally on the hardware, I am not sure what else I can do here. Maybe run actual SD in ExaGear?

Request: Apache Proxy Basic Auth Support

Is your feature request related to a problem? Please describe.
I cannot add my A1111 instance because you need to authenticate beforehand using Apache's basic auth, with the following URL structure: https://username:[email protected]

Describe the solution you'd like
A way to be able to use the auth and proxy.

Describe alternatives you've considered
None, to be honest.

Request: Hi-res fix options

Perhaps I'm just not seeing them or otherwise ignorant as to how to include them but it would be nice to be able to specify hi-res options, at least for sessions connected to LAN a1111 servers. Selecting different upscalers, choosing number of steps, denoising strength, etc. Many models are optimized to predict or generate at ~512x512 and thus we rely on the upscalers to make the generated images closer to modern HD resolutions. It would also be nice to mess with extensions such as Adetailer, but that's probably waaaaaaay more difficult. I'm really enjoying the app otherwise!

TensorFlow Lite based local Stable Diffusion

Is your feature request related to a problem? Please describe.
It would be neat if you could run a model locally on your phone; on devices with a lot of RAM this could perhaps be possible. I think you would need to implement a Stable Diffusion backend in TensorFlow Lite and then convert the models. I understand this would be a massive undertaking and would only be useful to some users, but I still think it would be neat! I understand if this is too much to ask; maybe someone else would be willing to implement it.

Describe the solution you'd like
A backend for Stable Diffusion in TensorFlow Lite allowing images to be generated locally on very powerful phones

Describe alternatives you've considered
There are a few other frameworks that support NNAPI, I think; these could perhaps also be used. You could also skip NNAPI and do everything on the CPU with something like NCNN, in theory, with a wrapper or something like that.

Additional context
Referenced libraries:
https://developer.android.com/ndk/guides/neuralnetworks
https://www.tensorflow.org/lite
https://github.com/Tencent/ncnn
Proof of concept:
https://www.qualcomm.com/news/onq/2023/02/worlds-first-on-device-demonstration-of-stable-diffusion-on-android

'Cloudflare Zero Trust' ability to connect to a remote server utilizing Cloudflare Zero Trust

As the title says, I would love to be able to use this to manage SD remotely. I have my remote access set up using Cloudflare Tunnel with Cloudflare Zero Trust, utilizing OAuth authorization with GitHub login emails. It sounds complicated, but it is incredibly streamlined. Simply put, if I am logged into my GitHub account (or another account I have allowed access to), I can navigate to the specified domain as if I were connecting locally. However, only those with specific emails connected to GitHub can do so. I love the security and how you hardly even notice it.

It would be amazing to somehow be able to use this client remotely. Are there any plans to make what I described above possible? Or is there potentially some way to make it work now? Thank you.

Local A1111 session not working

Describe the bug
When trying to use a local A1111 instance I receive an error:

"Parameter specified as non-null is null: method com.shifthackz.aisdv1.domain.entitiy.ServerConfiguration., parameter sdModelCheckpoint"

To Reproduce
Unknown as it always happens for me on multiple devices.

Things I have tried:
Simplified the network so that the A1111 server and the mobile device are on the same L2 segment.

Simplified startup flags to only "--api --listen"

Tried authed and anonymous access

Tried FOSS and Play Store version

Updated A1111 to latest

Expected behavior
Local A1111 usage works as described

Desktop (please complete the following information):

  • OS: Fedora Linux
  • Version 37

Smartphone (please complete the following information):

  • Device: Samsung / Motorola
  • OS: Android12

Request: Improvement of saving images in local Gallery

User Story:
As a user, I would like to be able to access the images in my local image gallery.

  • The images should be automatically saved in a desired directory
  • Alternatively I would like to save the image with one (!) click into a directory
  • Ideally the path can be configured in the settings

For background: the pictures currently appear to be stored in a local database (I could not find the images in the Android file system).
The share function works, BUT it takes too many clicks to save a picture to the phone.
So, in addition to the share function, another "Save" button would be useful here.

Image to video or animate image

I have an idea: you don't have to add text-to-video, since our phones probably can't handle it, but image-to-video may be better because we could animate the images we create with the app. Also, please add a face-fixer option, because in most pictures with people the app won't create decent faces, even with the new models you added. Maybe my phone is too weak, but please add image-to-video.

Text Generation Webui

Good afternoon. Thanks for your project. But I would like to ask you to add the possibility to use this https://github.com/oobabooga/text-generation-webui

The application works identically to AUTOMATIC1111. I would really like to use a good chat AI on the phone instead of ChatGPT.

Thank you in advance and good luck!

P.S. For example, you could add tabs where we select AUTOMATIC1111 or the text generation AI.

[Feature Request] Ability to use other models (checkpoints) when running locally

Is your feature request related to a problem? Please describe.
Most people, myself included, find the base SD model to be pretty bad.
For example, if you want an anime/manga art style, it's just awful.

Describe the solution you'd like
Ability to import checkpoints (based on SD 1.5 arch) in .safetensors or .pth into the app
Let us use them for inference, and switch between them easily.

Describe alternatives you've considered
Maybe it's possible to replace the checkpoint used by default, in Android\data or whatever, but I won't mess with that.

Additional context
Please support importing SD 1.5 checkpoints and VAE

Maybe a way to easily retrieve them from HuggingFace/CivitAI with a nice UI
but not all models have a VAE, so it would be pretty hard to make something that works with every model there

Batch

Please add option to request a batch

Leverage Qualcomm APIs

Many thanks for a good app, having the ability to perform on-device inference is very welcome.

Unfortunately, the time required for each image generation is painfully slow.

Qualcomm have recently announced the AI hub that includes specific APIs and models for on device inference.

https://aihub.qualcomm.com/models/stable_diffusion_quantized

Any plans to update the app to take advantage of the Qualcomm announcements?

Some features request

Is your feature request related to a problem? Please describe.
Hi @ShiftHackZ! Thank you very much for this application (and especially for the Ukrainian language). I tested it and it works, but I can say that if you have a slash at the end of the URL (I tested it with my notebook in Colab; the repository is https://github.com/anapnoe/stable-diffusion-webui-ux), then you cannot connect to the environment; maybe this is one of the reasons why some users cannot connect. The project is very promising, but first of all it really lacks inpaint and all the other standard features that the WebUI has. Hopefully these will be added as well. Thank you!

500 Internal Server Error

Describe the bug
When I try to access SD through the app, it results in a 500 Internal Server Error, while it works if I go to my server's IP.

To Reproduce
Steps to reproduce the behavior:

  1. run ./webui.sh --use-cpu all --precision full --no-half --skip-torch-cuda-test --api --listen
  2. enter server ip with port in the app and click connect
  3. 500 Internal Server Error, with the error output attached (output.txt)

Expected behavior
To be connected without errors

Desktop (please complete the following information):

  • desktop/server: Linux 6.4.7-zen1-3-zen (arch linux)

Smartphone (please complete the following information):

  • Device: Moto G13
  • OS: Android 13
  • Browser: latest Bromite/Firefox (worked by going to the server IP)

Feature request: StabilityAI as a new provider

Is your feature request related to a problem? Please describe.
I have quite a lot of points on DreamStudio, and I'd like to use its API (which is the official Stability API) in the app.

Describe the solution you'd like
Add StabilityAI as a new provider.

Describe alternatives you've considered
The only alternative is to use DreamStudio's official website.

Additional context
DreamStudio website: https://dreamstudio.ai/
Stability API documentation: https://platform.stability.ai/docs/getting-started/
https://platform.stability.ai/docs/api-reference/
https://platform.stability.ai/docs/features/

F-Droid can't build

ref: https://gitlab.com/fdroid/fdroiddata/-/jobs/6136043977#L1808 — the diff is empty, hence I suspect an APK issue.

Comparing directly:

$ apksigcopier compare sdai-foss-release-0.5.4.apk com.shifthackz.aisdv1.app.foss_168_signed.apk && echo OK
DOES NOT VERIFY
ERROR: APK Signature Scheme v2 signer #1: APK integrity check failed. CHUNKED_SHA256 digest mismatch. Expected: <3036ab79c1e6736c09688a098dffbc4855159cdd235ace29afac617c407ce6a1>, actual: <9e08aca38457ddad31ba19483ac3aeec852ede381c5ff887bce093482fd20cdc>
Error: failed to verify /tmp/tmpm9_v9asj/output.apk.

Is the APK aligned? Does it have all the needed signatures?

Gallery not always visible

Thanks for the great work. I hope the API will support more features in the future.
I can successfully generate images, but even with "always save images" activated, they are only visible in the app gallery, and I have to save them to the phone gallery manually. Is it possible to save them there automatically?
My problem is that after a while, the app gallery is no longer visible, and then I have no access to the generated images at all. Screenshot of the app gallery showing the problem:
Screenshot_2023-05-28-16-11-14-441_com.shifthackz.aisdv1.app.jpg

Text to video please

Can you please add a text-to-video model to Local Diffusion? I really want to make videos offline; Stable Diffusion works better on my phone than on my PC, and my PC is too weak to make AI videos. So can you please add text-to-video, or even a text to 1-2 second GIF model, please.

Won't save with version 5.3.0 (regression)



To Reproduce
Steps to reproduce the behavior:

  1. Disable auto-save
  2. Generate img2img
  3. Click on 'Save'
  4. Generated image closes
  5. Go to Gallery tab
  6. Gallery images won't load
  7. Click 'Browse'
  8. Image is not in directory list

Expected behavior
Saved file should be in directory list and gallery

Screenshots
Gallery not loading

Desktop (please complete the following information):

  • OS: Android 12

Smartphone (please complete the following information):

  • Device: Moto One 5G Ace
  • OS: Android
  • Version 12

Additional context
Running AUTOMATIC1111/stable-diffusion-webui 1.7.0 as a render server.

Request: An option for Extra Networks Tab like A1111

Is your feature request related to a problem? Please describe.
I have installed multiple TIs (embeddings), Loras, LyCORIS, and hypernetworks, and there is no way I can remember all of them.

Describe the solution you'd like
Maybe, if it's not much work on your end, add a modal that shows these options with search.

Describe alternatives you've considered
n/a

Additional context
Screenshot of said screen

API error: POST [SSL: CERTIFICATE_VERIFY_FAILED]

Describe the bug
When I try to generate an image using the app, I get an error 500, and in the logs of stable-diffusion-webui I see this:

*** API error: POST: http://192.168.178.220:9000/sdapi/v1/txt2img {'error': 'URLError', 'detail': '', 'body': '', 'errors': '<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)>'}
    Traceback (most recent call last):
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/streams/memory.py", line 98, in receive
        return self.receive_nowait()
               ^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/streams/memory.py", line 93, in receive_nowait
        raise WouldBlock
    anyio.WouldBlock

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 78, in call_next
        message = await recv_stream.receive()
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/streams/memory.py", line 118, in receive
        raise EndOfStream
    anyio.EndOfStream

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/config/02-sd-webui/webui/modules/api/api.py", line 186, in exception_handling
        return await call_next(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 84, in call_next
        raise app_exc
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 70, in coro
        await self.app(scope, receive_or_disconnect, send_no_error)
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 108, in __call__
        response = await self.dispatch_func(request, call_next)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/modules/api/api.py", line 150, in log_and_time
        res: Response = await call_next(req)
                        ^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 84, in call_next
        raise app_exc
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 70, in coro
        await self.app(scope, receive_or_disconnect, send_no_error)
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/cors.py", line 84, in __call__
        await self.app(scope, receive, send)
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 24, in __call__
        await responder(scope, receive, send)
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 44, in __call__
        await self.app(scope, receive, self.send_with_gzip)
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
        raise exc
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
        await self.app(scope, receive, sender)
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
        raise e
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
        await self.app(scope, receive, send)
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
        await route.handle(scope, receive, send)
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
        await self.app(scope, receive, send)
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
        response = await func(request)
                   ^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/fastapi/routing.py", line 237, in app
        raw_response = await run_endpoint_function(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/fastapi/routing.py", line 165, in run_endpoint_function
        return await run_in_threadpool(dependant.call, **values)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
        return await anyio.to_thread.run_sync(func, *args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync
        return await get_asynclib().run_sync_in_worker_thread(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
        return await future
               ^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
        result = context.run(func, *args)
                 ^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/modules/api/api.py", line 379, in text2imgapi
        processed = process_images(p)
                    ^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/modules/processing.py", line 734, in process_images
        res = process_images_inner(p)
              ^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/webui/modules/processing.py", line 808, in process_images_inner
        sd_vae_approx.model()
      File "/config/02-sd-webui/webui/modules/sd_vae_approx.py", line 53, in model
        download_model(model_path, 'https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/download/v1.0.0-pre/' + model_name)
      File "/config/02-sd-webui/webui/modules/sd_vae_approx.py", line 39, in download_model
        torch.hub.download_url_to_file(model_url, model_path)
      File "/config/02-sd-webui/webui/venv/lib/python3.11/site-packages/torch/hub.py", line 611, in download_url_to_file
        u = urlopen(req)
            ^^^^^^^^^^^^
      File "/config/02-sd-webui/env/lib/python3.11/urllib/request.py", line 216, in urlopen
        return opener.open(url, data, timeout)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/env/lib/python3.11/urllib/request.py", line 519, in open
        response = self._open(req, data)
                   ^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/env/lib/python3.11/urllib/request.py", line 536, in _open
        result = self._call_chain(self.handle_open, protocol, protocol +
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/env/lib/python3.11/urllib/request.py", line 496, in _call_chain
        result = func(*args)
                 ^^^^^^^^^^^
      File "/config/02-sd-webui/env/lib/python3.11/urllib/request.py", line 1391, in https_open
        return self.do_open(http.client.HTTPSConnection, req,
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/config/02-sd-webui/env/lib/python3.11/urllib/request.py", line 1351, in do_open
        raise URLError(err)
    urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)>
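
The stack trace above bottoms out in `torch.hub.download_url_to_file`, whose `urllib` request cannot verify GitHub's certificate chain. A common workaround (a sketch, not a fix endorsed by the project; it assumes the third-party `certifi` package is installed, which it usually is as a dependency of `requests`) is to point Python's SSL machinery at the certifi CA bundle before launching the WebUI:

```python
# Workaround sketch: point Python's SSL stack at the certifi CA bundle so
# urllib (used by torch.hub.download_url_to_file) can verify the GitHub
# release server's certificate chain. Run this before the WebUI starts,
# or export the same variables in the launching shell instead.
import os

import certifi

os.environ["SSL_CERT_FILE"] = certifi.where()
os.environ["REQUESTS_CA_BUNDLE"] = certifi.where()
print(certifi.where())  # path to the bundled cacert.pem
```

Alternatively, fixing the container's CA certificates (e.g. reinstalling `ca-certificates` in the image) addresses the root cause rather than the symptom.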

---

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
No error

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: [e.g. iOS]
  • Browser [e.g. chrome, safari]
  • Version [e.g. 22]

Smartphone (please complete the following information):

  • Device: Oneplus 8
  • OS: Android 13

Additional context
Add any other context about the problem here.

Can't download local diffusion

Just downloaded a few minutes ago. I don't know what is causing this error :(

Screenshot
Screenshot_2023-08-17-22-16-05-868_com.shifthackz.aisdv1.app.foss-edit.jpg

  • Device: [e.g. XiaomiPad5]
  • OS: [e.g. MIUI14]

Wildcards?

Please add an option to add wildcards for easier prompting. Wildcards can be fun to use :)

NNAPI doesn't work on Pixel 6a and maybe other Google Tensor devices?

Describe the bug
When attempting to use NNAPI on my device (Pixel 6a), it fails to run the model for whatever reason. I suspect this is due to the Tensor chips found in Pixel devices; it's an obviously rare and unusual CPU/TPU (apparently based on Samsung Exynos, but with a cut-down TPU added), so I would imagine it's probably related to that. I could be wrong, though: as the app suggests, it's an experimental feature on an already experimental backend, so it might be unrelated to the TPU.

To Reproduce
Steps to reproduce the behavior:

  1. Get a Pixel 6a, or perhaps any device with a Tensor CPU
  2. Enable local Stable Diffusion
  3. See that it works fine without NNAPI
  4. Enable NNAPI
  5. Try to generate image
  6. See error

Expected behavior
It should be able to load the model and, in theory, be much faster

Screenshots
Screenshot_20230808-173608

Smartphone (please complete the following information):

  • OS: Android 13 with the July 5, 2023 security patch, running CalyxOS 4.11.3 (this shouldn't matter, since Google Photos' smart eraser, which was made for this hardware, works under CalyxOS)
  • Version 0.5.0

Please add options to add our fav LoRAs and checkpoints, and add SDXL 1.0

Can y'all please add SDXL 1.0 to local diffusion mode, and make it possible to add our own LoRAs and checkpoints from Civitai? I'm enjoying this app, but I can't get good-looking faces. SDXL 1.0 would make our images look better, especially if we could also add LoRAs and checkpoints and get advanced options in local diffusion. I've always wanted to run SDXL locally on my phone. Thank you for such a great app :)

Maybe something wrong with custom model

It appears that the app wants two tokenizer_config.json files in the same folder, which is impossible. In presentation/src/main/java/com/shifthackz/aisdv1/presentation/screen/setup/ServerSetupScreen.kt, line 618, did you mean vocab.json?

Screenshot_20240209_074816_SDAI FOSS

Another question: I successfully converted the model to ORT. Do I use model.with_runtime_opt.ort or the normal one, and does the app need model_index.json?
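
For reference, a converted ONNX/ORT Stable Diffusion export typically contains a fixed set of files, and a small script can sanity-check a model folder before copying it to the device. The exact file list below is an assumption inferred from the common diffusers ONNX export layout, not the app's documented contract:

```python
# Sanity-check sketch for a converted ONNX/ORT Stable Diffusion folder.
# EXPECTED is an assumption based on the usual diffusers ONNX export
# layout (tokenizer vocab, text encoder, UNet, VAE decoder), not a
# contract documented by the app.
from pathlib import Path

EXPECTED = [
    "model_index.json",
    "tokenizer/vocab.json",
    "tokenizer/merges.txt",
    "text_encoder/model.ort",
    "unet/model.ort",
    "vae_decoder/model.ort",
]

def missing_files(model_dir: str) -> list[str]:
    """Return the relative paths from EXPECTED that are absent."""
    root = Path(model_dir)
    return [rel for rel in EXPECTED if not (root / rel).is_file()]
```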

Lora/Hypernetwork/textual-inversion picker

Is your feature request related to a problem? Please describe.
I don't know all the LoRAs I have installed, nor their unique call-outs, so I can only write prompts and hope I get the trigger word right.

Describe the solution you'd like
A drop-down list that adds them to the prompt, or better, a tab with all their preview images that either adds the LoRA to the prompt or copies its trigger word to the clipboard.

Describe alternatives you've considered
Running a bash script to dump all the triggers into a text or CSV file that I can sync to my phone with Syncthing.
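
The script alternative described above can be sketched in a few lines of Python. The directory path, the `.safetensors` extension, and the idea that the filename stem doubles as a usable trigger word are all assumptions for illustration (the stem is only a naming convention, not a guaranteed trigger):

```python
# Sketch: dump Lora filenames into a CSV that can be synced to the phone.
# Assumes a flat directory of *.safetensors files whose filename stems
# approximate the trigger words -- a convention, not a guarantee.
import csv
from pathlib import Path

def dump_lora_triggers(lora_dir: str, out_csv: str) -> int:
    """Write one row per Lora file and return how many were found."""
    names = sorted(p.stem for p in Path(lora_dir).glob("*.safetensors"))
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["lora_name"])
        for name in names:
            writer.writerow([name])
    return len(names)
```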

URL error

Describe the bug
I tried following the procedure described on the app page, but it doesn't work; I had to add --share to get a public URL.

To Reproduce
Steps to reproduce the behavior:

  1. Enter the following arguments in 'webui_user.bat': --xformers --autolaunch --opt-split-attention --precision full --no-half --medvram --share --api --listen
  2. Copy the public URL from CMD (sample: https://02c0c88cb5143e3673.gradio.live:7860) and paste it into the app
  3. Click on 'Connect'
  4. See error 404
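
A quick way to narrow this down is to check whether the URL actually exposes the AUTOMATIC1111 API. /sdapi/v1/sd-models is a real A1111 endpoint that is only served when the WebUI was started with --api; note also that gradio share links are served over plain HTTPS, so appending :7860 to the .gradio.live URL (as in the sample above) is a likely cause of the 404:

```python
# Diagnostic sketch: verify that a WebUI URL exposes the A1111 API.
# GET /sdapi/v1/sd-models returns a JSON list of models when the server
# was started with --api; any failure (404, timeout, TLS error) means
# the URL as entered will not work in the app either.
import json
import urllib.request

def check_a1111_api(base_url: str) -> bool:
    url = base_url.rstrip("/") + "/sdapi/v1/sd-models"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # should parse as a JSON list of models
            return True
    except Exception:
        return False
```

Try the share URL with and without the port suffix; only the form without it should succeed.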

Expected behavior
As written in the guide, I expected the connection to succeed so I could start generating images

Desktop (please complete the following information):

  • OS: Windows 11
  • Browser: Opera Developer
  • Version: 101.0.4822.0

Smartphone (please complete the following information):

  • Device: Realme GT 5G
  • OS: Android 13
  • Browser: Your app (Play Store version and GitHub version)
  • Version IDK
