
swift-coreml-diffusers's Introduction

Swift Core ML Diffusers 🧨

This is a native app that shows how to integrate Apple's Core ML Stable Diffusion implementation in a native SwiftUI application. The Core ML port is a simplification of the Stable Diffusion implementation from the diffusers library. This application can be used for faster iteration, or as sample code for other use cases.

This is what the app looks like on macOS: App Screenshot

On first launch, the application downloads a zipped archive with a Core ML version of Stability AI's Stable Diffusion v2 base, from this location in the Hugging Face Hub. This process takes a while, as several GB of data have to be downloaded and unarchived.

For faster inference, we use DPM-Solver++, a very fast scheduler that we ported to Swift from our diffusers DPMSolverMultistepScheduler implementation.
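As a minimal sketch (not the app's exact code), this is roughly how a scheduler is selected through the Core ML Stable Diffusion Swift package; schedulerType is the same option that appears in the pipeline configuration quoted in the issues below:

import StableDiffusion

// Build a generation configuration and select the DPM-Solver++ port.
var config = StableDiffusionPipeline.Configuration(prompt: "a photo of a labrador in a field")
config.stepCount = 25
config.schedulerType = .dpmSolverMultistepScheduler  // the DPM-Solver++ multistep scheduler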

The app supports models quantized with coremltools version 7 or later. This requires macOS 14 or iOS/iPadOS 17.

Compatibility and Performance

  • macOS Ventura 13.1, iOS/iPadOS 16.2, Xcode 14.2.
  • Performance (after the initial generation, which is slower)
    • ~8s on macOS on a MacBook Pro M1 Max (64 GB). Model: Stable Diffusion v2-base, ORIGINAL attention implementation, running on CPU + GPU.
    • 23 ~ 30s on iPhone 13 Pro. Model: Stable Diffusion v2-base, SPLIT_EINSUM attention, CPU + Neural Engine, memory reduction enabled.

See this post and this issue for additional performance figures.

Quantized models run faster, but they require macOS 14 or iOS/iPadOS 17.

The application will try to guess the best hardware to run models on. You can override this setting using the Advanced section in the controls sidebar.
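For illustration, here is a minimal sketch (assumptions noted in comments, not the app's exact code) of how compute units can be chosen when loading a pipeline with Apple's ml-stable-diffusion package:

import CoreML
import StableDiffusion

// Pick the hardware explicitly instead of letting the app guess.
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndNeuralEngine   // or .cpuAndGPU, or .all to let Core ML decide

// modelURL is a hypothetical URL pointing at a compiled Core ML model folder.
let pipeline = try StableDiffusionPipeline(resourcesAt: modelURL,
                                           configuration: configuration,
                                           disableSafety: false,
                                           reduceMemory: true)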

How to Run

The easiest way to test the app on macOS is by downloading it from the Mac App Store.

How to Build

You need Xcode to build the app. When you clone the repo, please update common.xcconfig with your development team identifier. Code signing is required to run on iOS, but it's currently disabled for macOS.
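For illustration only (the value below is a placeholder and the key is the standard Xcode build setting, not copied from the repo's file), the relevant line in common.xcconfig looks roughly like this:

// common.xcconfig
DEVELOPMENT_TEAM = ABCDE12345   // replace with your own Apple Developer team identifier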

Known Issues

Performance on iPhone is somewhat erratic: sometimes it is ~20x slower and the phone heats up. This happens when the model cannot be scheduled to run on the Neural Engine and everything runs on the CPU. We have not been able to determine the cause of this problem. If you observe the same, here are some recommendations:

  • Detach from Xcode.
  • Kill apps you are not using.
  • Let the iPhone cool down before repeating the test.
  • Reboot your device.

Next Steps

  • Allow additional models to be downloaded from the Hub.

swift-coreml-diffusers's People

Contributors

1ucas, julien-c, mattusi, pcuenca, zachnagengast


swift-coreml-diffusers's Issues

Some models show a performance degradation of approximately 20x on the iPhone 14 Pro Max compared to running on an M1 Pro MacBook, while standard Stable Diffusion 2.1 shows only a ~1.5x degradation on iPhone.

I have tested the following models on my iPhone 14 Pro Max:

1: coreml-stable-diffusion-2-1-base
https://huggingface.co/pcuenq/coreml-stable-diffusion-2-1-base
https://huggingface.co/pcuenq/coreml-stable-diffusion-2-1-base/blob/main/coreml-stable-diffusion-2-1-base_split_einsum_compiled.zip

took ~15s on MacBook M1 Pro
took ~20s on iPhone 14 Pro Max

2: coreml-8528-diffusion
https://huggingface.co/coreml/coreml-8528-diffusion
https://huggingface.co/coreml/coreml-8528-diffusion/blob/main/split_einsum/8528-diffusion_split-einsum_compiled.zip

took ~15s on MacBook M1 Pro
took ~5min on iPhone 14 Pro Max

and the memory usage is a lot higher than with the first model.

Here is my configuration:

#if targetEnvironment(macCatalyst)
let runningOnMac = true
#else
let runningOnMac = false
#endif

let configuration = MLModelConfiguration()
configuration.computeUnits = runningOnMac ? .cpuAndGPU : .cpuAndNeuralEngine
let pipeline = try StableDiffusionPipeline(resourcesAt: url,
                                           configuration: configuration,
                                           disableSafety: false,
                                           reduceMemory: !runningOnMac)

var config = StableDiffusionPipeline.Configuration(
    prompt: "string1"
)
config.negativePrompt = "string2"
config.stepCount = numInferenceSteps // 15
config.seed = UInt32(seed) // 32
config.guidanceScale = Float(guidanceScale) // 7.5
config.disableSafety = disableSafety // true
config.schedulerType = .dpmSolverMultistepScheduler
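(For completeness, a hedged sketch of how generation would then be invoked with this configuration; the progress handler below is assumed and not taken from the report above.)

let images = try pipeline.generateImages(configuration: config) { progress in
    true   // return false here to cancel generation early
}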

High RAM usage in GPU mode compared to the apple/ml-stable-diffusion CLI tool

I noticed that the diffusers app, while running in GPU mode, uses just over 13 GB of RAM while inferring on the non-quantized SDXL 1.0 model. If I use pretty much the same settings with Apple's Core ML Stable Diffusion software (https://github.com/apple/ml-stable-diffusion), on the same model, my system uses just under 8 GB of RAM. Both result in different pictures. Hardware: Apple Mac mini M2 Pro, 16 GB RAM, latest macOS 14 public beta.

swift-coreml-diffuser settings:

Positive prompt: a photo of an astronaut dog on mars
Negative prompt: [empty]
Guidance Scale: 7.5
Step count: 20
Preview count: 25
Random seed: 4184258190
Advanced: GPU
Disable Safety Checker: Selected

Commandline prompt with arguments:
swift run StableDiffusionSample "a photo of an astronaut dog on mars" --compute-units cpuAndGPU --step-count 20 --seed 4184258190 --resource-path <path to model> --xl --disable-safety --output-path <path to image folder>

I am assuming here that selecting GPU is in fact the same as the CLI's cpuAndGPU (since the CLI has no GPU-only option). Perhaps the difference lies there? In that case, could a separate CPU & GPU mode be added?

Loading the model for the first time in the app (e.g., the first time after starting the app or after switching models) also takes a lot longer than loading it from the command line. The app's 13 GB of RAM use leads to a fair amount of swap file use on my 16 GB M2 Pro Mac mini, while running the CLI tool does not cause any swap file use, which most likely explains this difference.

Considering model sizes and RAM usage, it almost looks like the app is loading the model twice? That's pure speculation, though; I imagine there's plenty of overhead involved. But considering that the app itself uses 40 MB of RAM before a model is loaded, there's a difference of just over 5 GB with the command-line tool (about the size of the UNet weights) while generating an image.

I haven't tested non-SDXL models; I might follow up if I find some time for that (at which point I can also compare RAM use when using the Neural Engine).

I'm honestly not sure if this is a bug or simply caused by some different settings/features under the hood that I am not aware of, but it does affect how usable the software is on machines with less RAM.

App doesn't run on macOS

No releases or CI

Have you thought about creating a GitHub Actions CI job to build the app, run the tests, and publish releases to GitHub?

Fails to run on macOS

The app doesn't run on macOS 13.1 (22C65) with Xcode 14.1 out of the box. It works if I run it in an iPad simulator.

image

I believe the error is due to code signing, since the Hugging Face credentials are hard-coded.

Fails to build with CLI

I'm also seeing compile errors when trying to build the app from the CLI. When building from Xcode, the build phase succeeds; however, launch fails.

$ xcodebuild clean build CODE_SIGN_IDENTITY="" CODE_SIGNING_REQUIRED=NO

/swift-coreml-diffusers/Diffusion/Pipeline/Pipeline.swift:13:8: error: no such module 'StableDiffusion'
import StableDiffusion
       ^

** BUILD FAILED **

Edit: I successfully built and ran the app using this command:

xcodebuild clean build -scheme Diffusion CODE_SIGN_IDENTITY="" CODE_SIGNING_REQUIRED=NO

How do I view/toggle the options pane?

Might be a rookie question, but after building and launching on my MacBook Pro M1 I only get the main window, with no options pane to the left as in the screenshots. Is there a magic trick to enable it? I cannot seem to find where to do so, but I do find the code :)

Could not launch “Diffusion”

Hello, when I tried to run this program on my M1 Mac, I got an error. How can I solve it? Please help.
Error message:
Could not launch “Diffusion”
The app is incompatible with the current version of macOS. Please check the app's deployment target.

Error screenshot:
errorss1

About my Mac screenshot:
errorss2

Port of Euler a for a future release?

Hi, do you plan to port this scheduler?
I also have a question about the seed: it is in the range [0, 1000], but in AUTOMATIC1111 it is 32 bits. Why such a limitation?
Thanks.

Could not download SDXL models with the default options.

Neither SDXL model can be found, so neither can be downloaded.
SDXL base:
https://huggingface.co/apple/coreml-stable-diffusion-xl-base/resolve/main/coreml-stable-diffusion-xl-base_split_einsum_compiled.zip
SDXL base (4.5 bit):
https://huggingface.co/apple/coreml-stable-diffusion-mixed-bit-palettization/resolve/main/coreml-stable-diffusion-mixed-bit-palettization_split_einsum_compiled.zip
Both return "not found".

The only file I found on Hugging Face is the original one:
https://huggingface.co/apple/coreml-stable-diffusion-xl-base/resolve/main/coreml-stable-diffusion-xl-base_original_compiled.zip

Unrelated results

Hello,

Just found out about the macOS version of this tool. I am not sure if I am doing something wrong, but using parameters in the suggested ranges, I get results totally unrelated to the prompt. I have tried several prompts and none of the results come close:

For example

Model: stable diffusion 2 - base

prompt: flying cat with a hat

flying_cat_with_a_hat

prompt: A dog wearing jacket

A_dog_wearing_jacket

Can't build project on iPhone

Hi, I can't build this project for iPhone; I get several errors without changing any code. Is there a bug, or is my project setup incorrect?
Screenshot 2023-02-09 at 20 04 52

Denoising strength

Thanks for sharing this great repo!

Is it possible to control denoising strength? I know the app doesn't support it by default, but does the Core ML Stable Diffusion package support it?
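(Not an authoritative answer, but as a sketch: newer versions of Apple's ml-stable-diffusion expose image-to-image parameters on the pipeline configuration. The startingImage and strength properties below are assumptions about that package and are not surfaced in this app.)

// Hypothetical image-to-image configuration; property names are assumed.
var config = StableDiffusionPipeline.Configuration(prompt: "a watercolor landscape")
config.startingImage = inputImage   // assumed: a CGImage used as the starting point
config.strength = 0.6               // assumed: denoising strength, 0 keeps the input, 1 regenerates fully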

Can't load new models

After building from the current main branch on github:

When downloading a zip, the app looks for the merges.txt file and other files at the base of the downloaded folder, but they are in the compiled subfolder.

An easy workaround is to open the folder (using the Reveal in Finder option) and move the files from the compiled subfolder to the base folder of the download.

PS. Brilliant performance using the 4.5-bit SDXL version running on the GPU with an M2 Pro that has 16 GB of RAM! 20 steps take ~1.5 minutes. Thanks for the great work.

Warning when prompts need to be truncated

Hi,
I only understood the previous issue I posted when I tried to generate an image with the command-line ml-stable-diffusion. The prompts are truncated, and the generated images are consistent with what I get in A1111.
Would it be possible to display a warning to avoid any confusion?
Thanks.

Downloaded models disappearing?

I switched to stable-diffusion-2-base; I think there was an error with it. Then, going back to stable-diffusion-2-1-base, it is downloading again as if it was never there. Reveal in Finder... doesn't open anything either, and there are no models left in /Library/Containers/com.huggingface.Diffusers/Data/Library/Application Support

image

On an Intel Mac with a discrete GPU, when using the GPU, the generation outputs some kind of random pattern

Running on an Intel iMac (2020) with a discrete Radeon 5700 (8 GB), the result is always something like the attached screenshot.

I've cloned the repository:

  • Diffusion-macOS: the problem is identical. The GPU is doing the work but the result is random pixels.
  • Diffusion-macOS: using ComputeUnits.cpuOnly (two modifications to ControlsView.swift), the CPU is (slowly) doing the work and the result is OK.
  • Diffusion: the CPU is doing the work and the result is OK.

In all cases, no error or exception is raised.

The console output is very similar:

Generating...
Got images: [Optional(<CGImage 0x7f813e3b19c0> (IP)
	<<CGColorSpace 0x60000192dda0> (kCGColorSpaceDeviceRGB)>
		width = 512, height = 512, bpc = 8, bpp = 24, row bytes = 1536 
		kCGImageAlphaNone | 0 (default byte order)  | kCGImagePixelFormatPacked 
		is mask? No, has masking color? No, has soft mask? No, has matte? No, should interpolate? Yes)] in 17.003490924835205

Diffusion also outputs this for each step:

2023-04-22 17:30:02.375841+0200 Diffusion[7894:267125] [API] cannot add handler to 3 from 3 - dropping

Screenshot 2023-04-22 at 17 02 43

Help

It would be helpful to have some in-app documentation covering:

  • Differences between models.
  • Brief explanation of the controls.

It doesn't have to reside in the Help menu, it could be tooltips, first-time popovers, etc.

MetalPerformanceShadersGraph folder taking large amounts of space

I'm not entirely sure if the issue is with Apple's SD Core ML implementation or with how it's being used in the sample app, but after a few Xcode debugging sessions this folder balloons in size. Currently it's comfortably taking around 151 GB of space. It can be deleted, but it will gradually grow back until there is no more space left on the drive again.

/private/var/folders/<random>/<random>/<random>/com.apple.MetalPerformanceShadersGraph

image

Not running on a 2017 MacBook Pro

I'm trying to run the following setup:

  • App version: Version 1.1 (20230222.140932), from the App Store
  • model: stability/stable-diffusion-2-base or the 2.1 model
  • 13-inch, 2017 MacBook Pro:
      -  Processor: 2.3 GHz Dual-Core Intel Core i5
      - Graphics: Intel Iris Plus Graphics 640 1536 MB
      - Memory: 16 GB 2133 MHz LPDDR3
  • macOS: 13.2.1 (Ventura)

What I did:

I downloaded the app from the App Store, ran it, waited for it to install the default initial model, and then hit generate to create an image with the default prompt and settings. It says “Preparing Model”, then “Generation Error”, and when I hit info, the error says it comes from the com.apple.CoreML domain with the message “Error computing NN outputs”. I got the same error when I switched to the 2.1 model, waited for it to download, and tried to generate with it.

Is this configuration expected to work?

Efficiency cores being used

I'm not 100% sure about this, but it seems that when the models are being loaded and images are being generated, only the E cores are used rather than the P cores. This slows down model loading and image generation quite a lot.

Switching between GPU + Neural Engine and Neural Engine alone causes image generation errors

I'm testing the Neural Engine supported models. When I use the SD 1.5 model with the Neural Engine for the first time, it works well, with the log "ANE model load has failed for on-device compiled macho. Must re-compile the E5 bundle. @ GetANEFModel".

Then I switch to GPU and Neural Engine; it loads the pipeline in about 330 seconds, and when I click generate it runs into an error.

The error image is shown at the link.

Then I close the Diffusers app and restart it.
It starts with GPU and Neural Engine, loads the pipeline in about 143.7 seconds, and can generate images.
But when I switch back to Neural Engine, it runs into an error when generating images.

This can be reproduced easily, but I don't know how to fix it.

Mac Catalyst error: 'PixelBuffer' is not a member type of enum 'Accelerate.vImage'

I get the following error when building for the Mac using Catalyst, with or without Rosetta:

/SourcePackages/checkouts/ml-stable-diffusion/swift/StableDiffusion/pipeline/Decoder.swift:80:40: error build: 'PixelBuffer' is not a member type of enum 'Accelerate.vImage'

Building on device for iPad does work, however. Any ideas how to run on a Mac?

macOS 13.1, iOS 16.1, Xcode 14.0

Downloading doesn't work in production?

When I deploy an app with the same download mechanism to production, it doesn't work. When I plugged it into Xcode, it reported an 'unwritableFile' error.

Could it be something to do with developer mode on the iPhone? Or maybe the file size of the models is too large? Is there an official Apple limit on the maximum file size an app can download? I can't use On-Demand Resources because it has a 2 GB limit at any one time...

How to run this as a newb?

I'm a coding newbie, so this may be sort of a dumb question. How do I run this on my M1 Max with 64 GB? I downloaded the package and also cloned it through GitHub Desktop. As far as I understand, I need to compile it, but I cannot find the right tutorial to help me with this.

Any help would be much appreciated.

Using too much memory.

Hi,

I am running the app on an iPhone 14 Pro with models I converted myself. It can load the model, but the app crashes when I click the generate button, and Xcode shows: "The app “Diffusion” has been killed by the operating system because it is using too much memory." I tried SD v1.5, 2.0-base, and 2.0; they all have the same issue. Is there any way I can reduce the memory usage? Thank you.

Safety checker triggered

Hello! I'm getting an error when trying any prompt on any model, even if I change the seed value.
Any idea how to fix it?
Thank you!

FnyTeMTacAAe5Ge

Screenshot 2023-01-31 17 22 03

Could not launch Diffusion app

Detailed log is here:

Could not launch “Diffusion”
Domain: IDELaunchErrorDomain
Code: 20
Recovery Suggestion: The LaunchServices launcher has returned an error. Please check the system logs for the underlying cause of the error.
User Info: {
    DVTErrorCreationDateKey = "2022-12-29 08:05:09 +0000";
    DVTRadarComponentKey = 968756;
    IDERunOperationFailingWorker = IDELaunchServicesLauncher;
}
--
The operation couldn’t be completed. Launch failed.
Domain: RBSRequestErrorDomain
Code: 5
Failure Reason: Launch failed.
--
Launchd job spawn failed
Domain: NSPOSIXErrorDomain
Code: 153
--

Analytics Event: com.apple.dt.IDERunOperationWorkerFinished : {
    "device_model" = "MacBookPro18,4";
    "device_osBuild" = "13.2 (22D5027d)";
    "device_platform" = "com.apple.platform.macosx";
    "launchSession_schemeCommand" = Run;
    "launchSession_state" = 1;
    "launchSession_targetArch" = arm64;
    "operation_duration_ms" = 3129;
    "operation_errorCode" = 20;
    "operation_errorDomain" = IDELaunchErrorDomain;
    "operation_errorWorker" = IDELaunchServicesLauncher;
    "operation_name" = IDERunOperationWorkerGroup;
    "param_consoleMode" = 0;
    "param_debugger_attachToExtensions" = 0;
    "param_debugger_attachToXPC" = 1;
    "param_debugger_type" = 3;
    "param_destination_isProxy" = 0;
    "param_destination_platform" = "com.apple.platform.macosx";
    "param_diag_MainThreadChecker_stopOnIssue" = 0;
    "param_diag_MallocStackLogging_enableDuringAttach" = 0;
    "param_diag_MallocStackLogging_enableForXPC" = 1;
    "param_diag_allowLocationSimulation" = 1;
    "param_diag_checker_tpc_enable" = 1;
    "param_diag_gpu_frameCapture_enable" = 0;
    "param_diag_gpu_shaderValidation_enable" = 0;
    "param_diag_gpu_validation_enable" = 0;
    "param_diag_memoryGraphOnResourceException" = 0;
    "param_diag_queueDebugging_enable" = 1;
    "param_diag_runtimeProfile_generate" = 0;
    "param_diag_sanitizer_asan_enable" = 0;
    "param_diag_sanitizer_tsan_enable" = 0;
    "param_diag_sanitizer_tsan_stopOnIssue" = 0;
    "param_diag_sanitizer_ubsan_stopOnIssue" = 0;
    "param_diag_showNonLocalizedStrings" = 0;
    "param_diag_viewDebugging_enabled" = 1;
    "param_diag_viewDebugging_insertDylibOnLaunch" = 1;
    "param_install_style" = 0;
    "param_launcher_UID" = 2;
    "param_launcher_allowDeviceSensorReplayData" = 0;
    "param_launcher_kind" = 0;
    "param_launcher_style" = 99;
    "param_launcher_substyle" = 8192;
    "param_runnable_appExtensionHostRunMode" = 0;
    "param_runnable_productType" = "com.apple.product-type.application";
    "param_runnable_type" = 2;
    "param_testing_launchedForTesting" = 0;
    "param_testing_suppressSimulatorApp" = 0;
    "param_testing_usingCLI" = 0;
    "sdk_canonicalName" = "macosx13.1";
    "sdk_osVersion" = "13.1";
    "sdk_variant" = iosmac;
}
--


System Information

macOS Version 13.2 (a) (Build 22D7750270d)
Xcode 14.2 (21534) (Build 14C18)
Timestamp: 2022-12-29T16:05:09+08:00

Is this a network error? Should network errors be caught?

Stable Diffusion 2.1-based models fail to load or generate blank black images

Hi 👋

I’ve been doing some experiments with the app and, for the most part, it works. However, I’ve noticed that the Diffusion app fails to work with models that are based on Stable Diffusion 2.1.

  • For split_einsum_compiled CoreML models, the app reports that it’s “Preparing the model...” or “Generating...” in the console and never (after waiting hours) finishes or starts generating an image.
  • For original_compiled CoreML models, the app does load the model but it produces images that are completely black; no images are generated.

This behavior is consistent among a variety of SD-2.1 checkpoints. Here are some that I’ve confirmed to have this issue:

Strangely, stable-diffusion-2-1-base does work, so I'm pretty confused. I wanted to report this issue because it would heavily restrict users who want to use models beyond SD 1.x.

Even more peculiarly, this behavior isn't unique to this app; other Stable Diffusion apps exhibit similar issues (see here).

System Info:

  • Mac mini (M1, 2020)
  • RAM: 16 GB
  • macOS 13.3 (22E252)

Lack of memory on iPad

Hi all! Has anyone run the app on an iPad?
When I start generating an image, I get an error and the app crashes at the start of generation:
Pipeline loaded in 0.5994950532913208
Generating...
2023-03-31 10:13:38.375773+0800 DiffusionApp[20491:6142612] Error: Transpose unit is not supported.

Any ideas?

Cannot download SDXL GPU+NE

Starting download of https://huggingface.co/apple/coreml-stable-diffusion-xl-base/resolve/main/coreml-stable-diffusion-xl-base_split_einsum_compiled.zip
HTTP response status code: 404

Also, something seems to be wrong when trying to download Base + Refiner:

Task <--->.<2> finished with error [18,446,744,073,709,551,615] Error Domain=NSURLErrorDomain Code=-1 "unknown error" UserInfo={NSErrorFailingURLStringKey=https://huggingface.co/apple/coreml-stable-diffusion-xl-base-with-refiner/resolve/main/coreml-stable-diffusion-xl-base-with-refiner_original_compiled.zip, NSErrorFailingURLKey=https://huggingface.co/apple/coreml-stable-diffusion-xl-base-with-refiner/resolve/main/coreml-stable-diffusion-xl-base-with-refiner_original_compiled.zip, _NSURLErrorRelatedURLSessionTaskErrorKey=(
    "BackgroundDownloadTask <--->.<2>"
), _NSURLErrorFailingURLSessionTaskErrorKey=BackgroundDownloadTask <--->.<2>, NSLocalizedDescription=unknown error}

[Open-to-community] Benchmark swift-coreml-diffusers on different Mac hardware

Hey hey,

We are on a mission to provide a first-class, one-click solution for blazingly fast diffusers inference on Mac. To get a better idea of our framework, we'd like to gather inference time benchmarks for the app.

Currently, we are explicitly looking for benchmarks on:

You can do so by following the steps below:

  1. Download the latest version of the Diffusers app from the App Store.
  2. Select one of the three options in the Advanced section of the controls.
  3. Insert a prompt, e.g. "A Labrador playing in the fields".
  4. Run inference and make a note of the time taken.

Note: Make sure to run inference multiple times, as the framework sometimes needs to prepare the weights in order to run in the most efficient way possible.

Ping @pcuenca and @Vaibhavs10 with any questions!

Happy diffusing 🧨

Generate a whole set of images at once with different seeds/prompt variants for faster evaluation

Hi 👋

It would be great if I could let the app generate, say, 10 images with different random seeds all at once (or sequentially) and then view them next to each other in a grid once all are done. This would allow me to more easily choose a good seed for my prompts (see the sketch at the end of this issue).

This way I could let it run in the background and evaluate my options later, instead of regenerating images with different seeds or prompts every 8 seconds.

Otherwise, really a great project, I am very thankful!
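(A rough sketch of how this could look on top of the underlying Swift package, assuming the configuration-based generateImages API quoted elsewhere on this page; this is not an existing app feature.)

// Generate several images sequentially with different random seeds and keep
// each seed next to its image so a good one can be picked later.
var results: [(seed: UInt32, image: CGImage)] = []
for _ in 0..<10 {
    var config = StableDiffusionPipeline.Configuration(prompt: "a labrador playing in the fields")
    config.seed = UInt32.random(in: 0 ..< UInt32.max)
    config.stepCount = 20
    let images = try pipeline.generateImages(configuration: config) { _ in true }
    if let image = images.compactMap({ $0 }).first {
        results.append((config.seed, image))
    }
}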

The GUI is different from the Readme

screenshot-20230424-223733

Without making any code changes after cloning, the app runs successfully, but it does not have the same appearance as shown in the Readme; I can't find the left panel.

Could anyone help with this?

Much appreciated.

Contributing models / making it easier to load from a local directory

I ran an experiment on prompthero/openjourney-v2 to compile it for Core ML. I also put it on the Hugging Face Hub to make it work with the Diffusers app. It would be really nice to make this process easier, as well as to let users point to their own local models rather than downloading from the Hub. Would you like a more robust contribution for this feature?

For reference:

Missing sidebar in app

Hello 👋
When I try to build and run this project locally in Xcode, I am not seeing the sidebar in the application.

Apologies if this is a stupid question; I am quite new to Swift development. Am I missing something, or is this a bug?

Additional info:

  • 2021 MacBook Pro, M1 Max
  • macOS Sonoma 14.0 (beta)
  • Xcode 15.0 (beta 5)

Example

image

Expected

image

Build failed

swift-coreml-diffusers-main/Diffusion/Common/ModelInfo.swift:190:12

Cannot convert value of type '()' to expected condition type 'Bool'

Screenshot 2023-07-30 at 17 05 08

It is only hallucinating

Installed Diffusers from the Mac App Store on a 15-inch MacBook Pro (2018, Intel UHD Graphics 630 / Radeon Pro 555X with 4 GB VRAM). After downloading the SD2 model, I tried generating an image with the prompt "Labrador in the style of Vermeer" and default settings. The output image was totally different from what the prompt described. I also tried the Small Stable Diffusion model; the same thing happened. Shall I attach screenshots?

Controls GUI Not Showing

My GUI is more basic than the one in the readme. Did I not build it correctly, or do I need to be on macOS 14? I would like to run the SDXL models. I am running macOS 13.5 and am guessing I could run the non-quantized model even if it would be really slow.

image
