The Spezi LLM Swift package includes modules that help you integrate LLM-related functionality into your application. The package provides all the necessary tools for local LLM execution as well as for using remote OpenAI-based LLMs.
[Screenshots: OpenAI LLM Chat View | Language Model Download | Local LLM Chat View]
You need to add the SpeziLLM Swift package to your app in Xcode or as a dependency of your own Swift package.
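If you manage dependencies through a package manifest instead of Xcode, a minimal `Package.swift` sketch could look as follows; the package name `YourApp`, the platform requirement, and the version requirement are assumptions here, so check the SpeziLLM repository for the current release:

```swift
// swift-tools-version:5.9
// Minimal Package.swift sketch adding SpeziLLM; "YourApp", the platform,
// and the version requirement are assumptions.
import PackageDescription

let package = Package(
    name: "YourApp",
    platforms: [.iOS(.v17)],
    dependencies: [
        .package(url: "https://github.com/StanfordSpezi/SpeziLLM.git", from: "0.1.0")
    ],
    targets: [
        .target(
            name: "YourApp",
            dependencies: [
                // Pick the products that match the functionality you need.
                .product(name: "SpeziLLM", package: "SpeziLLM"),
                .product(name: "SpeziLLMLocal", package: "SpeziLLM"),
                .product(name: "SpeziLLMOpenAI", package: "SpeziLLM")
            ]
        )
    ]
)
```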
Important
If your application is not yet configured to use Spezi, follow the Spezi setup article to set up the core Spezi infrastructure.
As Spezi LLM contains a variety of different targets for specific LLM functionalities, please follow the additional setup guide in the respective target section of this README.
Spezi LLM provides a number of targets to help developers integrate LLMs in their Spezi-based applications:
- SpeziLLM: Base infrastructure of LLM execution in the Spezi ecosystem.
- SpeziLLMLocal: Local LLM execution capabilities directly on-device, with integration of Meta's Llama2 models.
- SpeziLLMLocalDownload: Download and storage manager of local Language Models, including onboarding views (see the sketch after this list).
- SpeziLLMOpenAI: Integration with OpenAI's GPT models via the OpenAI API service.
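As a hypothetical sketch of the SpeziLLMLocalDownload onboarding flow, the snippet below assumes a download view that takes the remote model URL, a local storage location, and a completion action; the view name `LLMLocalDownloadView`, the type `LLMLocalDownloadManager`, and the parameter names are assumptions here, so consult the target's DocC documentation for the exact API:

```swift
import SpeziLLMLocalDownload
import SwiftUI


// Hypothetical onboarding step that downloads a ".gguf" model file to disk.
// `LLMLocalDownloadView`, `LLMLocalDownloadManager`, and the parameter names
// below are assumptions; check the DocC documentation for the exact API.
struct LLMDownloadOnboardingView: View {
    var body: some View {
        LLMLocalDownloadView(
            llmDownloadUrl: LLMLocalDownloadManager.LLMUrlDefaults.llama2ChatModelUrl,
            llmStorageUrl: .applicationDirectory.appending(path: "llm.gguf")
        ) {
            // Proceed to the next onboarding step once the download finished.
        }
    }
}
```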
The section below highlights the setup and basic use of the SpeziLLMLocal and SpeziLLMOpenAI targets to integrate Language Models into a Spezi-based application.
Note
To learn more about the usage of the individual targets, please refer to the [DocC documentation of the package](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation).
The SpeziLLMLocal target enables developers to easily execute medium-size Large Language Models (LLMs) locally on-device via the llama.cpp framework. The module lets you interact with the locally running LLM through purely Swift-based APIs; no interaction with low-level C or C++ code is necessary.
You can configure the Spezi local LLM execution within the typical `SpeziAppDelegate`. In the example below, the `LLMRunner` from the SpeziLLM target, which is responsible for providing LLM functionality within the Spezi ecosystem, is configured with the `LLMLocalRunnerSetupTask` from the SpeziLLMLocal target. This prepares the `LLMRunner` to locally execute Language Models.
```swift
import Spezi
import SpeziLLM
import SpeziLLMLocal


class TestAppDelegate: SpeziAppDelegate {
    override var configuration: Configuration {
        Configuration {
            LLMRunner {
                LLMLocalRunnerSetupTask()
            }
        }
    }
}
```
Spezi will then automatically inject the `LLMRunner` into the SwiftUI environment to make it accessible throughout your application. The example below also showcases how to use the `LLMRunner` to execute a SpeziLLM-based `LLM`.
```swift
import SpeziLLM
import SpeziLLMLocal
import SwiftUI


struct ExampleView: View {
    @Environment(LLMRunner.self) var runner

    // The locally stored Language Model file in the ".gguf" format
    @State var model: LLM = LLMLlama(
        modelPath: URL(string: "...")!
    )

    var body: some View {
        EmptyView()
            .task {
                do {
                    // Returns an `AsyncThrowingStream` which yields the produced output of the LLM.
                    let stream = try await runner(with: model).generate(prompt: "Some example prompt")

                    // ...
                } catch {
                    // Handle LLM generation errors here.
                }
            }
    }
}
```
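The `// ...` above is where the stream would typically be consumed. As a minimal sketch, assuming the stream yields `String` pieces (as the comment in the example suggests), the output could be collected like this:

```swift
// Sketch: collect the streamed LLM output into a single response string.
// Assumes the `AsyncThrowingStream` returned by `generate(prompt:)` yields
// `String` pieces.
func collectResponse(from stream: AsyncThrowingStream<String, Error>) async throws -> String {
    var output = ""
    for try await token in stream {
        output.append(token)
    }
    return output
}
```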
Note
To learn more about the usage of SpeziLLMLocal, please refer to the [DocC documentation](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillmlocal).
A module that allows you to interact with GPT-based large language models (LLMs) from OpenAI within your Spezi application.
You can configure the `OpenAIModule` in the `SpeziAppDelegate` as follows. In the example, we configure the `OpenAIModule` to use the GPT-4 model with a default API key.
```swift
import Spezi
import SpeziLLMOpenAI


class ExampleDelegate: SpeziAppDelegate {
    override var configuration: Configuration {
        Configuration {
            OpenAIModule(apiToken: "API_KEY", openAIModel: .gpt4)
        }
    }
}
```
The `OpenAIModule` injects an `OpenAIModel` into the SwiftUI environment to make it accessible throughout your application. The model is queried via an instance of `Chat` from the SpeziChat package.
```swift
import SpeziChat
import SpeziLLMOpenAI
import SwiftUI


struct ExampleView: View {
    @Environment(OpenAIModel.self) var model

    let chat: Chat = [
        .init(role: .user, content: "Example prompt!")
    ]

    var body: some View {
        EmptyView()
            .task {
                do {
                    // Returns an `AsyncThrowingStream` which yields the produced output of the LLM.
                    let stream = try model.queryAPI(withChat: chat)

                    // ...
                } catch {
                    // Handle API query errors here.
                }
            }
    }
}
```
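To keep a conversation going, the assistant's response can be appended to the `Chat` once the stream has been consumed. A minimal sketch, reusing the `.init(role:content:)` pattern from the example above and assuming the stream yields `String` pieces:

```swift
// Sketch: consume the response stream and append the result to the chat.
// Assumes the stream yields `String` pieces; the `.assistant` role mirrors
// the `.user` role used when constructing the `Chat` above.
func appendResponse(
    from stream: AsyncThrowingStream<String, Error>,
    to chat: inout Chat
) async throws {
    var output = ""
    for try await token in stream {
        output.append(token)
    }
    chat.append(.init(role: .assistant, content: output))
}
```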
Note
To learn more about the usage of SpeziLLMOpenAI, please refer to the [DocC documentation](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillmopenai).
Contributions to this project are welcome. Please make sure to read the contribution guidelines and the contributor covenant code of conduct first.
This project is licensed under the MIT License. See Licenses for more information.