guinmoon / llmfarm_core.swift
Swift library to work with llama and other large language models.
License: MIT License
Hey, I saw your repo, and I think that you're working on really cool stuff!
I was trying to figure out how to enable Metal on llama.cpp for Swift apps, and it looks like you were able to hack something together. Do you have any plans to upstream your changes to llama.cpp, and if not, would you share any insight into the obstacles you overcame? I'm new to Swift, so I can't really make heads or tails of it, but I was able to run your repo with Metal successfully.
Thanks for this wrapper. Do you think it would be possible to add support for the new Mistral model? It seems to work according to https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF. Otherwise I'll submit a pull request by this weekend.
Hello!
The current implementation crashes when trying to load llama-2-7b.Q4_K_M.gguf:
.onAppear {
    Task {
        // Load AI model
        let url = Bundle.main.url(forResource: "llama-2-7b.Q4_K_M", withExtension: "gguf")!
        let scribe = AI(_modelPath: url.path(), _chatName: "chat")
        var params: ModelContextParams = .default
        params.use_metal = true
        do {
            try scribe.loadModel(ModelInference.LLama_gguf, contextParams: params)
        } catch {
            // Handle the error
            print("An error occurred: \(error)")
        }
    }
}
The crash occurs at line 484 in ggml-metal.m:
ctx->buffers[ctx->n_buffers].metal = [ctx->device newBufferWithBytesNoCopy:(void *) ((uint8_t *) data + i) length:size_step_aligned options:MTLResourceStorageModeShared deallocator:nil];
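As a debugging sketch, not a fix: Metal's newBufferWithBytesNoCopy requires page-aligned memory, so a useful first step is to confirm whether a CPU-only load succeeds, which isolates the crash to the Metal path. This reuses the same AI / ModelContextParams API shown in the snippet above; the model file name is the one from the report.

```swift
import Foundation
import llmfarm_core

// Hedged sketch: retry the load with Metal disabled to isolate the crash.
// Assumes the same AI / ModelContextParams API used in the report above.
let url = Bundle.main.url(forResource: "llama-2-7b.Q4_K_M", withExtension: "gguf")!
let ai = AI(_modelPath: url.path(), _chatName: "chat")
var params: ModelContextParams = .default
params.use_metal = false   // CPU-only load: if this works, the bug is in the Metal buffer setup
do {
    try ai.loadModel(ModelInference.LLama_gguf, contextParams: params)
    print("CPU-only load succeeded; the crash is specific to the Metal path")
} catch {
    print("Load failed even without Metal: \(error)")
}
```

If the CPU-only load succeeds, the buffer pointer or size passed to newBufferWithBytesNoCopy is the likely culprit.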
Description
Please consider adding Core ML model package format support to utilize the Apple Silicon Neural Engine + GPU.
Success Criteria
Utilize both ANE & GPU, not just GPU on Apple Silicon
Additional Context
List of Core ML package format models
https://github.com/likedan/Awesome-CoreML-Models
Work is in progress on a Core ML implementation for whisper.cpp; they see up to 3x performance improvements for some models (ggerganov/whisper.cpp#548), which you might be interested in.
You might also be interested in another implementation, Swift Transformers. Example of a Core ML application:
https://github.com/huggingface/swift-chat
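For context, a minimal sketch of how a Core ML backend requests ANE + GPU scheduling: MLModelConfiguration.computeUnits controls which hardware Core ML may dispatch to, and .all permits CPU, GPU, and the Neural Engine. The model path here is hypothetical; any compiled .mlmodelc package works.

```swift
import CoreML

// Hedged sketch: allow Core ML to schedule onto CPU, GPU, and the Neural Engine.
let config = MLModelConfiguration()
config.computeUnits = .all

// Hypothetical path to a compiled Core ML model package.
let url = URL(fileURLWithPath: "Llama.mlmodelc")
let model = try MLModel(contentsOf: url, configuration: config)
```

Note that Core ML decides at runtime which compute unit each layer actually runs on; .all only makes the ANE eligible, it does not force it.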
#12
Although pinning a commit works with SPM, the library is considered an unstable-version package when it is depended on by commit.
Thanks.
BR
buhe
llama.cpp now supports Gemma models; please follow suit. Thanks.
Hi there,
The README mentions removing unsafe flags, but I am unable to do so if I add it as a package dependency.
Are you suggesting I have to manually build the source without the unsafe flags to use it in my project?
Apologies if this sounds like too simple a question; I am figuring out how to build a macOS desktop chat application that interacts with a Mistral GGUF model.
Thanks,
Chaks
I tried adding CI to the project (https://github.com/buhe/llmfarm_core.swift/actions/runs/7651210335/job/20848583144), but, as you can see, it fails.
Any ideas, please?
I am getting this error in Xcode when trying to build the newly created project with this package imported:
The package product 'llmfarm_core' cannot be used as a dependency of this target because it uses unsafe build flags.
It doesn't work even when I import the package locally and comment out all the .unsafeFlags lines in Package.swift.
I also tried the approach suggested here with importing a specific commit, but it didn't help
How do you add this package to the Xcode project?
Sorry, I'm not proficient with packages 🙂
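For context on the error above: SwiftPM rejects targets that use unsafeFlags when they are pulled in as a remote dependency, no matter how the dependency is pinned; the flags are only tolerated for packages you control locally. A common workaround, sketched here under the assumption that you have cloned the repo next to your project and stripped the .unsafeFlags entries from its Package.swift, is a local path dependency. All names and paths are illustrative.

```swift
// swift-tools-version:5.7
import PackageDescription

// Sketch: depend on a local clone of llmfarm_core.swift whose Package.swift
// no longer contains .unsafeFlags. Paths and target names are illustrative.
let package = Package(
    name: "MyChatApp",
    platforms: [.macOS(.v13)],
    dependencies: [
        // Local clone with the unsafe flags removed
        .package(path: "../llmfarm_core.swift")
    ],
    targets: [
        .executableTarget(
            name: "MyChatApp",
            dependencies: [.product(name: "llmfarm_core", package: "llmfarm_core.swift")]
        )
    ]
)
```

If the error persists after editing the local copy, clearing derived data / the SwiftPM cache is worth trying, since stale manifests can keep the old flags in play.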
Hey! Trying to import the package but still face some issues.
What I do: add the llmfarm_core.swift package, specifying the latest commit hash, then import llmfarm_core:
.onAppear() {
    let maxOutputLength = 256
    var total_output = 0
    var input_text = "State the meaning of life."
    let ai = AI(_modelPath: "llama-2-7b-chat.Q4_K_M.gguf", _chatName: "chat")
    var params: ModelContextParams = .default
    params.use_metal = true
    try? ai.loadModel(ModelInference.LLama_gguf, contextParams: params)
    ai.model.promptFormat = .LLaMa
}
I get:
Cannot find type 'ModelContextParams' in scope
Value of type 'LLMBase' has no member 'promptFormat'
How do I resolve this?
Hello
Each time I add the llmfarm_core package as a dependency I see the following error:
The package product 'llmfarm_core' cannot be used as a dependency of this target because it uses unsafe build flags.
Any idea why?
Not a real issue, more a question.
The current llama.cpp code also supports loading LoRA adapters. Is this supported by llmfarm_core.swift? I do not see it in the LLMFarm code/application.
In case it is supported, is there an example of how to use it?
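For reference, upstream llama.cpp exposes LoRA loading through its example CLI via the --lora flag; a sketch of what that capability looks like there (model and adapter file names are illustrative):

```shell
# Sketch: applying a LoRA adapter with the upstream llama.cpp example CLI.
# Model and adapter file names are illustrative.
./main -m llama-2-7b.Q4_K_M.gguf \
       --lora lora-adapter.bin \
       -p "State the meaning of life."
```

Whether llmfarm_core.swift surfaces an equivalent parameter would need confirming against its ModelContextParams definition.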