Supports models directly from Hugging Face via `CoreLLMs` (with quantization) and from AWS Bedrock via `BedrockLLMs`.

Make sure to install the required libraries: `boto3`, `transformers`.

Just call the model and it will work:
```python
llm = CoreLLMs()  # if no model name is passed, it defaults to Llama 3

messages = [
    {
        "role": "user",
        "message": "Hi there!"
    }
]

response = llm(messages)
```
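For reference, the call pattern above is a plain callable that takes a list of role/message dicts and returns a string. A minimal self-contained sketch of that interface is shown below, with a stub class standing in for `CoreLLMs` (the real class loads Hugging Face weights and needs `transformers`, so it is not reproduced here; the stub's behavior is purely illustrative):

```python
class StubLLM:
    """Stand-in for CoreLLMs / BedrockLLMs: a callable that accepts
    a list of chat messages and returns a reply string."""

    def __call__(self, messages):
        # Echo the most recent user message, mimicking llm(messages) -> str
        last = messages[-1]["message"]
        return f"You said: {last}"

llm = StubLLM()
messages = [{"role": "user", "message": "Hi there!"}]
response = llm(messages)
print(response)  # You said: Hi there!
```

The same message list can be reused against either backend, since both wrappers expose the same callable interface.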