llmR
is an R package that provides a seamless interface to various
Large Language Models (LLMs) such as OpenAI’s GPT models, Azure’s
language models, or custom local servers. It simplifies the process of
sending prompts to these models and handling their responses, including
rate limiting, retries, and error processing.
- Unified API: Interact with different LLM providers using a consistent set of functions.
- Prompt Processing: Convert chat messages into a standard format suitable for LLMs.
- Error Handling: Automatically handle errors and retry requests when rate limits are exceeded. If a response is cut short due to token limits, the package will ask the LLM to complete it.
- Custom Providers: Query custom endpoints (local or online) and implement ad-hoc LLM connection functions.
- Mock Calls: Simulate LLM interactions for testing purposes.
- Logging: Option to log request times for performance monitoring.
You can install the development version of llmR
from
GitHub with:
# install.packages("devtools")
devtools::install_github("bakaburg1/llmR")
Before using llmR
, you need to set up the necessary API keys and model
identifiers for the LLM providers you intend to use. These can be set
globally using R options.
library(llmR)
# Set your API keys and model identifiers
options(llmr_openai_api_key = "your-openai-api-key")
options(llmr_openai_model_gpt = "gpt-4o")
See use_openai_llm() and use_azure_llm() documentation for more information on the required options for each provider.
Here’s a simple example of how to send a prompt to an LLM provider:
# Send a prompt to the OpenAI language model
response <- prompt_llm(
messages = c(user = "Hello there!"),
provider = "openai"
)
cat(response) # cat() prints the response with its formatting preserved
The prompt_llm
function accepts prompts in a number of formats:
- Single message: A single string message from the user.
message <- "Hello!"
- Named vector: A named vector where each element represents a message with a specified role.
messages <- c(system = "You are a helpful assistant.", user = "Hello!")
- List of messages: A list where each element is a named vector with role and content.
messages <- list(
c(role = "system", content = "You are a helpful assistant."),
c(role = "user", content = "Hello!")
)
- Batch prompting: This approach lets you send batches of prompts to be processed in parallel. Not all providers support this functionality, and it is not fully tested yet.
messages <- list(
list(
list(role = "system", content = "You are a helpful assistant."),
list(role = "user", content = "Hello!")
),
list(
list(role = "system", content = "You are a helpful assistant."),
list(role = "user", content = "How are you?")
)
)
Internally, prompt_llm() uses the process_messages() function to convert the input messages into a standard format (the “List of messages” one) suitable for the LLM APIs.
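If process_messages() is available to users, you can inspect this normalization directly; the call below is an illustrative sketch, and the exact return structure is an assumption based on the format described above:

# Normalize a named vector into the "list of messages" format
msgs <- process_messages(c(system = "You are a helpful assistant.", user = "Hello!"))
str(msgs) # each element should carry a role and its content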
You can enable logging to track the time taken for each request and the
number of tokens sent and generated by setting the llmr_log_requests
option to TRUE
.
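For example, to enable request logging for the current session:

options(llmr_log_requests = TRUE)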
The package implements interfaces for OpenAI and Azure GPT models, but you can also use other custom providers or even your own LLM functions.
To use a custom provider you need to set up the
llmr_custom_endpoint_gpt
option with the URL of your custom provider
and set the provider
argument in prompt_llm()
or the
llmr_llm_provider
option to “custom”. Optionally, you can set the
llmr_custom_model_gpt
option to specify the model name and the
llmr_custom_api_key
option to set an API key. For example, these are
the options to use a local LLM served via Ollama:
options(
llmr_custom_endpoint_gpt = "http://localhost:11434/v1/chat/completions",
llmr_custom_model_gpt = "llama3"
)
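With these options set, you can then send prompts to the locally served model through the “custom” provider:

# Query the custom endpoint configured above
response <- prompt_llm(
  messages = c(user = "Hello there!"),
  provider = "custom"
)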
To create a custom provider function, you just need to name it with the
following pattern: use_<custom provider>_llm
. For example
use_myProvider_llm
. The function should accept a body
argument and
return an httr
response object; see the use_openai_llm()
function
body for an example. Then you can use provider = "myProvider"
in the
prompt_llm()
function or set the llmr_llm_provider
option to
“myProvider”.
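For illustration, a minimal sketch of such a function might look like the one below; the endpoint URL, the authentication scheme, and the environment variable name are placeholders, not the actual use_openai_llm() implementation:

use_myProvider_llm <- function(body) {
  # Hypothetical provider function: post the request body to the endpoint
  # and return the raw httr response object for prompt_llm() to process.
  # URL and API key handling below are placeholders.
  httr::POST(
    url = "https://my-provider.example.com/v1/chat/completions",
    httr::add_headers(
      Authorization = paste("Bearer", Sys.getenv("MYPROVIDER_API_KEY"))
    ),
    body = body,
    encode = "json"
  )
}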
The package provides a way to simulate LLM responses for testing purposes. You can use the mock response by passing “mock” as the provider argument of prompt_llm(), or by setting the llmr_llm_provider option to “mock”. The llmr_mock_response option can be set to a custom response that will be used by the mock functions.
For example:
options(llmr_mock_response = "My test!")
prompt_llm("Hello", provider = "mock")
This will return “My test!”.
This feature is useful for unit tests of packages using llmR
functions. You can also use the use_mock_llm()
function directly to
simulate a whole mock httr
response object.
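For example, a testthat test could rely on the mock provider via temporary options; the use of withr and testthat here is just one possible setup:

testthat::test_that("my wrapper handles LLM responses", {
  # Temporarily switch llmR to the mock provider for this test only
  withr::local_options(
    llmr_llm_provider = "mock",
    llmr_mock_response = "My test!"
  )
  testthat::expect_equal(prompt_llm("Hello"), "My test!")
})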
If you use llmR
in your research, please cite it as follows:
D'Ambrosio, A. (Year). llmR: R Interface to Large Language Models. URL: https://github.com/bakaburg1/llmR