An API to create local-first, human-friendly agents in the browser or Node.js
Read the documentation
Check the Starter template
An agent is an anthropomorphic representation of a bot. It has these basic abilities:
- Interact: the agent can interact with the user to get input and feedback
- Think: query language model servers to run inference
- Run jobs: manage long-running jobs with multiple tasks
- Remember: use its transient or semantic memory to store data
Name | Description | Node.js | Browser |
---|---|---|---|
@agent-smith/body | The body | ✓ | ✓ |
@agent-smith/brain | The brain | ✓ | ✓ |
@agent-smith/jobs | Jobs | ✓ | ✓ |
@agent-smith/tmem | Transient memory | ✓ | ✓ |
@agent-smith/tmem-jobs | Jobs transient memory | ✓ | ✓ |
@agent-smith/smem | Semantic memory | ✓ | ✓ |
@agent-smith/tfm | Templates for models | ✓ | ✓ |
@agent-smith/lmtask | Yaml model task | ✓ | ✓ |
- What local or remote inference servers can I use?
Currently it works with Llama.cpp, Koboldcpp and Ollama.
- Can I use this with OpenAI or other big cloud APIs?
No: this library favors local-first or private remote inference servers.
Quick Node.js example:

```js
import { useAgentBrain, useLmExpert } from "@agent-smith/brain";

const expert = useLmExpert({
  name: "default",
  localLm: "koboldcpp",
  templateName: "mistral",
  onToken: (t) => process.stdout.write(t),
});
const brain = useAgentBrain([expert]);
// auto discover if the experts' inference servers are up
await brain.discover();
// run an inference query
const _prompt = "list the planets of the solar system";
await brain.think(_prompt, {
  temperature: 0.2,
  min_p: 0.05
});
```
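Conceptually, the discover-then-think flow above probes the configured experts' servers, keeps the first one that answers, then streams tokens through the expert's `onToken` callback. Here is a self-contained sketch of that pattern with a fake in-memory "expert" standing in for `useLmExpert` and a real server; it is an illustration of the flow, not the library's actual internals (which are async):

```javascript
// Minimal sketch of the discover/think pattern. "ping" and "infer" are
// stand-ins: a real expert would hit an inference server over HTTP.
function createBrain(experts) {
  let active = null;
  return {
    // Probe each expert's server and keep the first one that answers.
    discover() {
      active = experts.find((e) => e.ping());
      if (!active) throw new Error("no inference server is up");
      return active.name;
    },
    // Stream tokens from the active expert through its onToken callback.
    think(prompt, params) {
      if (!active) throw new Error("call discover() first");
      for (const token of active.infer(prompt, params)) {
        active.onToken(token);
      }
    },
  };
}

// A fake expert standing in for useLmExpert()
const received = [];
const expert = {
  name: "default",
  ping: () => true,                     // pretend the server is reachable
  infer: (prompt) => prompt.split(" "), // fake "inference": echo the words
  onToken: (t) => received.push(t),
};

const brain = createBrain([expert]);
brain.discover();
brain.think("list the planets", { temperature: 0.2 });
console.log(received.join(" ")); // list the planets
```

The point of the pattern is that `think` never talks to a server directly: it only ever goes through whichever expert `discover` selected, so swapping Koboldcpp for Ollama is a configuration change, not a code change.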
Powered by:
- Nanostores for state management and reactive variables
- Locallm for inference API server management
- Modprompt for prompt template management