log10-io / log10 Goto Github PK
Python client library for improving your LLM app accuracy
Home Page: https://log10.io
License: MIT License
A user may have to do some initialization, e.g. setting up environment variables programmatically. Currently we try to get a session id on library initialization, so initialization fails if everything (URL, token, etc.) isn't ready by then.
Provide a way to wrap openai calls, and get new log10 session ids:
with log10.session():
...
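A minimal sketch of what such a context manager could look like. This is an assumption about the design, not the library's actual implementation; the `uuid` call stands in for whatever request the log10 backend would use to mint a session id:

```python
import contextlib
import uuid

@contextlib.contextmanager
def session():
    # Hypothetical sketch: create the session id lazily, once the caller
    # has finished configuring URL, token, etc., rather than at import time.
    session_id = str(uuid.uuid4())  # placeholder for a call to the log10 backend
    try:
        yield session_id
    finally:
        pass  # a real implementation could flush/close the session here
```

Client code would then wrap its openai calls inside `with session() as session_id:`, and every call made in that block would be tagged with the fresh id.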
At the moment we only have a release pipeline, but it would be worthwhile to have at least a linting pipeline to verify the code. Such a pipeline could run an opinionated linter, such as wemake-python-styleguide, and encourage better practices.
For example, this code
def camel_agent(
user_role: str,
assistant_role: str,
task_prompt: str,
max_turns: int,
user_prompt: str = None,
assistant_prompt: str = None,
summary_model: str = None,
llm: LLM = None,
presents a couple of problems. First of all, given the large number of arguments that share the same type, it's easy to invert the order of two arguments by mistake; for those scenarios Python introduced keyword-only arguments:
def camel_agent(
*,
user_role: str,
assistant_role: str,
task_prompt: str,
max_turns: int,
user_prompt: str = None,
assistant_prompt: str = None,
summary_model: str = None,
llm: LLM = None,
which would force client code to invoke it like so:
camel_agent(
user_role='biochemist',
assistant_role='professor',
...
)
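The keyword-only behavior can be seen with a much smaller signature (this stub is illustrative only, not the actual `camel_agent`):

```python
def camel_agent(*, user_role: str, assistant_role: str) -> str:
    # The bare * makes every following parameter keyword-only.
    return f"{user_role} <-> {assistant_role}"

# Keyword invocation works:
camel_agent(user_role="biochemist", assistant_role="professor")

# Positional invocation is rejected at call time:
try:
    camel_agent("biochemist", "professor")
except TypeError:
    print("positional arguments rejected")
```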
Also, the code would not pass mypy, because None is not a valid value for a parameter annotated str. The annotation should rather be Optional[str], like so:
def camel_agent(
*,
user_role: str,
assistant_role: str,
task_prompt: str,
max_turns: int,
user_prompt: Optional[str] = None,
assistant_prompt: Optional[str] = None,
summary_model: Optional[str] = None,
llm: Optional[LLM] = None,
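For reference, `Optional[str]` is just shorthand for `Union[str, None]`, which is exactly the type a `None` default implies (the `summarize` function below is a made-up example, not part of the library):

```python
from typing import Optional, Union

# Optional[X] and Union[X, None] are the same type:
assert Optional[str] == Union[str, None]

def summarize(summary_model: Optional[str] = None) -> str:
    # mypy accepts a None default only when the annotation allows None.
    return summary_model or "default-model"
```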
Any plans to support local backends like llama.cpp, text gen. ui, etc.?
Completion URLs can be printed using the DEBUG_ mode when wrapping openai, like so:
log10(openai, DEBUG_=True)
But this is a bit verbose, and the URL cannot easily be grabbed programmatically.
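One way to address this would be to record the last completion URL on the wrapper itself, so callers can read it instead of parsing DEBUG_ output. A rough sketch of the idea; neither this class, the `_on_completion` hook, nor the URL scheme is part of log10's actual API:

```python
from typing import Optional

class Log10Wrapper:
    """Hypothetical sketch: expose the most recent completion URL as an
    attribute rather than only printing it in DEBUG_ mode."""

    def __init__(self):
        self.last_completion_url: Optional[str] = None

    def _on_completion(self, completion_id: str) -> str:
        # Assumed URL scheme, for illustration only.
        self.last_completion_url = f"https://log10.io/completions/{completion_id}"
        return self.last_completion_url
```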
The current setup with a single requirements.txt file does not pin dependency versions. This works in the short term, but in the longer term, when things stop working, it would be really complicated to know which version of each dependency we were using.
The best practice would be to commit a lock file under version control, such as a Pipfile.lock. However, we could also take the opportunity to move to poetry, which is more popular (25k stars vs 4.7k for hatch) and has better support from tools in the ecosystem (mypy, flake8, etc.).