An open-source, locally run Python code interpreter (like OpenAI's GPT-4 Code Interpreter plugin).
- primarily for fun, as it is extremely early in development,
- extremely SIMPLE,
- 100% LOCAL &
- CROSS-PLATFORM.
It leverages open-source LLMs to interpret a user's request into Python code. The service is exposed through a Flask server, which receives the user's requests, processes them, and returns Python code.
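LLMs usually wrap generated snippets in a fenced code block inside surrounding prose, so the server has to pull the code out before returning it. Here is a minimal sketch of that extraction step (the `extract_code` helper is illustrative, not the repo's actual implementation):

```python
import re

def extract_code(llm_output: str) -> str:
    """Pull the first fenced code block out of an LLM response.

    Falls back to the raw text when no fence is present.
    """
    match = re.search(r"```(?:python)?\n(.*?)```", llm_output, re.DOTALL)
    return match.group(1).strip() if match else llm_output.strip()

reply = "Sure! Here you go:\n```python\nprint('hello')\n```"
print(extract_code(reply))  # → print('hello')
```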
- Backend: Python Flask (CORS enabled to serve both the API and the HTML).
- Frontend: HTML/JS/CSS (the UI was designed 100% to personal liking but is open to changes).
- Engine: Llama.cpp (inference library for Llama/GGML models).
- Model: Llama-2 (only models compatible with Llama.cpp).
- Arbiter: LangChain (glues all these components together).
- Wrapper: LlamaCpp (LangChain's wrapper around Llama.cpp for loading the models).
- Clone the repo:

```shell
git clone https://github.com/itsPreto/baby-interpreter
```

- Navigate to the project:

```shell
cd baby-interpreter
```

- Install the required libraries (`subprocess` ships with the Python standard library, so it does not need to be installed; requirements.txt coming soon...):

```shell
pip install flask flask_cors langchain
```
This project is configured to use LlamaCpp to load models for local inference. Models can be found on HuggingFace; once downloaded, simply place them in the `models/` folder and update the `MODEL` path variable in `main.py`:
```python
WIZARD_LM_V2 = "WizardLM-13B-V1.2/WizardLM-13B-V1.2-GGML-q4_0.bin"
USEFUL_CODER = "useful_coder/code-cherryLamma-2/useful-coder-ggml-q4_0.bin"
MODEL = f"./models/{USEFUL_CODER}"
```
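When the path is wrong, llama.cpp's load error can be cryptic, so it's worth failing fast if the file isn't on disk. A small sanity check you could add before loading (this helper is a suggestion, not part of the repo):

```python
import os

def model_exists(model_path: str) -> bool:
    """Return True only when the GGML model file is actually on disk."""
    return os.path.isfile(model_path)

MODEL = "./models/useful_coder/code-cherryLamma-2/useful-coder-ggml-q4_0.bin"
if not model_exists(MODEL):
    # Download the model from HuggingFace into ./models before starting.
    print(f"Model not found at {MODEL}")
```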
- Simply execute:

```shell
python main.py
```
The Flask server will start and listen on port 8000. The server exposes two endpoints, `/generate` and `/run`:

- `/generate`: receives a POST request with a user's question in the body. The question is processed by the LLM, and a generated Python code snippet is returned.
- `/run`: receives a POST request with Python code in the body. The code is executed, and the output of the execution is returned.
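At its core, `/run` executes untrusted, LLM-generated code in a fresh interpreter and captures whatever it prints. A minimal sketch of that step using the standard library's `subprocess` (the `run_snippet` helper is illustrative, not the repo's actual implementation, and a real deployment should sandbox this):

```python
import subprocess
import sys

def run_snippet(code: str, timeout: float = 10.0) -> str:
    """Execute a Python snippet in a child interpreter and return its output."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,  # keep runaway snippets from hanging the server
    )
    return result.stdout if result.returncode == 0 else result.stderr

print(run_snippet("print(2 + 2)"))  # → 4
```

Running the snippet in a separate process (rather than `exec()` in the server itself) keeps a crashing snippet from taking down Flask, and the timeout bounds infinite loops.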
Contributions to this project are welcome. Please fork the repository, make your changes, and submit a pull request. I'll be creating a few issues for feature tracking soon!
Also, if anyone would like to start a Discord channel and help me manage it, that would be awesome (I'm not on it that much).
This project is licensed under the MIT License.