ChatMeUp

A simple framework to create personal ChatBots
View Demo

Table of Contents
  1. About The Project
  2. Setup
  3. Contact

About The Project

As I was working on creating personalized chatbots by fine-tuning Llama models, I developed a simple Streamlit-based package that streamlines the creation of simple Q&A chatbot applications.

In effect, you plug in the .gguf file for your model (whether fine-tuned or downloaded) and have fun asking it questions 😄

Currently, this is not developed for use by anyone other than myself, but in the future I may modify the project so that it can easily support other users, and write documentation.

Setup

  1. Download the package from GitHub.
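
    For example, you can clone it with git (the URL below is an assumption based on the contributor's GitHub handle; adjust it to the actual repository):

    git clone https://github.com/perifanosprometheus/chatmeup.git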

  2. Move into the package directory you just downloaded.

    cd path/to/chatmeup

  3. Create a Python environment. This guide uses venv, but you can use Anaconda if you prefer.

    python -m venv name_of_environment

  4. Activate the Python environment.

    • For Mac/Linux:

      source name_of_environment/bin/activate

    • For Windows:

      name_of_environment\Scripts\Activate.ps1

  5. Install package requirements.

    pip3 install -r requirements.txt

  6. Define path to chatmeup as an environment variable.

    export MODULE_PATH="/path/to/chatmeup"

    You will be able to access this path in your Python scripts as follows:

    import os
    # Access file paths from environment variables
    module_path = os.environ.get('MODULE_PATH')
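
    Note that export works in Mac/Linux shells; the PowerShell equivalent on Windows is:

    $env:MODULE_PATH = "C:\path\to\chatmeup"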
    
  7. Create a models folder and place your fine-tuned model's .gguf file in it, or download into it the .gguf file for the model you want to test, as shown below.
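
    For example, one way to fetch the llama-2-13b-chat .gguf used in the examples below is with huggingface-cli (the repository shown is TheBloke's community GGUF conversion, not something this project ships; adjust to whatever model you are using):

    mkdir -p models
    huggingface-cli download TheBloke/Llama-2-13B-chat-GGUF llama-2-13b-chat.Q4_K_M.gguf --local-dir models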

  8. Create a prompt template text file based on your model. Use the placeholder {input} for the part of the prompt that you want replaced by the user input at inference. The file must be named after the model and, per the config example in step 9, lives under prompt_templates/inference/. Here is an example:

    • For a model with gguf file called: llama-2-13b-chat.Q4_K_M.gguf.
    • You should create a text file called: llama-2-13b-chat.txt.
    • The text file should contain the prompt template to be used at inference:
    [INST] <<SYS>>
    You are a helpful assistant. You answer questions cordially and respectfully.
    <</SYS>>
    
    {input} [/INST]
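
    At inference the app presumably reads this file and substitutes the user's message for the {input} placeholder. A minimal sketch of that substitution (the function name and paths are illustrative, not the package's actual API):

    import os

    def build_prompt(template_path: str, user_input: str) -> str:
        # Read the template and swap the user's message in for {input}
        with open(template_path) as f:
            template = f.read()
        return template.replace("{input}", user_input)

    module_path = os.environ.get("MODULE_PATH")
    prompt = build_prompt(
        os.path.join(module_path, "prompt_templates/inference/llama-2-13b-chat.txt"),
        "What is the capital of France?",
    )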
    
  9. Create a JSON file that will store configurations for the model. This file should follow the naming convention <name of the model>-config.json. Here is an example:

    • For a model with gguf file called: llama-2-13b-chat.Q4_K_M.gguf.
    • You should create a JSON file called: llama-2-13b-chat-config.json.
    • The JSON file should contain the model's configuration. At a bare minimum it should contain n_threads, n_ctx, n_gpu_layers, temperature, top_p, rope_freq_base, repeat_penalty, model_path, and model_template:
    {
      "n_threads": 8,
      "n_ctx": 512,
      "n_gpu_layers": 0,
      "temperature": 0.9,
      "top_p": 0.9,
      "rope_freq_base": 10000,
      "repeat_penalty": 1.1,
      "model_path": "/Users/uvl6686/repos/chatmeup/models/llama-2-13b-chat.Q4_K_M.gguf",
      "model_template": "/Users/uvl6686/repos/chatmeup/prompt_templates/inference/llama-2-13b-chat.txt"
    }
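
    These settings map naturally onto llama-cpp-python, which is a reasonable guess for the backend given the .gguf format (this README does not actually name it): load-time options go to the Llama constructor, and sampling options to each generation call. A rough sketch:

    import json
    from llama_cpp import Llama  # assumption: llama-cpp-python backend

    with open("llama-2-13b-chat-config.json") as f:
        config = json.load(f)

    # Load-time options are fixed when the model is created
    llm = Llama(
        model_path=config["model_path"],
        n_threads=config["n_threads"],
        n_ctx=config["n_ctx"],
        n_gpu_layers=config["n_gpu_layers"],
        rope_freq_base=config["rope_freq_base"],
    )

    # Sampling options are passed per generation call
    output = llm(
        "[INST] What is the capital of France? [/INST]",
        temperature=config["temperature"],
        top_p=config["top_p"],
        repeat_penalty=config["repeat_penalty"],
        max_tokens=256,
    )
    print(output["choices"][0]["text"])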
    
  10. Modify ChatBot.py to ensure the model you are trying to test can be selected. Here is an example:

    • For a model with gguf file called: llama-2-13b-chat.Q4_K_M.gguf.
    • Make sure the following lines in ChatBot.py list the model you are trying to test as an option.
    # Add a selectbox to the sidebar:
    model_type=st.sidebar.selectbox(
                    'Model',
                    ('llama-2-13b-chat',)
                )
    
    • If you want to test multiple models, repeat steps 7-9 above for each model and then simply add an additional option. Here is an example adding the codellama-7b model:
    # Add a selectbox to the sidebar:
    model_type=st.sidebar.selectbox(
                    'Model',
                    ('llama-2-13b-chat','codellama-7b')
                )
    
  11. Change the working directory to chatmeup/src/main/streamlit:

    cd ./src/main/streamlit/

  12. Use the following command to run the application locally:

    streamlit run ChatBot.py
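
    By default Streamlit serves the app at http://localhost:8501; if that port is taken, you can pick another with the --server.port flag:

    streamlit run ChatBot.py --server.port 8080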

You should now be able to play around with your model. Here is an example of what it should look like:

Example gif

Contact

Giorgio Di Salvo - [email protected]

(back to top)
