
microchain

function calling-based LLM agents. Just that, no bloat.

Installation

pip install microchain-python

Define LLM and template

from microchain import OpenAITextGenerator, HFChatTemplate, LLM

generator = OpenAITextGenerator(
    model=MODEL_NAME,
    api_key=API_KEY,
    api_base=API_BASE,
    temperature=0.7
)

template = HFChatTemplate(CHAT_TEMPLATE)
llm = LLM(generator=generator, templates=[template])

Use HFChatTemplate(template) to apply tokenizer.apply_chat_template from Hugging Face transformers.

You can also use VicunaTemplate() for a classic vicuna-style prompt.
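To see what a chat template does conceptually, here is a minimal, self-contained sketch: it flattens a list of role/content messages into one prompt string. The exact format below is illustrative only (real templates come from the model's tokenizer, as in HFChatTemplate):

```python
# Illustrative sketch of what a chat template does: flatten a list of
# {"role": ..., "content": ...} messages into a single prompt string.
# The format here is made up; real templates come from the model's
# tokenizer via tokenizer.apply_chat_template.
def vicuna_style_prompt(messages):
    parts = []
    for message in messages:
        role = "USER" if message["role"] == "user" else "ASSISTANT"
        parts.append(f"{role}: {message['content']}")
    # A trailing tag tells the model it is its turn to speak
    return "\n".join(parts) + "\nASSISTANT:"

print(vicuna_style_prompt([{"role": "user", "content": "Hi"}]))
```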

To use ChatGPT APIs you don't need to apply a template:

from microchain import OpenAIChatGenerator, LLM

generator = OpenAIChatGenerator(
    model="gpt-3.5-turbo",
    api_key=API_KEY,
    api_base="https://api.openai.com/v1",
    temperature=0.7
)

llm = LLM(generator=generator)
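Conceptually, the LLM object is just a callable that runs the prompt through any templates and then through the generator. A simplified, runnable sketch of that composition (not microchain's actual code; EchoGenerator is a stand-in so no API key is needed):

```python
# Simplified, illustrative sketch of how LLM(generator=..., templates=[...])
# composes its parts: each template rewrites the prompt, then the
# generator produces the completion. Not microchain's actual code.
class EchoGenerator:
    """Stand-in generator so the sketch runs without an API key."""
    def __call__(self, prompt, stop=None):
        return f"completion for: {prompt}"

class SimpleLLM:
    def __init__(self, generator, templates=None):
        self.generator = generator
        self.templates = templates or []

    def __call__(self, prompt, stop=None):
        for template in self.templates:
            prompt = template(prompt)  # e.g. apply a chat template
        return self.generator(prompt, stop=stop)

sketch_llm = SimpleLLM(generator=EchoGenerator(), templates=[str.upper])
print(sketch_llm("hello"))  # completion for: HELLO
```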

Define LLM functions

Define LLM callable functions as plain Python objects. Use type annotations to instruct the LLM to use the correct types.

from microchain import Function

class Sum(Function):
    @property
    def description(self):
        return "Use this function to compute the sum of two numbers"
    
    @property
    def example_args(self):
        return [2, 2]
    
    def __call__(self, a: float, b: float):
        return a + b

class Product(Function):
    @property
    def description(self):
        return "Use this function to compute the product of two numbers"
    
    @property
    def example_args(self):
        return [2, 2]
    
    def __call__(self, a: float, b: float):
        return a * b

print(Sum().help)
'''
Sum(a: float, b: float)
Use this function to compute the sum of two numbers.
Example: Sum(a=2, b=2)
'''

print(Product().help)
'''
Product(a: float, b: float)
Use this function to compute the product of two numbers.
Example: Product(a=2, b=2)
'''
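The help string can be derived mechanically from the __call__ signature plus the description and example_args properties. A self-contained sketch of that idea (not microchain's actual implementation; class names here are illustrative):

```python
import inspect

class FunctionSketch:
    """Illustrative stand-in for microchain's Function base class."""
    @property
    def help(self):
        name = type(self).__name__
        # On a bound method, signature() already excludes `self`
        params = list(inspect.signature(self.__call__).parameters.values())
        signature = ", ".join(f"{p.name}: {p.annotation.__name__}" for p in params)
        example = ", ".join(
            f"{p.name}={arg}" for p, arg in zip(params, self.example_args)
        )
        return f"{name}({signature})\n{self.description}\nExample: {name}({example})"

class SumSketch(FunctionSketch):
    @property
    def description(self):
        return "Use this function to compute the sum of two numbers"

    @property
    def example_args(self):
        return [2, 2]

    def __call__(self, a: float, b: float):
        return a + b

print(SumSketch().help)
```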

Define an LLM Agent

Register your functions with an Engine() using its register() method.

Create an Agent() using the llm and the execution engine.

Define a prompt for the LLM and include the functions' documentation using the engine.help property.

It's always a good idea to bootstrap the LLM with examples of function calls. Do this by setting agent.bootstrap = [...] with a list of function calls to run; their results are prepended to the chat history.

from microchain import Agent, Engine
from microchain.functions import Reasoning, Stop

engine = Engine()
engine.register(Reasoning())
engine.register(Stop())
engine.register(Sum())
engine.register(Product())

agent = Agent(llm=llm, engine=engine)
agent.prompt = f"""Act as a calculator. You can use the following functions:

{engine.help}

Only output valid Python function calls.

How much is (2*4 + 3)*5?
"""

agent.bootstrap = [
    'Reasoning("I need to reason step-by-step")',
]
agent.run()

Running it will output something like:

prompt:
Act as a calculator. You can use the following functions:

Reasoning(reasoning: str)
Use this function for your internal reasoning.
Example: Reasoning(reasoning=The next step to take is...)

Stop()
Use this function to stop the program.
Example: Stop()

Sum(a: float, b: float)
Use this function to compute the sum of two numbers.
Example: Sum(a=2, b=2)

Product(a: float, b: float)
Use this function to compute the product of two numbers.
Example: Product(a=2, b=2)


Only output valid Python function calls.

How much is (2*4 + 3)*5?

Running 10 iterations
>> Reasoning("I need to reason step-by-step")
The reasoning has been recorded
>> Reasoning("First, calculate the product of 2 and 4")
The reasoning has been recorded
>> Product(a=2, b=4)
8
>> Reasoning("Then, add 3 to the product of 2 and 4")
The reasoning has been recorded
>> Sum(a=8, b=3)
11
>> Reasoning("Lastly, multiply the sum by 5")
The reasoning has been recorded
>> Product(a=11, b=5)
55
>> Reasoning("So, the result of (2*4 + 3)*5 is 55")
The reasoning has been recorded
>> Stop()
The program has been stopped

You can find more examples here

microchain's People

Contributors

evangriffiths, galatolofederico, git-marco00, kongzii


microchain's Issues

Handle empty reply separately from non-empty reply that is empty once parsed by `Agent.clean_reply`

Have had an issue where the LLM returns a poorly formatted function call, e.g. MyFunction (without the brackets), and then Agent.clean_reply parses this and returns an empty string. The feedback the LLM gets in this case (Error: empty reply, retrying here) isn't very informative, so it ends up repeating the same mistake until it aborts once max_retries is exceeded.

A suggestion is to give informative feedback to the LLM if the reply is not empty, but is empty once parsed by clean_reply.
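One way to implement that suggestion (a hypothetical helper; the function name and error messages are illustrative, not microchain's API):

```python
# Hypothetical sketch of more informative feedback: distinguish a truly
# empty reply from a non-empty reply that clean_reply could not parse.
def feedback_for_reply(raw_reply: str, cleaned_reply: str):
    if not raw_reply.strip():
        return "Error: empty reply, retrying"
    if not cleaned_reply.strip():
        return (
            f"Error: could not parse a function call from {raw_reply.strip()!r}. "
            "Replies must look like FunctionName(arg=value)"
        )
    return None  # reply parsed fine, no corrective feedback needed

print(feedback_for_reply("MyFunction", ""))
```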

Will put a PR up myself over the coming days if nobody else takes it 😄

Langfuse integration

Hi, we are now integrating Langfuse into our non-microchain agents, and so far, it seems like a great library for monitoring LLMs.

Would you be interested in having built-in support for Langfuse?

I'm not sure how big the changes to Microchain would be, but most probably not too bad.

If that's something you'd like, I could implement it sometime next week. If not, I'd find a way to use it without modifying Microchain itself.

Error running the math example

prompt:
Act as a calculator. You can use the following functions:

Reasoning(reasoning: str)
Use this function for your internal reasoning.
Example: Reasoning(reasoning=The next step to take is...)

Stop()
Use this function to stop the program.
Example: Stop()

Sum(a: float, b: float)
Use this function to compute the sum of two numbers.
Example: Sum(a=2, b=2)

Product(a: float, b: float)
Use this function to compute the product of two numbers.
Example: Product(a=2, b=2)


Only output valid Python function calls.

How much is (2*4 + 3)*5?

Running 10 iterations
>> Reasoning("I need to reason step-by-step")
The reasoning has been recorded
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-9-30cf4b3f9306> in <cell line: 23>()
     21     'Reasoning("I need to reason step-by-step")',
     22 ]
---> 23 agent.run()

2 frames
/usr/local/lib/python3.10/dist-packages/microchain/engine/agent.py in run(self, iterations)
     73                     break
     74                 tries += 1
---> 75                 reply = self.llm(history + temp_messages, stop=["\n"])
     76                 reply = self.clean_reply(reply)
     77                 if len(reply) < 2:

/usr/local/lib/python3.10/dist-packages/microchain/models/llm.py in __call__(self, prompt, stop)
     11             prompt = template(prompt)
     12 
---> 13         return self.generator(prompt, stop=stop)

/usr/local/lib/python3.10/dist-packages/microchain/models/generators.py in __call__(self, messages, stop)
     39             return "Error: timeout"
     40 
---> 41         output = response.choices[0].text.strip()
     42 
     43         return output

AttributeError: 'Choice' object has no attribute 'text'

ModuleNotFoundError: No module named 'microchain.models'

This is despite

Collecting microchain-python
  Downloading microchain_python-0.2-py3-none-any.whl (15 kB)
Requirement already satisfied: termcolor==2.4.0 in /usr/local/lib/python3.10/dist-packages (from microchain-python) (2.4.0)
Installing collected packages: microchain-python
Successfully installed microchain-python-0.2

what is the template?

template = HFChatTemplate(os.environ["TEMPLATE_NAME"])

what is this template? is there a sample .env for this? i can't wrap my head around this
