nvidia / nemo-guardrails
3.6K stars · 34 watchers · 318 forks · 19.71 MB

NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

License: Other

HTML 0.26% JavaScript 0.01% Python 99.44% Dockerfile 0.29%


nemo-guardrails's Issues

Creating new Rails and executing

Hi,
I am unable to add new rails and execute them. Even if I add new rails in the examples folder, they don't show up in the UI for rail selection.
In v0.1.0, I could edit the existing rails in the examples folder and the bot behaved accordingly, but after updating to v0.2.0 the changes no longer take effect. For example, if I change the config file and knowledge base of the 'topical_rails' folder in 'Lib\site-packages\examples' to behave as a different bot, it still answers as the default US Job Reports bot. The LangChain prompts that are executed are also those of the US Job Reports bot, even though I have removed all of those files from the system. Clearing the 'pycache' folder didn't help either.
Most likely I am making a mistake somewhere.
Could someone guide me through a step-by-step process for creating and then executing a rail, with examples?
Thanks in advance!
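
For reference, a minimal sketch of the intended workflow (the folder name my_config is hypothetical; it should contain a config.yml and one or more .co files, kept outside of site-packages):

from nemoguardrails import LLMRails, RailsConfig

# Load the rails from a dedicated folder instead of editing Lib\site-packages\examples.
config = RailsConfig.from_path("./my_config")
rails = LLMRails(config)

new_message = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(new_message)

The same folder can also be passed to the CLI (e.g. nemoguardrails chat --config=./my_config), which avoids the stale-example problem described above.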

Set Colang variables from Python

Hi team! I was wondering if there is a specific function for setting Colang variables from the Python API? Similar to how we register actions with:

rails.register_action(action=action, name="name")
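
For reference, one workaround is to set the variable through a registered action whose return value is assigned in the flow (a minimal sketch; set_user_name is a hypothetical action):

async def set_user_name():
    # In a flow: $user_name = execute set_user_name
    return "John"

rails.register_action(action=set_user_name, name="set_user_name")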

What blocks the multi-language use case?

I failed to use NeMo Guardrails with other languages; the responses are always something like "I don't understand <language>", so it does recognize the language of the input, but it still provides no answer.

I am wondering what blocks multi-language use cases. Is it because Colang only supports English? Does that mean that even if I define a greeting flow in French, e.g. define utilisateur exprimer salutation instead of define user express greeting, it wouldn't work?
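
For concreteness, the French variant described above would look roughly like this in Colang (a sketch; whether intent matching then works depends on the underlying LLM and the examples provided):

define user exprimer salutation
  "bonjour"
  "salut"

define flow
  user exprimer salutation
  bot express greeting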

Completion vs. Chat - clarification request

Hey,
I wanted to start off by saying a huge thanks for the awesome work your team has done, it is really an amazing project.

I've been tinkering around with Nemo and I have a question I'm hoping you can help me with,

During my usage of Nemo with chat models (such as GPT-3.5/4), I have encountered an issue where the process deviates from the intended flow during bot message generation. Although user intent is being correctly identified, the bot message generated employs the language model (LLM) to create a more general bot response instead of adhering to the predefined conversational flow.

When using Nemo with a completion model (specifically davinci-003), I've observed that it faithfully follows the conversation flows. However, it appears to lack the creative element and capabilities of a language model when generating bot messages.

Could you please clarify the distinctions between running Nemo with a chat model vs. a completion model? Additionally, if it's not too much trouble, I would appreciate guidance on the necessary adjustments when using each of these models. (Reading the documentation, I couldn't figure out how to make both work properly.)

Getting some insights on this would be a game-changer for me.
Thanks again for the hard work,
Ari

Suggestion: Extend llm type in hallucination.py

async def check_hallucination(
    context: Optional[dict] = None,
    llm: Optional[BaseLLM] = None,
    use_llm_checking: bool = True,
    config: Optional[RailsConfig] = None,
):
    """Checks if the last bot response is a hallucination by checking multiple completions for self-consistency.

    :return: True if hallucination is detected, False otherwise.
    """

    bot_response = context.get("last_bot_message")
    last_bot_prompt_string = context.get("_last_bot_prompt")

    if bot_response and last_bot_prompt_string:
        num_responses = HALLUCINATION_NUM_EXTRA_RESPONSES
        # Use beam search for the LLM call, to get several completions with only one call.
        # At the current moment, only OpenAI LLM engines are supported for computing the additional completions.
        if type(llm) != OpenAI:
            log.warning(
                f"Hallucination rail can only be used with OpenAI LLM engines."
                f"Current LLM engine is {type(llm).__name__}."
            )
            return False

In LangChain, models are split into two class hierarchies: for the instruction-following OpenAI models the hierarchy is BaseLLM -> BaseOpenAI -> OpenAI; for chat models like gpt-3.5 it is BaseLanguageModel -> BaseChatModel -> ChatOpenAI.

The hallucination code above limits the llm field to the instruction-model hierarchy only; I am not sure if this is by design. If not, the llm field could be extended to cover both, e.g. llm: Optional[Union[BaseLLM, BaseLanguageModel]] = None.

Separately, if type(llm) != OpenAI: doesn't match subclasses that inherit from OpenAI; I'd recommend if not isinstance(llm, OpenAI): (or the equivalent issubclass(type(llm), OpenAI) check). And of course, you may add ChatOpenAI here.
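
A minimal sketch of the suggested change inside the quoted function (assuming ChatOpenAI should also be allowed; llm and log come from the surrounding function, and this is not the library's actual fix):

from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# Accept OpenAI (and its subclasses) as well as the chat wrapper, mirroring the
# original check but using isinstance instead of an exact type comparison.
if not isinstance(llm, (OpenAI, ChatOpenAI)):
    log.warning(
        f"Hallucination rail can only be used with OpenAI LLM engines. "
        f"Current LLM engine is {type(llm).__name__}."
    )
    return False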

Any Chinese LLM supported?

May I ask if there are any major Chinese models currently supported, such as ChatGLM from Tsinghua University?

Support for Agents as Actions?

The examples show a Q&A chain that finds answers over docs. I want to use an agent chain as described here to run Google searches given a user query. However, the bot is generating "internal error" messages and I'm not sure how to get more transparency under the hood. Here's a demo script:

import os

from nemoguardrails import LLMRails, RailsConfig
from langchain.agents import Tool
from langchain.memory import ConversationBufferMemory
from langchain.utilities import SerpAPIWrapper
from langchain.agents import initialize_agent, AgentType
from dotenv import load_dotenv


load_dotenv()

YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""
COLANG_CONFIG = """
define user express greeting
  "hey"
  "yo"
  "hi"
  "hello"
  "what's up"

define bot express greeting
  "Hello there!"

define bot offer to help
  "How can I help you today?"

define flow
  user express greeting
  bot express greeting
  bot offer to help

define flow
  user ...
  $answer = execute qa_chain(input=$last_user_message)
  bot $answer
"""

def main():

    search = SerpAPIWrapper()
    tools = [
        Tool(
            name = "Current Search",
            func=search.run,
            description="useful for when you need to answer questions about current events or the current state of the world. the input to this should be a single search term."
        ),
    ]

    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

    config = RailsConfig.from_content(COLANG_CONFIG, YAML_CONFIG)
    APP = LLMRails(config, verbose=True)

    agent_chain = initialize_agent(tools, APP.llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
    
    APP.register_action(agent_chain, name="qa_chain")

    history = [{"role": "user", "content": "How old was Obama when the Capitals won the Stanley Cup?"}]
    result = APP.generate(messages=history)
    print(result)

if __name__ == "__main__":
    main()

It seems that the Colang logic correctly identifies that the qa_chain needs to be called, but after two turns it generates an "internal error"...

> Finished chain.
  ask general question
bot response for general question
  "I'm sorry, I don't know the answer to that question. However, I can help you find the answer if you'd like."


> Entering new AgentExecutor chain...


RESPONSE
--------------------
```json
{
    "action": "Current Search",
    "action_input": "Age of Barack Obama when Capitals won Stanley Cup"
}
```

> Entering new AgentExecutor chain...


RESPONSE
--------------------
```json
{
    "action": "Current Search",
    "action_input": "Age of Barack Obama when Capitals won Stanley Cup"
}
```

{'role': 'assistant', 'content': "I'm sorry, an internal error has occurred."}

Are agents just not supported yet or is there something I can do to improve my outcomes here? Thanks

Extract canonical forms from existing chats for both user and bot

Hi everyone!

First of all,
This tool is AMAZING. It reduced the time it took us to build guardrails around our LLMs from a few weeks to a couple of days!

I was wondering whether there is any intent to enable re-running chat conversations and extracting canonical forms from them (for both user and bot)?
That is, sending an array of messages and, instead of getting the next message, just generating the context object of the conversation (the canonical forms of all the messages).
That would be a major step towards dynamically generating the needed NeMo config files and even personalizing them with just a few chat examples.

Thank you again!

Async error during the call of default code

Hi!
I get an error during the call of the code from README:

from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("config.yaml")
app = LLMRails(config)

new_message = app.generate(messages=[{
    "role": "user",
    "content": "Hello! What can you do for me?"
}])

Error:

You are using the sync `generate` inside async code. You should replace with `await generate_async(...)`.

But I didn't start an event loop and I don't know why asyncio.get_running_loop() reports a running loop.

Versions:

  • nemoguardrails == 0.1.0
  • langchain == 0.0.157
  • openai == 0.27.6
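
As the error message suggests, one workaround is to use the async API instead (a sketch; this works in a plain script via asyncio.run, or with a bare await in environments that already run an event loop):

import asyncio

from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("config.yaml")
app = LLMRails(config)

async def main():
    # The async variant bypasses the sync-inside-async check.
    return await app.generate_async(messages=[{
        "role": "user",
        "content": "Hello! What can you do for me?"
    }])

print(asyncio.run(main()))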

Support for huggingface/text-generation-inference

This library from HF is pretty great and I get a lot of use out of it in production settings for LLMs. I would love to figure out how to integrate a system like this for LLM safety with it, so I can use HF models, get dynamic batching, and stream tokens with the guardrails library!
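
Since LLMRails accepts a custom LangChain LLM (see the llm parameter used in other issues here), one possible integration sketch, assuming a text-generation-inference server running on localhost:8080 (the URL and parameters are hypothetical):

from langchain.llms import HuggingFaceTextGenInference
from nemoguardrails import LLMRails, RailsConfig

# Point LangChain's TGI wrapper at a running text-generation-inference server.
llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8080/",
    max_new_tokens=256,
    temperature=0.7,
)

config = RailsConfig.from_path("./config")
app = LLMRails(config, llm=llm)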

Runtime Error nemoguardrails 0.1.0

Hello guys,

I get this error when trying to do a super simple hello-world example on a Mac:
INFO:nemoguardrails.actions.action_dispatcher:Error type object 'Completion' has no attribute 'acreate' while execution generate_user_intent

Bot response is just:
I'm sorry, an internal error has occurred.

My config.yaml is the very basic / default :

(screenshot of the default config.yaml)

I have some similarly simple .co files
Any idea what the issue could be?

pip list | grep
langchain 0.0.137
nemoguardrails 0.1.0

Vulnerabilities in the dependent packages of the nemoguardrails package

Hello,

Firstly, thank you for this great library.
However, when I tried to do a pip install, I saw some vulnerable packages with a high severity level being downloaded, for example:

Package name: starlette
Package version: 0.26.1
Issue: a high severity vulnerability, fixed in the later version 0.27.0 or higher
Advisory ID: SNYK-PYTHON-STARLETTE-5538332

Package name: requests
Package version: 2.28.2
Issue: a medium severity vulnerability, fixed in 2.31.2
Advisory ID: SNYK-PYTHON-REQUESTS-5595532

So I wanted to check: is there any plan to upgrade the versions of the above modules to fix the vulnerabilities?

Thanks in advance

SageMaker LLM endpoint

Hi, is there any way to use a SageMaker LLM endpoint instead of OpenAI for the engine?
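
There is no dedicated engine, but passing LangChain's SagemakerEndpoint LLM into LLMRails may work (a sketch; the endpoint name is hypothetical, and the content handler must match your deployed model's payload format):

import json

from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler
from nemoguardrails import LLMRails, RailsConfig

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt, model_kwargs):
        # Serialize the prompt into the JSON payload the endpoint expects.
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output):
        # Extract the generated text from the endpoint response.
        return json.loads(output.read().decode("utf-8"))[0]["generated_text"]

llm = SagemakerEndpoint(
    endpoint_name="my-llm-endpoint",  # hypothetical endpoint name
    region_name="us-east-1",
    content_handler=ContentHandler(),
)

config = RailsConfig.from_path("./config")
app = LLMRails(config, llm=llm)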

Parameter temperature does not exist

I'm trying to integrate HuggingFace models with this framework. I'm currently using HuggingFaceHub() as my LLM using a similar structure to the example. However, I keep getting the errors:

WARNING:nemoguardrails.llm.params:Parameter temperature does not exist for WrapperLLM
WARNING:langchain.callbacks.manager:Error in on_llm_end callback: 'NoneType' object has no attribute 'get'
ERROR:nemoguardrails.actions.action_dispatcher:Error can only concatenate str (not "NoneType") to str while execution generate_user_intent

How well does this work?

I've looked through the docs in this repo, and the two blog posts, but haven't managed to find an evaluation of how well this works. I'm curious to know if an evaluation has been carried out, and what was found? E.g. how well does this detect jailbreaks? Presumably it doesn't catch all of them, but does it catch common ones? And how well does it map user input to the questions? Presumably sometimes a user might word things strangely, so it doesn't get mapped to the correct thing? Etc. (I'd be interested to know how you would go about benchmarking a system like this.)

Argument of type `property` is not iterable

Hi there -

I'm running into an issue where the model_kwargs for a HuggingFace pipeline are not recognised and not parsed.

From what I can tell, the model_kwargs exist, but accessing them produces an object that isn't iterable, which looks like a bug unless I'm misunderstanding how NeMo parses the parameters(?)

Here's the code:

from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from nemoguardrails import RailsConfig, LLMRails
from nemoguardrails.llm.helpers import get_llm_instance_wrapper

MODEL_ID = 'gpt2'
MODEL_ARGS = {
    'max_new_tokens': 10,
    'max_length': 30,
    'temperature': 0.0
}
CONFIG_FILE = 'config.co'

GREETING = 'Hello there!'

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=MODEL_ID
)
pipe = pipeline(
    task='text-generation',
    model=model,
    tokenizer=tokenizer,
    model_kwargs=MODEL_ARGS
)

llm = get_llm_instance_wrapper(
    llm_instance=HuggingFacePipeline(pipeline=pipe), 
    llm_type='HuggingFacePipeline'
)
config = RailsConfig.from_content(colang_content=CONFIG_FILE)
app = LLMRails(config, llm=llm, verbose=True)

has = hasattr(llm, 'model_kwargs')  # has = True
fetch = getattr(llm, 'model_kwargs', {})  # fetch = <property object at 0x13d4eb0b0>

new_message = app.generate(messages=[{
    'role': 'user',
    'content': GREETING
}])

Here's the stack trace, which is thrown at the point of app.generate():

Error argument of type 'property' is not iterable while execution generate_user_intent
Traceback (most recent call last):
  File ".../venv/lib/python3.10/site-packages/nemoguardrails/actions/action_dispatcher.py", line 125, in execute_action
    result = await fn(**params)
  File ".../venv/lib/python3.10/site-packages/nemoguardrails/actions/llm/generation.py", line 269, in generate_user_intent
    with llm_params(llm, temperature=self.config.lowest_temperature):
  File ".../venv/lib/python3.10/site-packages/nemoguardrails/llm/params.py", line 44, in __enter__
    elif hasattr(self.llm, "model_kwargs") and param in getattr(
TypeError: argument of type 'property' is not iterable

Here's the versions of the libraries used (with python==3.10.12):

langchain==0.0.251
transformers==4.31.0
nemoguardrails==0.4.0

Any help or guidance here would be great, especially if I'm missing something obvious!
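
One possible culprit: get_llm_instance_wrapper returns a provider class rather than an LLM instance, so model_kwargs on it is still an unevaluated property (matching the <property object ...> seen above). A sketch of the provider-registration route instead of passing llm= directly (assuming register_llm_provider is available, as in the custom-LLM examples; the engine name hf_pipeline_custom is hypothetical):

from nemoguardrails.llm.providers import register_llm_provider

# Register the wrapped pipeline as a provider instead of passing the class as `llm`.
HFPipelineCustom = get_llm_instance_wrapper(
    llm_instance=HuggingFacePipeline(pipeline=pipe),
    llm_type='hf_pipeline_custom'
)
register_llm_provider('hf_pipeline_custom', HFPipelineCustom)

# config.yml would then select it with:
# models:
#   - type: main
#     engine: hf_pipeline_custom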

Unable to install the package on Mac OS

Unable to install Nemo-guardrails on Mac OS

The issue seems to be with the package annoy==1.17.1: building the wheel fails for this version; however, 1.17.3 works.
I upgraded my pip and setuptools versions too, but the error persisted.
Can you please look into this? @drazvan @aaronp24

Python version: 3.10.9
macOS: Ventura 13.2

Sharing the error log below:

Collecting nemoguardrails
  Using cached nemoguardrails-0.3.0-py3-none-any.whl (13.8 MB)
Requirement already satisfied: pydantic~=1.10.6 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (1.10.11)
Requirement already satisfied: aiohttp==3.8.4 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (3.8.4)
Requirement already satisfied: langchain==0.0.167 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (0.0.167)
Requirement already satisfied: requests>=2.31.0 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (2.31.0)
Requirement already satisfied: typer==0.7.0 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (0.7.0)
Requirement already satisfied: PyYAML~=6.0 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (6.0)
Requirement already satisfied: setuptools~=65.5.1 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (65.5.1)
Collecting annoy==1.17.1 (from nemoguardrails)
  Using cached annoy-1.17.1.tar.gz (647 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: sentence-transformers==2.2.2 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (2.2.2)
Requirement already satisfied: fastapi==0.96.0 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (0.96.0)
Requirement already satisfied: starlette==0.27.0 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (0.27.0)
Requirement already satisfied: uvicorn==0.22.0 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (0.22.0)
Requirement already satisfied: httpx==0.23.3 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (0.23.3)
Requirement already satisfied: simpleeval==0.9.13 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (0.9.13)
Requirement already satisfied: typing-extensions==4.5.0 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (4.5.0)
Requirement already satisfied: Jinja2==3.1.2 in ./venv/lib/python3.10/site-packages (from nemoguardrails) (3.1.2)
Requirement already satisfied: attrs>=17.3.0 in ./venv/lib/python3.10/site-packages (from aiohttp==3.8.4->nemoguardrails) (23.1.0)
Requirement already satisfied: charset-normalizer<4.0,>=2.0 in ./venv/lib/python3.10/site-packages (from aiohttp==3.8.4->nemoguardrails) (3.1.0)
Requirement already satisfied: multidict<7.0,>=4.5 in ./venv/lib/python3.10/site-packages (from aiohttp==3.8.4->nemoguardrails) (6.0.4)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in ./venv/lib/python3.10/site-packages (from aiohttp==3.8.4->nemoguardrails) (4.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in ./venv/lib/python3.10/site-packages (from aiohttp==3.8.4->nemoguardrails) (1.9.2)
Requirement already satisfied: frozenlist>=1.1.1 in ./venv/lib/python3.10/site-packages (from aiohttp==3.8.4->nemoguardrails) (1.3.3)
Requirement already satisfied: aiosignal>=1.1.2 in ./venv/lib/python3.10/site-packages (from aiohttp==3.8.4->nemoguardrails) (1.3.1)
Requirement already satisfied: certifi in ./venv/lib/python3.10/site-packages (from httpx==0.23.3->nemoguardrails) (2023.5.7)
Requirement already satisfied: httpcore<0.17.0,>=0.15.0 in ./venv/lib/python3.10/site-packages (from httpx==0.23.3->nemoguardrails) (0.16.3)
Requirement already satisfied: rfc3986[idna2008]<2,>=1.3 in ./venv/lib/python3.10/site-packages (from httpx==0.23.3->nemoguardrails) (1.5.0)
Requirement already satisfied: sniffio in ./venv/lib/python3.10/site-packages (from httpx==0.23.3->nemoguardrails) (1.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in ./venv/lib/python3.10/site-packages (from Jinja2==3.1.2->nemoguardrails) (2.1.3)
Requirement already satisfied: SQLAlchemy<3,>=1.4 in ./venv/lib/python3.10/site-packages (from langchain==0.0.167->nemoguardrails) (2.0.18)
Requirement already satisfied: dataclasses-json<0.6.0,>=0.5.7 in ./venv/lib/python3.10/site-packages (from langchain==0.0.167->nemoguardrails) (0.5.9)
Requirement already satisfied: numexpr<3.0.0,>=2.8.4 in ./venv/lib/python3.10/site-packages (from langchain==0.0.167->nemoguardrails) (2.8.4)
Requirement already satisfied: numpy<2,>=1 in ./venv/lib/python3.10/site-packages (from langchain==0.0.167->nemoguardrails) (1.25.0)
Requirement already satisfied: openapi-schema-pydantic<2.0,>=1.2 in ./venv/lib/python3.10/site-packages (from langchain==0.0.167->nemoguardrails) (1.2.4)
Requirement already satisfied: tenacity<9.0.0,>=8.1.0 in ./venv/lib/python3.10/site-packages (from langchain==0.0.167->nemoguardrails) (8.2.2)
Requirement already satisfied: tqdm>=4.48.0 in ./venv/lib/python3.10/site-packages (from langchain==0.0.167->nemoguardrails) (4.65.0)
Requirement already satisfied: transformers<5.0.0,>=4.6.0 in ./venv/lib/python3.10/site-packages (from sentence-transformers==2.2.2->nemoguardrails) (4.30.2)
Requirement already satisfied: torch>=1.6.0 in ./venv/lib/python3.10/site-packages (from sentence-transformers==2.2.2->nemoguardrails) (2.0.1)
Requirement already satisfied: torchvision in ./venv/lib/python3.10/site-packages (from sentence-transformers==2.2.2->nemoguardrails) (0.15.2)
Requirement already satisfied: scikit-learn in ./venv/lib/python3.10/site-packages (from sentence-transformers==2.2.2->nemoguardrails) (1.3.0)
Requirement already satisfied: scipy in ./venv/lib/python3.10/site-packages (from sentence-transformers==2.2.2->nemoguardrails) (1.11.1)
Requirement already satisfied: nltk in ./venv/lib/python3.10/site-packages (from sentence-transformers==2.2.2->nemoguardrails) (3.8.1)
Requirement already satisfied: sentencepiece in ./venv/lib/python3.10/site-packages (from sentence-transformers==2.2.2->nemoguardrails) (0.1.99)
Requirement already satisfied: huggingface-hub>=0.4.0 in ./venv/lib/python3.10/site-packages (from sentence-transformers==2.2.2->nemoguardrails) (0.16.2)
Requirement already satisfied: anyio<5,>=3.4.0 in ./venv/lib/python3.10/site-packages (from starlette==0.27.0->nemoguardrails) (3.7.1)
Requirement already satisfied: click<9.0.0,>=7.1.1 in ./venv/lib/python3.10/site-packages (from typer==0.7.0->nemoguardrails) (8.1.3)
Requirement already satisfied: h11>=0.8 in ./venv/lib/python3.10/site-packages (from uvicorn==0.22.0->nemoguardrails) (0.14.0)
Requirement already satisfied: idna<4,>=2.5 in ./venv/lib/python3.10/site-packages (from requests>=2.31.0->nemoguardrails) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in ./venv/lib/python3.10/site-packages (from requests>=2.31.0->nemoguardrails) (2.0.3)
Requirement already satisfied: exceptiongroup in ./venv/lib/python3.10/site-packages (from anyio<5,>=3.4.0->starlette==0.27.0->nemoguardrails) (1.1.2)
Requirement already satisfied: marshmallow<4.0.0,>=3.3.0 in ./venv/lib/python3.10/site-packages (from dataclasses-json<0.6.0,>=0.5.7->langchain==0.0.167->nemoguardrails) (3.19.0)
Requirement already satisfied: marshmallow-enum<2.0.0,>=1.5.1 in ./venv/lib/python3.10/site-packages (from dataclasses-json<0.6.0,>=0.5.7->langchain==0.0.167->nemoguardrails) (1.5.1)
Requirement already satisfied: typing-inspect>=0.4.0 in ./venv/lib/python3.10/site-packages (from dataclasses-json<0.6.0,>=0.5.7->langchain==0.0.167->nemoguardrails) (0.9.0)
Requirement already satisfied: filelock in ./venv/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence-transformers==2.2.2->nemoguardrails) (3.12.2)
Requirement already satisfied: fsspec in ./venv/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence-transformers==2.2.2->nemoguardrails) (2023.6.0)
Requirement already satisfied: packaging>=20.9 in ./venv/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence-transformers==2.2.2->nemoguardrails) (23.1)
Requirement already satisfied: greenlet!=0.4.17 in ./venv/lib/python3.10/site-packages (from SQLAlchemy<3,>=1.4->langchain==0.0.167->nemoguardrails) (2.0.2)
Requirement already satisfied: sympy in ./venv/lib/python3.10/site-packages (from torch>=1.6.0->sentence-transformers==2.2.2->nemoguardrails) (1.12)
Requirement already satisfied: networkx in ./venv/lib/python3.10/site-packages (from torch>=1.6.0->sentence-transformers==2.2.2->nemoguardrails) (3.1)
Requirement already satisfied: regex!=2019.12.17 in ./venv/lib/python3.10/site-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers==2.2.2->nemoguardrails) (2023.6.3)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in ./venv/lib/python3.10/site-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers==2.2.2->nemoguardrails) (0.13.3)
Requirement already satisfied: safetensors>=0.3.1 in ./venv/lib/python3.10/site-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers==2.2.2->nemoguardrails) (0.3.1)
Requirement already satisfied: joblib in ./venv/lib/python3.10/site-packages (from nltk->sentence-transformers==2.2.2->nemoguardrails) (1.3.1)
Requirement already satisfied: threadpoolctl>=2.0.0 in ./venv/lib/python3.10/site-packages (from scikit-learn->sentence-transformers==2.2.2->nemoguardrails) (3.1.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in ./venv/lib/python3.10/site-packages (from torchvision->sentence-transformers==2.2.2->nemoguardrails) (9.5.0)
Requirement already satisfied: mypy-extensions>=0.3.0 in ./venv/lib/python3.10/site-packages (from typing-inspect>=0.4.0->dataclasses-json<0.6.0,>=0.5.7->langchain==0.0.167->nemoguardrails) (1.0.0)
Requirement already satisfied: mpmath>=0.19 in ./venv/lib/python3.10/site-packages (from sympy->torch>=1.6.0->sentence-transformers==2.2.2->nemoguardrails) (1.3.0)
Building wheels for collected packages: annoy
  Building wheel for annoy (setup.py) ... error
  error: subprocess-exited-with-error
  
  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [18 lines of output]
      /Users/bhavishya.pandit/PycharmProjects/AI Security/venv/lib/python3.10/site-packages/setuptools/installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
        warnings.warn(
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build/lib.macosx-10.9-universal2-cpython-310
      creating build/lib.macosx-10.9-universal2-cpython-310/annoy
      copying annoy/__init__.py -> build/lib.macosx-10.9-universal2-cpython-310/annoy
      copying annoy/__init__.pyi -> build/lib.macosx-10.9-universal2-cpython-310/annoy
      copying annoy/py.typed -> build/lib.macosx-10.9-universal2-cpython-310/annoy
      running build_ext
      building 'annoy.annoylib' extension
      creating build/temp.macosx-10.9-universal2-cpython-310
      creating build/temp.macosx-10.9-universal2-cpython-310/src
      /usr/bin/clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -arch arm64 -arch x86_64 -g "-I/Users/bhavishya.pandit/PycharmProjects/AI Security/venv/include" -I/Library/Frameworks/Python.framework/Versions/3.10/include/python3.10 -c src/annoymodule.cc -o build/temp.macosx-10.9-universal2-cpython-310/src/annoymodule.o -D_CRT_SECURE_NO_WARNINGS -fpermissive -march=native -O3 -ffast-math -fno-associative-math -DANNOYLIB_MULTITHREADED_BUILD -std=c++14 -mmacosx-version-min=10.12
      clang: error: the clang compiler does not support '-march=native'
      error: command '/usr/bin/clang' failed with exit code 1
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for annoy
  Running setup.py clean for annoy
Failed to build annoy
**ERROR: Could not build wheels for annoy, which is required to install pyproject.toml-based projects**

Thank you!

Bots don't reply to me in Japanese.

When I ask the bot a question in Japanese, it responds with "I'm sorry, I don't know the answer to that question.".

I want to be able to converse with the bot in Japanese.
What settings do I need to change to achieve this?
I would like to report this issue on GitHub.

For example, I ask "先月の失業率を教えてください。", which means "What was last month's unemployment rate?"
The bot responds "I'm sorry, I don't know the answer to that question."

Can long flows be recognized now?

Can long flows be recognized now? I have noticed that when the flow is Q->A->Q->A->Q->A->Bye, it always gets stuck in a QA loop and never triggers the Bye intent.

NeMo-Guardrails does not work with many other LLM providers

This issue somewhat overlaps with #27. However, I've chosen to create this issue because it was mentioned in the thread of #27 that support for other LLM models would be added by the end of May. Support was seemingly added with commit e849ee9, but I am unable to use several of the engines due to various bugs.

NeMo-Guardrails doesn't seem to work with the following engines:

  • huggingface_pipeline
  • huggingface_textgen_inference
  • gpt4all

I cannot confirm whether it works properly with any other engines; I have only tested these three, and failed to interact with the model for each of them.

I have included my configuration and output for gpt4all only. Attempting to use the other two engines above also causes similar issues. If you are able to construct a chatbot with guardrails using any of these engines, please let me know and I will re-evaluate.

Here are my configurations and code:

./config/colang_config.co:

define user express greeting
  "hi"

define bot remove last message
  "(remove last message)"

define flow
  user ...
  bot respond
  $updated_msg = execute check_if_constitutional
  if $updated_msg != $last_bot_message
    bot remove last message
    bot $updated_msg

# Basic guardrail against insults.
define flow
  user express insult
  bot express calmly willingness to help

./config/yaml_config.yaml

models:
  - type: main
    engine: gpt4all
    model: gpt4all-j-v1.3-groovy

./demo_guardrails.py

from nemoguardrails.rails import LLMRails, RailsConfig

def demo():
  # In practice, a folder will be used with the config split across multiple files.
  config = RailsConfig.from_path("./config")
  rails = LLMRails(config)

  # For chat
  new_message = rails.generate(messages=[{
      "role": "user",
      "content": "Hello! What can you do for me?"
  }])

  print("RESPONSE:", new_message)

if __name__ == "__main__":
    demo()

After running demo_guardrails.py with the above configurations, I receive the following output:

Traceback (most recent call last):
  File "/Users/efkan/anaconda3/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/efkan/anaconda3/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/efkan/.vscode/extensions/ms-python.python-2023.10.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
    cli.main()
  File "/Users/efkan/.vscode/extensions/ms-python.python-2023.10.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
    run()
  File "/Users/efkan/.vscode/extensions/ms-python.python-2023.10.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "/Users/efkan/.vscode/extensions/ms-python.python-2023.10.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/Users/efkan/.vscode/extensions/ms-python.python-2023.10.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/Users/efkan/.vscode/extensions/ms-python.python-2023.10.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "/Users/efkan/Desktop/repos/ibm-repos-private/demo_guardrails.py", line 17, in <module>
    demo()
  File "/Users/efkan/Desktop/repos/ibm-repos-private/demo_guardrails.py", line 6, in demo
    rails = LLMRails(config)
  File "/Users/efkan/anaconda3/lib/python3.10/site-packages/nemoguardrails/rails/llm/llmrails.py", line 79, in __init__
    self._init_llm()
  File "/Users/efkan/anaconda3/lib/python3.10/site-packages/nemoguardrails/rails/llm/llmrails.py", line 143, in _init_llm
    self.llm = provider_cls(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for GPT4All
__root__
  Model.__init__() got an unexpected keyword argument 'n_parts' (type=type_error)

It seems that the code fails when initializing the GPT4All model. Please let me know if I've missed anything. Thanks!

Update Langchain version

Looks like NeMo-Guardrails still requires langchain version 0.0.167, while the current version is 0.0.232.

Can the langchain version requirements be updated?

Creating the config and colang files from python code itself

Hi,

Thank you for the open-source tool. I want to create the config files (instructions, sample conversation) and the Colang files (canonical form and flow definitions) programmatically from Python code.
My aim is to take these as inputs in Python and generate the config and Colang files.
Is this possible using the nemoguardrails Python API, and if so, how?
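
Yes; RailsConfig.from_content accepts the Colang and YAML content directly as strings (the same API used in other issues here), so both can be generated programmatically. A minimal sketch:

from nemoguardrails import LLMRails, RailsConfig

# Build the YAML and Colang content as ordinary Python strings
# (e.g. assembled from templates or user input).
yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

colang_content = """
define user express greeting
  "hello"

define flow
  user express greeting
  bot express greeting
"""

config = RailsConfig.from_content(colang_content, yaml_content)
rails = LLMRails(config)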

AttributeError: 'NoneType' object has no attribute '_run_module'

Hi,

Thank you for creating this tool!

I am trying to run a simple rail that is already present in the repo, just to get started.
As I do not have an OpenAI API key, I was trying a Hugging Face model.

But during execution I get an error from the runhouse module. The error originates at:

rails = LLMRails(config)

ERROR STACK

File ~/softwares/nemo-guardrails/nemoguardrails/rails/llm/llmrails.py:79, in LLMRails.__init__(self, config, llm, verbose)
     76 self.runtime = Runtime(config=config, verbose=verbose)
     78 # Next, we initialize the LLM engine.
---> 79 self._init_llm()
     80 self.runtime.register_action_param("llm", self.llm)
     82 # Next, we initialize the LLM Generate actions and register them.

File ~/softwares/nemo-guardrails/nemoguardrails/rails/llm/llmrails.py:132, in LLMRails._init_llm(self)
    129         if hasattr(provider_cls, "model"):
    130             kwargs["model"] = main_llm_config.model
--> 132 self.llm = provider_cls(**kwargs)

File ~/miniconda3/envs/llm_eval/lib/python3.9/site-packages/langchain/llms/self_hosted_hugging_face.py:188, in SelfHostedHuggingFaceLLM.__init__(self, **kwargs)
    176 """Construct the pipeline remotely using an auxiliary function.
    177 
    178 The load function needs to be importable to be imported
    179 and run on the server, i.e. in a module and not a REPL or closure.
    180 Then, initialize the remote inference function.
    181 """
    182 load_fn_kwargs = {
    183     "model_id": kwargs.get("model_id", DEFAULT_MODEL_ID),
    184     "task": kwargs.get("task", DEFAULT_TASK),
    185     "device": kwargs.get("device", 0),
    186     "model_kwargs": kwargs.get("model_kwargs", None),
    187 }
--> 188 super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)

File ~/miniconda3/envs/llm_eval/lib/python3.9/site-packages/langchain/llms/self_hosted.py:167, in SelfHostedPipeline.__init__(self, **kwargs)
    163 remote_load_fn = rh.function(fn=self.model_load_fn).to(
    164     self.hardware, reqs=self.model_reqs
    165 )
    166 _load_fn_kwargs = self.load_fn_kwargs or {}
--> 167 self.pipeline_ref = remote_load_fn.remote(**_load_fn_kwargs)
    169 self.client = rh.function(fn=self.inference_fn).to(
    170     self.hardware, reqs=self.model_reqs
    171 )

File ~/miniconda3/envs/llm_eval/lib/python3.9/site-packages/runhouse/rns/function.py:418, in Function.remote(self, *args, **kwargs)
    416 def remote(self, *args, **kwargs):
    417     warnings.warn("`remote()` is deprecated, use `run()` instead")
--> 418     run_obj = self.run(*args, **kwargs)
    419     return run_obj.name

File ~/miniconda3/envs/llm_eval/lib/python3.9/site-packages/runhouse/rns/function.py:441, in Function.run(self, run_name, *args, **kwargs)
    438 # Use the run_name if provided, otherwise create a new one using the Function's name
    439 run_name = run_name or Run._create_new_run_name(self.name)
--> 441 run_obj = self._call_fn_with_ssh_access(
    442     fn_type="remote", run_name=run_name, args=args, kwargs=kwargs
    443 )
    445 logger.info(f"Submitted remote call to cluster for {run_name}")
    446 return run_obj

File ~/miniconda3/envs/llm_eval/lib/python3.9/site-packages/runhouse/rns/function.py:492, in Function._call_fn_with_ssh_access(self, fn_type, resources, run_name, args, kwargs)
    489 if not isinstance(env_vars, dict):
    490     env_vars = _env_vars_from_file(env_vars)
--> 492 res = self.system._run_module(
    493     relative_path,
    494     module_name,
    495     fn_name,
    496     fn_type,
    497     resources,
    498     env_name,
    499     env_vars,
    500     run_name,
    501     args,
    502     kwargs,
    503 )
    504 return res

AttributeError: 'NoneType' object has no attribute '_run_module'

Code snippet

I changed this in config.yml:

models:
  - type: main
    engine: self_hosted_hugging_face
    model: decapoda-research/llama-7b-hf
Code to call LLM

from nemoguardrails import LLMRails, RailsConfig
import os

os.environ["HUGGINGFACEHUB_API_TOKEN"] = "HUGINGTOKEN"
def generate_message():
    # Give the path to the folder containing the rails
    config = RailsConfig.from_path("./examples/topical_rail")
    rails = LLMRails(config)

    # Define role and question to be asked
    new_message = rails.generate(messages=[{
        "role": "user",
        "content": "How can you help me?"
    }])
    print(new_message)


if __name__ == '__main__':
    generate_message()

I have tried to debug. It seems that in the Runhouse library, the self.system variable is None.
Can you please help resolve this issue?

Method `RailsConfig.from_path` only reads from the directory but the docstring implies it can read from `.co` file also

The docstring suggests the following:

Supports loading a from a single file, or from a directory.

But in the actual code, the path is only validated as being a directory or a YAML file; a direct path to a config file is not handled:

# Otherwise, if it's a folder, we iterate through all files.
if config_path.endswith(".yaml") or config_path.endswith(".yml"):
    with open(config_path) as f:
        raw_config = yaml.safe_load(f.read())
elif os.path.isdir(config_path):

In fact, even in the README.md example, it looks like we need to pass the path to the file (which is what I did), but it doesn't work:

config = RailsConfig.from_path("path/to/config")

I can make a PR to fix the documentation, or a PR that also handles a direct path to the config file. I just need to know which is the preferred way.

The third option is that I misunderstood and the config file should be YAML.
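
In the meantime, a workaround for a single .co file is to read it and use from_content (a sketch, using the same API shown in other issues here):

from nemoguardrails import RailsConfig

# Load a single Colang file by passing its content instead of its path.
with open("path/to/config.co") as f:
    config = RailsConfig.from_content(colang_content=f.read())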

Is it possible to use this without langchain?

I have a specific use case that this would enable, but it doesn't support langchain and probably won't for the foreseeable future.

It seems this project has a hard dependency on langchain in requirements.txt, but I'd like to know if that can be removed or swapped out.

Setting variables in Colang scripts

Hi, more of a Colang question (let me know if there's another place for these).

I'm trying to set a variable in a simple example script which looks like:

define user express greeting
    "hello"
    "hi"
    "how are you"

define user give name
    "James"
    "Julio"
    "Mary"
    "Putu"

define bot name greeting
    "Hey $name!"

define flow greeting
    user express greeting
    if $name
        bot name greeting
    else
        bot express greeting
        bot ask name

define flow give name
    user give name
    $name = $last_user_message
    bot name greeting

Here I'd expect $name to be set to the same value as $last_user_message, but it is set to None and the bot returns "Hey None!"

Any idea how this should work? I saw `set` in the Colang Language Reference doc, but that doesn't seem to do anything. I understand I can use actions for this, but I was hoping for a simpler way if possible. Any idea how I should set the $name variable here?
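
One pattern that should work is assigning the result of a custom action, since action return values can be bound to variables the same way $answer = execute qa_chain is used in other issues here (a sketch; extract_name is a hypothetical action):

async def extract_name(context: dict = None):
    # Actions receive the conversation context; return the value to assign.
    return context.get("last_user_message")

app.register_action(extract_name, name="extract_name")

The flow would then use $name = execute extract_name in place of the direct assignment.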

langchain error on Ubuntu 20.04 (python 3.9)

Traceback (most recent call last):
  File "/home/lightonh/NeMo-Guardrails/examples/demo_chain_as_action.py", line 76, in <module>
    demo()
  File "/home/lightonh/NeMo-Guardrails/examples/demo_chain_as_action.py", line 71, in demo
    result = app.generate(messages=history)
  File "/home/lightonh/NeMo-Guardrails/nemoguardrails/rails/llm/llmrails.py", line 162, in generate
    return asyncio.run(self.generate_async(prompt=prompt, messages=messages))
  File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/home/lightonh/NeMo-Guardrails/nemoguardrails/rails/llm/llmrails.py", line 121, in generate_async
    new_events = await self.runtime.generate_events(events)
  File "/home/lightonh/NeMo-Guardrails/nemoguardrails/flows/runtime.py", line 144, in generate_events
    next_events = await self._process_start_action(events)
  File "/home/lightonh/NeMo-Guardrails/nemoguardrails/flows/runtime.py", line 251, in _process_start_action
    parameters = inspect.signature(fn).parameters
  File "/usr/lib/python3.9/inspect.py", line 3113, in signature
    return Signature.from_callable(obj, follow_wrapped=follow_wrapped)
  File "/usr/lib/python3.9/inspect.py", line 2862, in from_callable
    return _signature_from_callable(obj, sigcls=cls,
  File "/usr/lib/python3.9/inspect.py", line 2328, in _signature_from_callable
    if _signature_is_builtin(obj):
  File "/usr/lib/python3.9/inspect.py", line 1875, in _signature_is_builtin
    obj in (type, object))
  File "pydantic/main.py", line 911, in pydantic.main.BaseModel.__eq__
  File "/home/lightonh/.virtualenvs/NeMo-Guardrails/lib/python3.9/site-packages/langchain/chains/base.py", line 249, in dict
    _dict["_type"] = self._chain_type
  File "/home/lightonh/.virtualenvs/NeMo-Guardrails/lib/python3.9/site-packages/langchain/chains/base.py", line 38, in _chain_type
    raise NotImplementedError("Saving not supported for this chain type.")
NotImplementedError: Saving not supported for this chain type.

Older LangChain Dependency ( 0.0.137 ) breaks asynchronous Execution

I am using langchain (^0.0.141) and it works perfectly for our needs, creating a task with asyncio.create_task and an asyncio.Queue() to listen on.

But NeMo-Guardrails==0.1.0 requires langchain==0.0.137, which throws a NotImplementedError every time the coroutine is called.

ERROR:asyncio:Task exception was never retrieved
future: <Task finished name='Task-12' coro=<Chain.acall() done, defined at path/to/.venv/lib/python3.10/site-packages/langchain/chains/base.py:120> exception=NotImplementedError()>
Traceback (most recent call last):
  File "path/to/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 154, in acall
    raise e
  File "path/to/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 148, in acall
    outputs = await self._acall(inputs)
  File "path/to/.venv/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 103, in _acall
    docs = await self._aget_docs(new_question, inputs)
  File "path/to/.venv/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 150, in _aget_docs
    docs = await self.retriever.aget_relevant_documents(question)
  File "path/to/.venv/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 236, in aget_relevant_documents
    docs = await self.vectorstore.asimilarity_search(
  File "path/to/.venv/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 84, in asimilarity_search
    raise NotImplementedError
NotImplementedError

Sentence Embeddings

First of all, amazing stuff here. And it's open source?? Icing on the cake.

Forgive me if it's a stupid question, as I haven't fully read the repo yet. But in theory, is it possible to replace the KNN with a sentence embeddings based search?

I feel like this would be better at understanding the guardrails and would place less emphasis on the quality of the provided examples.

Thanks!

Jupyter async error

Hi, I tried running the following in a Jupyter notebook (in VS Code):

elif mode == "chain_with_guardrails":
  history = [
    {"role": "user", "content": query}
  ]
  result = app.generate(messages=history)

However I got the following error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[79], line 15
     11 elif mode == "chain_with_guardrails":
     12   history = [
     13     {"role": "user", "content": query}
     14   ]
---> 15   result = app.generate(messages=history)
     19 print(result)

File ~/anaconda3/envs/nemo/lib/python3.10/site-packages/nemoguardrails/rails/llm/llmrails.py:157, in LLMRails.generate(self, prompt, messages)
    154     loop = None
    156 if loop and loop.is_running():
--> 157     raise RuntimeError(
    158         "You are using the sync `generate` inside async code. "
    159         "You should replace with `await generate_async(...)."
    160     )
    162 return asyncio.run(self.generate_async(prompt=prompt, messages=messages))

RuntimeError: You are using the sync `generate` inside async code. You should replace with `await generate_async(...).
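
Jupyter (and VS Code notebooks) always have a running event loop, so the sync generate refuses to run. Since notebook cells support top-level await, calling the async variant directly is the simplest fix (a sketch):

# In a notebook cell, top-level await is allowed:
result = await app.generate_async(messages=history)
print(result)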

Why does the system sometimes enter multiple similar LLMchains in conversation?

Issue:

Sometimes the system enters two LLMChains that execute similar actions in a conversation when using the 'generate' function. Is this expected or not?

Log for a single conversation:

--------------------------  user ask:  -------------------------
Who are you?

> Entering new LLMChain chain...

> Finished chain.
ask for bot's identity

bot respond about identity
  "I am an AI assistant designed to help you with anything you need. What can I assist you with today?"


> Entering new LLMChain chain...

> Finished chain.
bot respond with its name and purpose


> Entering new LLMChain chain...

> Finished chain.
"I am an AI assistant designed to help you with anything you need. My name is Homelander."
--------------------------  new message:  -------------------------
{'role': 'assistant', 'content': 'I am an AI assistant designed to help you with anything you need. My name is Homelander.'}

Log for a three rounds chat (appended in history):

-------------------------- 1st round user ask:  -------------------------
Hello there


> Entering new LLMChain chain...

> Finished chain.
express greeting

-------------------------- 1st round bot response:  -------------------------
{'role': 'assistant', 'content': 'Hey there!\nHow are you doing?'}

-------------------------- 2nd round user ask:  -------------------------
I feel very bad today.

> Entering new LLMChain chain...

> Finished chain.
express negative emotion
bot express empathy
  "I'm sorry to hear that. Is there anything I can do to help you feel better?"


> Entering new LLMChain chain...

> Finished chain.
bot express empathy


> Entering new LLMChain chain...

> Finished chain.
"I'm sorry to hear that. Is there anything I can do to help you feel better?"

-------------------------- 2nd round bot response:  -------------------------
{'role': 'assistant', 'content': "I'm sorry to hear that. Is there anything I can do to help you feel better?"}

-------------------------- 3rd round user ask:  -------------------------
What can you do for me?


> Entering new LLMChain chain...

> Finished chain.
ask about capabilities
bot respond about capabilities
  "As I mentioned earlier, I can do anything! Specifically, I can help you with tasks such as scheduling appointments, setting reminders, providing information on various topics, and even ordering food or making reservations. Is there anything specific you need help with?"


> Entering new LLMChain chain...

> Finished chain.
bot inform capabilities

-------------------------- 3rd round bot response:  -------------------------
{'role': 'assistant', 'content': 'I am a AI Homelander, I do whatever I want!'}

Repro steps:

  1. Create a config.yml that only contains a sample_conversation, as the configuration guide shows.
  2. Create a hello.co following the hello world example.
  3. Change the message under define bot inform capabilities to "I am a AI Homelander, I do whatever I want!"
  4. Define app and run:
config = RailsConfig.from_path("path/to/config/folder")
app = LLMRails(config=config, 
                    llm=ChatOpenAI(temperature=0, openai_api_key="<my openai key>"), 
                    verbose=True)
history = []
user_message = "<my input>"
history.append({"role": "user", "content": user_message})
bot_message = app.generate(messages=history)
print(bot_message)
history.append(bot_message)

Concerns:

  1. Only the 1st round user ask in the three-round chat entered one LLMChain; the rest all entered two LLMChains.
  2. The two LLMChains execute similar actions, but not exactly the same ones. For example, for the 2nd round user ask, one chain is express negative emotion + bot express empathy, while the other is only bot express empathy.
  3. The responses in the two chains can be very different. For example, for the 3rd round user ask, one chain responds very formally about its responsibilities, while the other one returns nothing; interestingly, the returned bot_message does use the defined Homelander material.
  4. I tried just using LangChain's ChatOpenAI with ConversationChain: only one chain is entered in each chat, so it doesn't look like an issue on the LangChain side.

Package and versions:

  • Python 3.10
  • Langchain 0.0.169

Can you please help identify the expected/unexpected behaviors above?

Event loop is closed

I've run the demo .py file. It returns the message I want correctly, but always raises a RuntimeError: "Event loop is closed". How does this happen and what can I do?

Fact check: Evidence is empty

Hello,

I would like to use NeMo to check facts, but I am having some trouble understanding why "relevant_chunks" is always empty in my case when check_facts runs:

COLANG_CONFIG = """
# Main Flow
define flow
    user ...
    $answer = execute qa_chain(query=$last_user_message)
    bot provide $answer
    $accurate = execute check_facts
    if not $accurate
        bot remove last message
        bot inform answer unknown

define bot remove last message
  "(remove last message)"
"""

def _get_qa_chain(llm, filters):
    """Initializes a QA chain using the jobs report.

    It uses OpenAI embeddings.
    """

    qa_chain = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=pinecone_langchain.as_retriever(search_kwargs={'filter': filters, "k": 8}),
    )

    return qa_chain


def demo():
    """Demo of using a chain as a custom action."""
    config = RailsConfig.from_content(COLANG_CONFIG, YAML_CONFIG)
    app = LLMRails(config, verbose=True)

    # Filters
    filters = ...

    # Create and register the chain directly as an action.
    qa_chain = _get_qa_chain(app.llm, filters)
    app.register_action(qa_chain, name="qa_chain")

    # mode = "chain"
    # mode = "chain_with_guardrails"
    mode = "chat_with_guardrails"

    if mode == "chain":
        user_message = input("> ")
        current_implementation_response = current_implementation(user_message, filters)
        result = qa_chain.run(user_message)
        print(result)

    elif mode == "chain_with_guardrails":
        user_message = input("> ")
        current_implementation_response = current_implementation(user_message, filters)
        history = [
            {"role": "user", "content": user_message}
        ]
        result = app.generate(messages=history)
        print(f"\033[92mWith NEMO---> {result['content']}\033[0m")

    elif mode == "chat_with_guardrails":
        history = []
        while True:
            user_message = input("> ")
            current_implementation_response = current_implementation(user_message, filters)

            history.append({"role": "user", "content": user_message})
            bot_message = app.generate(messages=history)
            history.append(bot_message)

            # We print bot messages in green.
            print(f"\033[92mWith NEMO---> {bot_message['content']}\033[0m")
            print("---------")

If I print the context received by the check_facts() function, I can see that relevant_chunks is empty:

{'last_user_message': 'Is XXX better than XXX?', 'last_bot_message': 'It depends on what "better" means to you. Both XXX and XXX collect data from the same public data sources. XXX and XXX have different ...', 'answer': 'It depends on what "better" means to you. Both XXX and XXX collect data from the same public data sources.... 'relevant_chunks': ''}

Since relevant_chunks is empty, the evidence is empty, so check_facts always returns True.
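
A likely explanation: check_facts verifies the answer against the relevant_chunks context variable, which is normally populated by the built-in knowledge-base retrieval; a custom qa_chain bypasses that step, so the evidence stays empty. A possible workaround (a sketch, assuming ActionResult with context_updates from nemoguardrails.actions.actions; not verified against this setup) is to surface the retrieved chunks from the action:

from nemoguardrails.actions.actions import ActionResult

async def qa_chain_with_chunks(query):
    # Retrieve the evidence explicitly so check_facts has something to verify.
    docs = qa_chain.retriever.get_relevant_documents(query)
    answer = qa_chain.run(query)
    return ActionResult(
        return_value=answer,
        context_updates={"relevant_chunks": "\n".join(d.page_content for d in docs)},
    )

app.register_action(qa_chain_with_chunks, name="qa_chain")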

Events history cache clarifications

Hi guys, thanks for the latest updates!

We are building a chatbot with NeMo at the core.
Again, my humble kudos to you! In the past few months, we've been testing pretty much every tool out there suggested to help build secure LLM apps. Building our product with NeMo at the core was by far the most comfortable yet rewarding way to go :)

I'm trying to understand several things regarding the behavior of the events history cache. Does the generate function expect to receive a history of all the messages so far (and how does that scale)? Only the last n messages? Or just the latest one? I've noticed that it's an incremental process, regardless of how many messages are in the history: the app generates an event in response to the last user message and keeps accumulating them. This made me think that we should only send the last user message. However, in the linked example it seems like you are suggesting to keep accumulating the messages and send all of them to the generate function. When doing so, the cache keys chain all the user messages together along with the last message (delimited by ":"). This feels like it wasn't the intention.

What's the best way to approach this?

I also noticed this comment regarding the cache:

We keep a cache of the events history associated with a sequence of user messages.
TODO: when we update the interface to allow returning a "state object", this
should be removed.

Anything on what's coming next and how to prepare for that change?
Any production best practices estimation?

Again, thank you so much!
Bar.

.co file doesn't work in server mode

I want the bot to block some words and not answer. In chat mode, the .co file works, but in server mode it doesn't. I'm sure I have selected the correct configuration file directory.

Enable/disable rails in different scenarios

Hi, I was wondering if there is any way in which rails can be enabled or disabled for specific flows. If there isn't, I believe it would be helpful if rails had a certain indicator in Colang files, maybe `define rail flow` instead of just `flow`, or maybe `define rail` (flow is implied); those rails could then be raised or lowered in flow definitions. With this feature, compliance-heavy industries would have a much easier time ensuring compliance with their chatbots.

Currently, this is possible using context variables and external logic (roughly as sketched below), but it basically involves loading all the possible rail configs, which takes quite a while, and I imagine is not great for performance (I haven't actually checked the performance impact yet).
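
For illustration, the context-variable approach looks roughly like this in Colang (a sketch; $fact_checking_enabled is a hypothetical variable set by the application, and the flow body mirrors the fact-checking example from another issue here):

define flow
  user ...
  bot respond
  if $fact_checking_enabled
    $accurate = execute check_facts
    if not $accurate
      bot remove last message
      bot inform answer unknown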

How to use streaming response?

First of all, thank you for developing a great library.
It's very cool and easy to use, and we would like to incorporate it into our chatbot projects!

I would like to generate answers in real time, equivalent to the OpenAI stream option.
How can I use it?
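
The closest I've found so far is passing a streaming LangChain model to LLMRails, which at least prints the underlying LLM tokens as they are generated (a sketch; note that the intermediate task prompts stream too, and generate still returns the complete message at the end):

from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("path/to/config")  # hypothetical path
llm = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
app = LLMRails(config, llm=llm)  # LLMRails accepts a custom LangChain LLM

Is there a better way?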

How to use Nested flows

Hi, I am looking for use cases where one flow can call another flow. How can we do that? Will it be supported in the future?

Installation fails on windows 64-bit with 'ERROR: Could not build wheels for annoy, which is required to install pyproject.toml-based projects' error

Installation of nemoguardrails fails on windows 10 64-bit with 'ERROR: Could not build wheels for annoy, which is required to install pyproject.toml-based projects'. Here are more details about the env for reproducibility.

Python Env - Anaconda Distribution with Python 3.11.3
OS - Windows 10 Enterprise 64-bit
Full Error -

"""
pip install nemoguardrails
Collecting nemoguardrails
Using cached nemoguardrails-0.3.0-py3-none-any.whl (13.8 MB)
Collecting pydantic~=1.10.6 (from nemoguardrails)
Downloading pydantic-1.10.12-cp311-cp311-win_amd64.whl (2.1 MB)
---------------------------------------- 2.1/2.1 MB 12.2 MB/s eta 0:00:00
Collecting aiohttp==3.8.4 (from nemoguardrails)
Downloading aiohttp-3.8.4-cp311-cp311-win_amd64.whl (317 kB)
---------------------------------------- 317.2/317.2 kB 20.5 MB/s eta 0:00:00
Collecting langchain==0.0.167 (from nemoguardrails)
Using cached langchain-0.0.167-py3-none-any.whl (809 kB)
Collecting requests>=2.31.0 (from nemoguardrails)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting typer==0.7.0 (from nemoguardrails)
Using cached typer-0.7.0-py3-none-any.whl (38 kB)
Requirement already satisfied: PyYAML~=6.0 in c:\users\archanadixit\anaconda3\lib\site-packages (from nemoguardrails) (6.0)
Collecting setuptools~=65.5.1 (from nemoguardrails)
Using cached setuptools-65.5.1-py3-none-any.whl (1.2 MB)
Collecting annoy==1.17.1 (from nemoguardrails)
Using cached annoy-1.17.1.tar.gz (647 kB)
Preparing metadata (setup.py) ... done
Collecting sentence-transformers==2.2.2 (from nemoguardrails)
Using cached sentence-transformers-2.2.2.tar.gz (85 kB)
Preparing metadata (setup.py) ... done
Collecting fastapi==0.96.0 (from nemoguardrails)
Using cached fastapi-0.96.0-py3-none-any.whl (57 kB)
Collecting starlette==0.27.0 (from nemoguardrails)
Using cached starlette-0.27.0-py3-none-any.whl (66 kB)
Collecting uvicorn==0.22.0 (from nemoguardrails)
Using cached uvicorn-0.22.0-py3-none-any.whl (58 kB)
Collecting httpx==0.23.3 (from nemoguardrails)
Using cached httpx-0.23.3-py3-none-any.whl (71 kB)
Collecting simpleeval==0.9.13 (from nemoguardrails)
Using cached simpleeval-0.9.13-py2.py3-none-any.whl (15 kB)
Collecting typing-extensions==4.5.0 (from nemoguardrails)
Using cached typing_extensions-4.5.0-py3-none-any.whl (27 kB)
Requirement already satisfied: Jinja2==3.1.2 in c:\users\archanadixit\anaconda3\lib\site-packages (from nemoguardrails) (3.1.2)
Requirement already satisfied: attrs>=17.3.0 in c:\users\archanadixit\anaconda3\lib\site-packages (from aiohttp==3.8.4->nemoguardrails) (22.1.0)
Requirement already satisfied: charset-normalizer<4.0,>=2.0 in c:\users\archanadixit\anaconda3\lib\site-packages (from aiohttp==3.8.4->nemoguardrails) (2.0.4)
Requirement already satisfied: multidict<7.0,>=4.5 in c:\users\archanadixit\anaconda3\lib\site-packages (from aiohttp==3.8.4->nemoguardrails) (6.0.2)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in c:\users\archanadixit\anaconda3\lib\site-packages (from aiohttp==3.8.4->nemoguardrails) (4.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in c:\users\archanadixit\anaconda3\lib\site-packages (from aiohttp==3.8.4->nemoguardrails) (1.8.1)
Requirement already satisfied: frozenlist>=1.1.1 in c:\users\archanadixit\anaconda3\lib\site-packages (from aiohttp==3.8.4->nemoguardrails) (1.3.3)
Requirement already satisfied: aiosignal>=1.1.2 in c:\users\archanadixit\anaconda3\lib\site-packages (from aiohttp==3.8.4->nemoguardrails) (1.2.0)
Requirement already satisfied: certifi in c:\users\archanadixit\anaconda3\lib\site-packages (from httpx==0.23.3->nemoguardrails) (2023.5.7)
Collecting httpcore<0.17.0,>=0.15.0 (from httpx==0.23.3->nemoguardrails)
Using cached httpcore-0.16.3-py3-none-any.whl (69 kB)
Collecting rfc3986[idna2008]<2,>=1.3 (from httpx==0.23.3->nemoguardrails)
Using cached rfc3986-1.5.0-py2.py3-none-any.whl (31 kB)
Requirement already satisfied: sniffio in c:\users\archanadixit\anaconda3\lib\site-packages (from httpx==0.23.3->nemoguardrails) (1.2.0)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\archanadixit\anaconda3\lib\site-packages (from Jinja2==3.1.2->nemoguardrails) (2.1.1)
Requirement already satisfied: SQLAlchemy<3,>=1.4 in c:\users\archanadixit\anaconda3\lib\site-packages (from langchain==0.0.167->nemoguardrails) (1.4.39)
Collecting dataclasses-json<0.6.0,>=0.5.7 (from langchain==0.0.167->nemoguardrails)
Using cached dataclasses_json-0.5.13-py3-none-any.whl (26 kB)
Requirement already satisfied: numexpr<3.0.0,>=2.8.4 in c:\users\archanadixit\anaconda3\lib\site-packages (from langchain==0.0.167->nemoguardrails) (2.8.4)
Requirement already satisfied: numpy<2,>=1 in c:\users\archanadixit\anaconda3\lib\site-packages (from langchain==0.0.167->nemoguardrails) (1.24.3)
Collecting openapi-schema-pydantic<2.0,>=1.2 (from langchain==0.0.167->nemoguardrails)
Using cached openapi_schema_pydantic-1.2.4-py3-none-any.whl (90 kB)
Requirement already satisfied: tenacity<9.0.0,>=8.1.0 in c:\users\archanadixit\anaconda3\lib\site-packages (from langchain==0.0.167->nemoguardrails) (8.2.2)
Requirement already satisfied: tqdm>=4.48.0 in c:\users\archanadixit\anaconda3\lib\site-packages (from langchain==0.0.167->nemoguardrails) (4.65.0)
Collecting transformers<5.0.0,>=4.6.0 (from sentence-transformers==2.2.2->nemoguardrails)
Downloading transformers-4.31.0-py3-none-any.whl (7.4 MB)
---------------------------------------- 7.4/7.4 MB 14.8 MB/s eta 0:00:00
Collecting torch>=1.6.0 (from sentence-transformers==2.2.2->nemoguardrails)
Downloading torch-2.0.1-cp311-cp311-win_amd64.whl (172.3 MB)
---------------------------------------- 172.3/172.3 MB 6.5 MB/s eta 0:00:00
Collecting torchvision (from sentence-transformers==2.2.2->nemoguardrails)
Downloading torchvision-0.15.2-cp311-cp311-win_amd64.whl (1.2 MB)
---------------------------------------- 1.2/1.2 MB 4.2 MB/s eta 0:00:00
Requirement already satisfied: scikit-learn in c:\users\archanadixit\anaconda3\lib\site-packages (from sentence-transformers==2.2.2->nemoguardrails) (1.2.2)
Requirement already satisfied: scipy in c:\users\archanadixit\anaconda3\lib\site-packages (from sentence-transformers==2.2.2->nemoguardrails) (1.10.1)
Requirement already satisfied: nltk in c:\users\archanadixit\anaconda3\lib\site-packages (from sentence-transformers==2.2.2->nemoguardrails) (3.7)
Collecting sentencepiece (from sentence-transformers==2.2.2->nemoguardrails)
Downloading sentencepiece-0.1.99-cp311-cp311-win_amd64.whl (977 kB)
---------------------------------------- 977.5/977.5 kB 7.7 MB/s eta 0:00:00
Collecting huggingface-hub>=0.4.0 (from sentence-transformers==2.2.2->nemoguardrails)
Using cached huggingface_hub-0.16.4-py3-none-any.whl (268 kB)
Requirement already satisfied: anyio<5,>=3.4.0 in c:\users\archanadixit\anaconda3\lib\site-packages (from starlette==0.27.0->nemoguardrails) (3.5.0)
Requirement already satisfied: click<9.0.0,>=7.1.1 in c:\users\archanadixit\anaconda3\lib\site-packages (from typer==0.7.0->nemoguardrails) (8.0.4)
Collecting h11>=0.8 (from uvicorn==0.22.0->nemoguardrails)
Using cached h11-0.14.0-py3-none-any.whl (58 kB)
Requirement already satisfied: idna<4,>=2.5 in c:\users\archanadixit\anaconda3\lib\site-packages (from requests>=2.31.0->nemoguardrails) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\archanadixit\anaconda3\lib\site-packages (from requests>=2.31.0->nemoguardrails) (1.26.16)
Requirement already satisfied: colorama in c:\users\archanadixit\anaconda3\lib\site-packages (from click<9.0.0,>=7.1.1->typer==0.7.0->nemoguardrails) (0.4.6)
Collecting marshmallow<4.0.0,>=3.18.0 (from dataclasses-json<0.6.0,>=0.5.7->langchain==0.0.167->nemoguardrails)
Using cached marshmallow-3.20.1-py3-none-any.whl (49 kB)
Collecting typing-inspect<1,>=0.4.0 (from dataclasses-json<0.6.0,>=0.5.7->langchain==0.0.167->nemoguardrails)
Using cached typing_inspect-0.9.0-py3-none-any.whl (8.8 kB)
Requirement already satisfied: filelock in c:\users\archanadixit\anaconda3\lib\site-packages (from huggingface-hub>=0.4.0->sentence-transformers==2.2.2->nemoguardrails) (3.9.0)
Requirement already satisfied: fsspec in c:\users\archanadixit\anaconda3\lib\site-packages (from huggingface-hub>=0.4.0->sentence-transformers==2.2.2->nemoguardrails) (2023.3.0)
Requirement already satisfied: packaging>=20.9 in c:\users\archanadixit\anaconda3\lib\site-packages (from huggingface-hub>=0.4.0->sentence-transformers==2.2.2->nemoguardrails) (23.0)
Requirement already satisfied: greenlet!=0.4.17 in c:\users\archanadixit\anaconda3\lib\site-packages (from SQLAlchemy<3,>=1.4->langchain==0.0.167->nemoguardrails) (2.0.1)
Requirement already satisfied: sympy in c:\users\archanadixit\anaconda3\lib\site-packages (from torch>=1.6.0->sentence-transformers==2.2.2->nemoguardrails) (1.11.1)
Requirement already satisfied: networkx in c:\users\archanadixit\anaconda3\lib\site-packages (from torch>=1.6.0->sentence-transformers==2.2.2->nemoguardrails) (2.8.4)
Requirement already satisfied: regex!=2019.12.17 in c:\users\archanadixit\anaconda3\lib\site-packages (from transformers<5.0.0,>=4.6.0->sentence-transformers==2.2.2->nemoguardrails) (2022.7.9)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers<5.0.0,>=4.6.0->sentence-transformers==2.2.2->nemoguardrails)
Downloading tokenizers-0.13.3-cp311-cp311-win_amd64.whl (3.5 MB)
---------------------------------------- 3.5/3.5 MB 7.6 MB/s eta 0:00:00
Collecting safetensors>=0.3.1 (from transformers<5.0.0,>=4.6.0->sentence-transformers==2.2.2->nemoguardrails)
Downloading safetensors-0.3.1-cp311-cp311-win_amd64.whl (263 kB)
---------------------------------------- 263.7/263.7 kB 15.8 MB/s eta 0:00:00
Requirement already satisfied: joblib in c:\users\archanadixit\anaconda3\lib\site-packages (from nltk->sentence-transformers==2.2.2->nemoguardrails) (1.2.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\users\archanadixit\anaconda3\lib\site-packages (from scikit-learn->sentence-transformers==2.2.2->nemoguardrails) (2.2.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in c:\users\archanadixit\anaconda3\lib\site-packages (from torchvision->sentence-transformers==2.2.2->nemoguardrails) (9.4.0)
Requirement already satisfied: mypy-extensions>=0.3.0 in c:\users\archanadixit\anaconda3\lib\site-packages (from typing-inspect<1,>=0.4.0->dataclasses-json<0.6.0,>=0.5.7->langchain==0.0.167->nemoguardrails) (0.4.3)
Requirement already satisfied: mpmath>=0.19 in c:\users\archanadixit\anaconda3\lib\site-packages (from sympy->torch>=1.6.0->sentence-transformers==2.2.2->nemoguardrails) (1.2.1)
Building wheels for collected packages: annoy, sentence-transformers
Building wheel for annoy (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [22 lines of output]
C:\Users\ArchanaDixit\anaconda3\Lib\site-packages\setuptools\__init__.py:84: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!

          ********************************************************************************
          Requirements should be satisfied by a PEP 517 installer.
          If you are using pip, you can try `pip install --use-pep517`.
          ********************************************************************************

  !!
    dist.fetch_build_eggs(dist.setup_requires)
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build\lib.win-amd64-cpython-311
  creating build\lib.win-amd64-cpython-311\annoy
  copying annoy\__init__.py -> build\lib.win-amd64-cpython-311\annoy
  copying annoy\__init__.pyi -> build\lib.win-amd64-cpython-311\annoy
  copying annoy\py.typed -> build\lib.win-amd64-cpython-311\annoy
  running build_ext
  building 'annoy.annoylib' extension
  error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for annoy
Running setup.py clean for annoy
Building wheel for sentence-transformers (setup.py) ... done
Created wheel for sentence-transformers: filename=sentence_transformers-2.2.2-py3-none-any.whl size=125960 sha256=2edcca3561e70321af43ae689d0e6a66608d6657bfba4b64130a109a0a80f48d
Stored in directory: c:\users\archanadixit\appdata\local\pip\cache\wheels\ff\27\bf\ffba8b318b02d7f691a57084ee154e26ed24d012b0c7805881
Successfully built sentence-transformers
Failed to build annoy
ERROR: Could not build wheels for annoy, which is required to install pyproject.toml-based projects
"""

On Adding Custom Actions

Hello Guys,

I'm trying to add a custom action to a custom flow. Let's say I want to extract fulfilment-table-like information from the user interaction. While this table is not complete, I want the flow to keep looping.

My flow .co file looks like this:

define flow invoice
   user asks about invoice details
   $fulfilment_invoice_details = execute check_fulfilment()
   bot responds about invoice details
   while not $fulfilment_invoice_details
     bot asks about fulfilment credentials
     user responds to fulfilment credentials
     $fulfilment_invoice_details = execute check_fulfilment()
   bot confirms invoice details
   user says goodbye

My action definition looks like this (draft):

from typing import Optional

from langchain.llms.base import BaseLLM
from nemoguardrails.actions import action


@action()
async def check_fulfilment(context: Optional[dict] = None, llm: Optional[BaseLLM] = None):
    bot_response = context.get("last_bot_message")

    # create_extraction_chain / fulfilment_table_invoice come from my extraction setup.
    chain = create_extraction_chain(llm, fulfilment_table_invoice)
    output = chain.predict_and_parse(text=bot_response)["data"]

    fulfilment_table_result = f"The Fulfilment Table result is: {output}"

    # TODO: if all details in the fulfilment table are present, return True
    return False

I instantiate the rails like this:

config = RailsConfig.from_path(path_to_nemo_bot)
rails = LLMRails(config, verbose=True)
rails.register_action(check_fulfilment, name="check_fulfilment")

However, no matter what I try, I cannot get the system to go into the action code and run the extraction chain.
I'm a super rookie with NeMo Guardrails, so any help would be much appreciated.
I don't think I missed anything? Is there a better way to extract fulfilment-table-like information (name / email / account IDs, etc.)?
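
For comparison, this is the minimal wiring I've been testing against (a sketch; the path is hypothetical and the action body is a placeholder):

from typing import Optional

from nemoguardrails import LLMRails, RailsConfig
from nemoguardrails.actions import action

@action()
async def check_fulfilment(context: Optional[dict] = None):
    # Placeholder: inspect the last bot message and report completeness.
    bot_response = (context or {}).get("last_bot_message", "")
    return "invoice" in bot_response.lower()

config = RailsConfig.from_path("path/to/bot")  # hypothetical path
rails = LLMRails(config, verbose=True)
rails.register_action(check_fulfilment, name="check_fulfilment")

new_message = rails.generate(messages=[
    {"role": "user", "content": "I have a question about my invoice"}
])

My current suspicion is that the action only runs if the user message is actually matched to the "user asks about invoice details" intent, so maybe I need more example utterances under define user asks about invoice details?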

Running with Azure supported?

I've been trying to use NeMo Guardrails with Azure but keep getting the dreaded "openai.error.InvalidRequestError: Resource not found" error. Outside of NeMo Guardrails, my Azure credentials work with the OpenAI Python API as well as the LangChain Azure APIs (both AzureOpenAI and AzureChatOpenAI).

I tried the hello_world example and ran the CLI command: nemoguardrails chat --config=config/hello_world. I also tried running the basic usage example for the Python API, and have tried setting the engine parameter in config.yml to both openai and azure.

The documentation describes the engine parameter in the config as: "engine: the LLM provider; currently, only 'openai' is supported."

But the documentation also says: "You can use any LLM provider that is supported by LangChain, e.g., ai21, aleph_alpha, anthropic, anyscale, azure ..."

So, is Azure supported?

My specifics:
Python 3.10.11
nemoguardrails==0.3.0
openai==0.27.7
langchain==0.0.167
conda 22.9.0
macOS Ventura (M1 Max chip)
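
For completeness, this is the direct-LLM route I was about to try next (a sketch with hypothetical deployment and resource names), based on LLMRails accepting a custom LangChain LLM:

from langchain.chat_models import AzureChatOpenAI
from nemoguardrails import LLMRails, RailsConfig

llm = AzureChatOpenAI(
    deployment_name="my-gpt-35-deployment",                   # hypothetical
    openai_api_base="https://my-resource.openai.azure.com/",  # hypothetical
    openai_api_version="2023-05-15",
    openai_api_key="...",
)
config = RailsConfig.from_path("config/hello_world")
app = LLMRails(config, llm=llm)  # bypasses the engine setting in config.yml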
