
Comments (11)

chilicrabcakes commented on August 18, 2024

Hey, I found a workaround for this. It can be solved using LangChain in the Python API: define your LLM in LangChain using the AzureChatOpenAI class and pass it as an argument to the LLMRails constructor.

For example,

from langchain.chat_models import AzureChatOpenAI
from nemoguardrails import LLMRails, RailsConfig

llm = AzureChatOpenAI(<your-parameters-here>)

config = RailsConfig.from_path("path/to/config")
app = LLMRails(config, llm=llm)

new_message = app.generate(messages=[{
    "role": "user",
    "content": "Hello! What can you do for me?"
}])

An additional error I noticed here is that app.generate kept telling me that I can't run synchronous calls inside async code; I'm not really sure why. I got it running by replacing app.generate with app.generate_async, but that's probably not the best way to solve this problem.

from nemo-guardrails.

drazvan commented on August 18, 2024

Ok, I think I'm starting to understand the issue. I can't test, so bear with me. Can you try adding the following:

app = LLMRails(config=config, llm=chat_model)
app.runtime.register_action_param("llm", chat_model)
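For readers curious why registering the action parameter helps: here is a minimal, simplified model of a dispatcher that fills action parameters from a registry (this is an illustrative sketch, not the actual NeMo Guardrails implementation). An unregistered "llm" parameter defaults to None, which matches the 'NoneType' errors reported in this thread:

```python
import asyncio
import inspect

class Runtime:
    """Toy dispatcher: actions get their parameters from a registry."""

    def __init__(self):
        self.action_params = {}

    def register_action_param(self, name, value):
        self.action_params[name] = value

    async def execute_action(self, fn):
        # Fill each declared parameter from the registry; anything
        # not registered silently becomes None.
        params = {
            name: self.action_params.get(name)
            for name in inspect.signature(fn).parameters
        }
        return await fn(**params)

async def check_jailbreak(llm):
    # Mimics the failure seen in the tracebacks above.
    if llm is None:
        raise AttributeError("'NoneType' object has no attribute 'agenerate_prompt'")
    return "ok"

runtime = Runtime()
runtime.register_action_param("llm", object())  # the fix: register the LLM
print(asyncio.run(runtime.execute_action(check_jailbreak)))  # prints: ok
```

Without the register_action_param call, the toy action receives llm=None and raises the same AttributeError as in the tracebacks below.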


ansumanparija007 commented on August 18, 2024

@tanujjain @chilicrabcakes I'm facing a similar issue while trying to work with Guardrails:
Parameter temperature does not exist for NoneType
Error 'NoneType' object has no attribute 'agenerate_prompt' while execution check_jailbreak
Traceback (most recent call last):
  File "C:\Python311\Lib\site-packages\nemoguardrails\actions\action_dispatcher.py", line 125, in execute_action
    result = await fn(**params)
             ^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\nemoguardrails\actions\jailbreak_check.py", line 50, in check_jailbreak
    check = await llm_call(llm, prompt)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\nemoguardrails\actions\llm\utils.py", line 31, in llm_call
    result = await llm.agenerate_prompt(

Any suggestions? Have you faced this issue?


ansumanparija007 commented on August 18, 2024

Hi @drazvan, I am trying to use the following config to connect to Azure OpenAI:

chat_model = AzureChatOpenAI(
    openai_api_type="azure",
    openai_api_version="2023-03-15-preview",
    openai_api_key=azure_openai_key,
    deployment_name=azure_openai_model,
    openai_api_base=azure_openai_endpoint
)
app = LLMRails(config=config, llm=chat_model)

new_message = app.generate(messages=[{
    "role": "user",
    "content": "What's the latest fashion trend?"
}])

I hope Guardrails currently supports Azure OpenAI; please let me know.


drazvan commented on August 18, 2024

@ansumanparija007: it looks like llm is set to None in your case. Can you provide more details on the config?


ansumanparija007 commented on August 18, 2024

Any luck with this, @drazvan?


drazvan commented on August 18, 2024

I can't test this directly as I don't have an Azure key. Can you confirm the chat_model instance works correctly, i.e. completion = chat_model("some text")? And if it does, can you share the complete error stack trace? Thanks.


ansumanparija007 commented on August 18, 2024

Yes, chat_model works fine, but while working with Guardrails the following error is thrown:
Parameter temperature does not exist for NoneType
Error 'NoneType' object has no attribute 'agenerate_prompt' while execution check_jailbreak
Traceback (most recent call last):
  File "C:\Users\testcopilot2\guardrails\chatbot-guardrails\env\Lib\site-packages\nemoguardrails\actions\action_dispatcher.py", line 125, in execute_action
    result = await fn(**params)
             ^^^^^^^^^^^^^^^^^^
  File "C:\Users\testcopilot2\guardrails\chatbot-guardrails\env\Lib\site-packages\nemoguardrails\actions\jailbreak_check.py", line 50, in check_jailbreak
    check = await llm_call(llm, prompt)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\testcopilot2\guardrails\chatbot-guardrails\env\Lib\site-packages\nemoguardrails\actions\llm\utils.py", line 31, in llm_call
    result = await llm.agenerate_prompt(
             ^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'agenerate_prompt'
new_message: {'role': 'assistant', 'content': "I'm sorry, an internal error has occurred."}


ansumanparija007 commented on August 18, 2024

Thanks @drazvan, this error is gone. I'll try to play with it and let you know.


ansumanparija007 commented on August 18, 2024

Hi @drazvan, while running the hallucination check it says: Hallucination rail can only be used with OpenAI LLM engines. Current LLM engine is AzureChatOpenAI. Any ETA on when it will be supported for other LLM engines?


ishaan-jaff commented on August 18, 2024

Hi @drazvan @ansumanparija007 @tanujjain, I believe we can help with this issue. I'm the maintainer of LiteLLM: https://github.com/BerriAI/litellm

TL;DR:
We allow you to use any LLM as a drop-in replacement for gpt-3.5-turbo.
If you don't have direct access to the LLM, you can use the LiteLLM proxy to make requests to it.

You can use LiteLLM in the following ways:

With your own API KEY:

This calls the provider API directly

from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-key"
os.environ["COHERE_API_KEY"] = "your-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

Using the LiteLLM Proxy with a LiteLLM key

This is great if you don't have access to Claude but want to use the open-source LiteLLM proxy to access it.

from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "sk-litellm-5b46387675a944d2"  # [OPTIONAL] replace with your openai key
os.environ["COHERE_API_KEY"] = "sk-litellm-5b46387675a944d2"  # [OPTIONAL] replace with your cohere key

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

