
Comments (8)

austinmw commented on August 18, 2024

Hi, does setting model: gpt-4 work? I set this, but when I ask the LLM what model it is, it replies, "I am an AI language model developed by OpenAI. More specifically, I'm powered by gpt-3.5-turbo."
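
For reference, here is a minimal sketch of what I'm doing (illustrative only; the from_content / generate calls mirror the ones used later in this thread):

from nemoguardrails import LLMRails, RailsConfig

# Minimal sketch: the chat model is set to gpt-4 in the YAML, but the reply
# still claims to be powered by gpt-3.5-turbo.
YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: gpt-4
"""

config = RailsConfig.from_content(yaml_content=YAML_CONFIG)
rails = LLMRails(config)

result = rails.generate(messages=[{"role": "user", "content": "What model are you?"}])
print(result["content"])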

baravit commented on August 18, 2024

+1.
I'll add that even simple things, such as "context" messages and the register_prompt_context action, don't seem to work when running NeMo with a chat model (gpt-3.5/4) as the main model.
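
For example, something along these lines (a sketch; names like current_date and user_name are just illustrative, and the register_prompt_context signature is from memory):

from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("path/to/config")
rails = LLMRails(config)

# 1) Register a value that prompts can reference as context.
rails.register_prompt_context("current_date", "2024-08-18")

# 2) Pass per-request context via a "context" message.
result = rails.generate(messages=[
    {"role": "context", "content": {"user_name": "Bar"}},
    {"role": "user", "content": "Hi there!"},
])
print(result["content"])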

drazvan commented on August 18, 2024

Hi @aitamar and @baravit!

In theory, the same guardrails configuration should work with both completion and chat models. Completion models (e.g., text-davinci-003) work better because that's what we started with. We still need to experiment some more with chat models to figure out the best way to prompt them. We're also actively testing a way of prompting them with a single call, rather than making three calls. So, in the next month or so, we should see some improvements.

That being said, can you share some quick examples here? I'd like to debug this a bit with you if you have time. Just the config + the dialog is enough. And just in case you're not doing this already, you should use nemoguardrails chat --config=path/to/config --verbose to debug.

Thanks!

baravit commented on August 18, 2024

Hi @drazvan!
Thank you for your quick reply:)
We would be honored to have a debug session with you!

In the meantime, here is an example of Nemo breaking out of a flow after generating user intent:

from nemoguardrails import LLMRails, RailsConfig
from dotenv import load_dotenv


load_dotenv()

YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""
COLANG_CONFIG = """
define user ask question
  "What is the meaning of life?"

define flow
  user express greeting
  bot express greeting

  
define flow
  priority 1000
  user ask question
  bot reply "Yo Yo Yo"

"""

def main():
    # Build the rails config from the inline Colang + YAML above and enable verbose logging.
    config = RailsConfig.from_content(COLANG_CONFIG, YAML_CONFIG)
    APP = LLMRails(config, verbose=True)

    history = [
        {"role": "user", "content": "What is the meaning of life?"}
    ]
    result = APP.generate(messages=history)
    print(result)

if __name__ == "__main__":
    main()

Just run it with gpt-3.5-turbo and then switch to text-davinci-003 to compare.
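
The only change for the comparison run is the model line in YAML_CONFIG:

# Same script, with only the model swapped:
YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""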

Regarding the context message and register_prompt_context, I've tested these in an isolated environment and they seem to work just fine. I need to conduct further testing on the issue within our infrastructure. I'll keep you posted if anything comes up.

We are familiar with the debugging options and use them quite often, but our backend includes some other core-logic components between Nemo and the client. Therefore, in some cases, we simply can't employ that approach:)

Thank you again for your help!
Bar.

alvaroNicasource commented on August 18, 2024

+1, I'm having the same issue.
model: gpt-3.5-turbo

baravit commented on August 18, 2024

Hi @drazvan, hope you are doing well.
Following up on this: is there anything new regarding our issue? Were you able to reproduce it?
From our latest research, it seems the problem occurs when we send NeMo messages for which it doesn't have cached events (e.g., after a server restart, or when we only send the last n messages), as pointed out in our other discussion here: #63
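
Roughly, the failing scenario looks like this (a sketch; the earlier turns were produced by a previous process, so NeMo has no cached events for them):

from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("path/to/config")
rails = LLMRails(config)

history = [
    # Turns generated before a server restart (or trimmed to the last n messages),
    # so the internal events NeMo cached for them are gone:
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Hello! How can I help you?"},
    # The new turn we actually want answered:
    {"role": "user", "content": "What is the meaning of life?"},
]

result = rails.generate(messages=history)
print(result["content"])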

Also in that discussion, you told us about the state feature you are working on. Are there any updates on that?

Thank you again for all your work, let us know if there is anything you need from us or if we can help in any way.

Bar.

sinderwing commented on August 18, 2024

I get the warning "Parameter temperature does not exist for OpenAIChat" twice when trying to use chat models like gpt-4.

openai==0.28.1
nemoguardrails==0.5.0

config.yaml

models:
- type: main
  engine: openai
  model: gpt-4

from nemoguardrails import LLMRails, RailsConfig

user = "Alice"  # placeholder; in my app the user name comes from the client

# Load a guardrails configuration from the specified path.
config = RailsConfig.from_path("config")
rails = LLMRails(config)

def ask(prompt):
    completion = rails.generate(
        messages=[
            {"role": "context", "content": {"name": user}},
            {"role": "user", "content": prompt},
        ]
    )
    response = completion['content'].replace("__User__", user)

    return f"User:\n{prompt}\n----------\nSystem:\n{response}"

subject = "the British prime minister"
prompt = f"Give me one common critique regarding {subject}"

print(ask(prompt))

Returns

Parameter temperature does not exist for OpenAIChat
Parameter temperature does not exist for OpenAIChat
User:
Give me one common critique regarding the British prime minister
----------
System:
One common critique ...

I'm trying to use guardrails to prevent it from discussing politics, since that's a common default objective.

Solved by bypassing the model entry in the NeMo YAML and passing the LLM directly:

from langchain.chat_models import ChatOpenAI
rails = LLMRails(config, llm=ChatOpenAI(model='gpt-4'))
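
For completeness, a rough sketch of the topical rail I'm aiming for, combined with the workaround above (the flow and message names are just illustrative):

from langchain.chat_models import ChatOpenAI
from nemoguardrails import LLMRails, RailsConfig

COLANG_CONFIG = """
define user ask about politics
  "What do you think about the prime minister?"
  "Who should I vote for?"

define bot refuse politics
  "I'd rather not discuss politics."

define flow
  user ask about politics
  bot refuse politics
"""

YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: gpt-4
"""

config = RailsConfig.from_content(COLANG_CONFIG, YAML_CONFIG)
rails = LLMRails(config, llm=ChatOpenAI(model="gpt-4"))

print(rails.generate(messages=[
    {"role": "user", "content": "Give me one common critique regarding the British prime minister"}
]))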

drazvan commented on August 18, 2024

@Johnnil: can you check again with 0.6.0, which was released yesterday? I've just tested and it seems the issue is no longer there.
