Comments (5)
There are a handful of critical issues that cause problems with langchain for us. The main one is that Auto-GPT predates many of langchain's implementations, and their architecture choices do not fit our use case.
I'll pass along a few comments from our discussions. I've omitted some parts and edited others to make them more suitable for the public. I've also removed meaningful context around these discussions, but if you'd like to join our Discord or schedule a chat to get that context, I'd be happy to facilitate.
I have spent a lot of time lately using and evaluating LangChain. On the surface it seems pretty cool, and on a surface level it is, if you're okay with the trade-off of canned examples vs. processing time. It's pretty easy to get something basic set up, though it runs slowly, and once you get beyond the basic stuff it gets overly complicated and embodies some of the worst parts of object-oriented programming. There is little consistency between classes of the same type. It's a customizable framework only if the customization you want is an example or use case they already provided. In theory, you could build from the ground up using the custom classes, but if you're going to do that, what do you need the framework for? The same goes for all of its classes. And it's not quick or easy to spin up an agent for temporary use; it's a pain in the ass to do from a programming perspective. It also acts as a wrapper for some of the packages it 'integrates', and in doing so you lose some of the original usability of those packages. Lastly, it has a lot of shit going on under the hood, probably too fucking much, and its documentation is [not excellent].
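To illustrate the "build from the ground up" point: a temporary agent without any framework can be a short loop. This is a minimal sketch, not Auto-GPT's actual code; `run_agent` and `stub_llm` are hypothetical names, and a real model call would replace the stub.

```python
from typing import Callable

def run_agent(llm: Callable[[str], str], task: str, max_steps: int = 3) -> str:
    """Drive an LLM until it emits a final answer or the step budget runs out."""
    history = f"Task: {task}"
    for _ in range(max_steps):
        reply = llm(history)
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        history += "\n" + reply  # feed the intermediate step back into the prompt
    return "no answer within budget"

# Stub standing in for a real model call (hypothetical behavior).
def stub_llm(prompt: str) -> str:
    return "FINAL: 42" if "Task:" in prompt else "thinking..."

print(run_agent(stub_llm, "compute the answer"))  # prints: 42
```

A loop like this is trivially disposable, which is the property the comment above argues the framework makes hard to get.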
It's been decided that our architecture won't be langchain-dependent, but sections of it may optionally use langchain primitives where that makes sense.
Overall, langchain doesn't fit our use case or architecture for Auto-GPT. Adding it as a core dependency is a non-starter for us, but the tool you built here IS useful and would give us a lot of confidence in supporting local LLMs for Auto-GPT.
from nemo-guardrails.
Hi @ntindle! What do you mean by "it doesn't support langchain"? Are you saying that for your use case you are not allowed to have langchain as a dependency?
Yeah, exactly! We're working on a project that can't use langchain as a dependency.
Our use case would be wrapping responses from various local and non-local LLMs to apply some level of safety for users interacting with them.
Unfortunately, it is not possible to remove the langchain dependency. However, the core nemoguardrails functionality only uses a limited set of features from langchain, e.g. PromptTemplate, LLMChain and the LLM implementations. We are currently not aware of any vulnerabilities from langchain that would propagate to nemoguardrails. Is there something in particular about langchain that makes it unusable for your use case?
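For context on how small that surface is: a template-plus-chain pattern like the one named above can be approximated in a few lines of plain Python. This is an editor's rough equivalent for illustration, not langchain's actual `PromptTemplate`/`LLMChain` implementation.

```python
from string import Template

class SimplePrompt:
    """Bare-bones stand-in for a prompt template: substitute named variables."""
    def __init__(self, template: str):
        self.template = Template(template)

    def format(self, **kwargs) -> str:
        return self.template.substitute(**kwargs)

def chain(prompt: SimplePrompt, llm, **inputs) -> str:
    """Stand-in for a chain: format the prompt, then call the model."""
    return llm(prompt.format(**inputs))

tmpl = SimplePrompt("Summarize in one word: $text")
echo_llm = lambda prompt: prompt.upper()  # stub standing in for a real model
print(chain(tmpl, echo_llm, text="hello"))  # prints: SUMMARIZE IN ONE WORD: HELLO
```

This is relevant to the thread: if the dependency is only used for a surface this thin, the question of whether it must be a hard requirement is a reasonable one.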
Let me clarify a bit: Auto-GPT isn't built to use langchain's LLM functionality, but it does use other functionality from langchain, such as the tools.
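The "tools" functionality mentioned here boils down to named, described callables that an agent can dispatch to. A minimal framework-free sketch, with hypothetical names (`Tool`, `register`, `dispatch`) that are not Auto-GPT's or langchain's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Tool:
    name: str
    description: str          # shown to the model so it can pick a tool
    func: Callable[[str], str]

REGISTRY: Dict[str, Tool] = {}

def register(tool: Tool) -> None:
    REGISTRY[tool.name] = tool

def dispatch(name: str, arg: str) -> str:
    """Run the named tool on the given argument."""
    return REGISTRY[name].func(arg)

register(Tool("echo", "Repeat the input back.", lambda s: s))
register(Tool("shout", "Upper-case the input.", str.upper))
print(dispatch("shout", "hello"))  # prints: HELLO
```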