flowiseai / flowise
Drag & drop UI to build your customized LLM flow
Home Page: https://flowiseai.com
License: Apache License 2.0
Describe the bug
Trying to use Vercel, I can see the site for one second, and after that I get an empty page. I've tried Chrome, Firefox, and Safari.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
See the site
Additional context
I've hosted the site with the Vercel hobby plan and the pro plan. The deployment doesn't have any issues.
One weird thing is that I can't see any logs; it just says "No active logs yet. Push changes to see results."
Once you've made a chatflow and exported it to JSON, is there a tool that can use it and embed it on a website? I guess I'm wondering about the use case for building this. Thanks!
I would like to request that Open Graph tags be added to the website to improve the way it is shared on social media. These tags provide information to social media platforms like Facebook, Twitter, and LinkedIn, and can improve how the website appears when it is shared.
I would like to add the following Open Graph tags to our website:
<meta property="og:title" content="Website Title">
<meta property="og:description" content="Website description">
<meta property="og:image" content="https://example.com/image.jpg">
<meta property="og:url" content="https://example.com">
The og:title tag specifies the title of the website, the og:description tag provides a brief description of the website, the og:image tag specifies the image to use when the website is shared, and the og:url tag specifies the URL of the website.
Please let me know if you have any questions or concerns about adding these Open Graph tags. I believe this change would improve the way the website is shared on social media and help increase its online presence.
Describe the bug
When we use the PDF document loader, we get the following error:
UnknownErrorException: The browser/environment lacks native support for critical functionality used by the PDF.js library (e.g. Path2D and/or ReadableStream); please use a legacy-build instead.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The PDF document is loaded
Screenshots
Setup
Can you integrate Azure OpenAI as well?
Is there any guide or example on how to use the included Hugging Face node? I can't select anything other than the gpt2 model in the dropdown, and anything I type into it disappears. I tried to adjust it in the code manually, but after a long load I got an error with no code while fetching.
https://youtu.be/EsI_7L0fzKk?t=42
This shows a Web Browser template, which would be awesome, but it's not showing for me. I just updated my clone to the latest version via git pull, re-ran yarn install and build, and deleted the Docker container and composed it again. I don't have a Web Browser node to add manually either. Am I missing something?
Can you add a login page to the UI so that only I can access the website? Because the website is open, anyone can see the API key.
Describe the bug
When I create a chatflow, the name is saved correctly. When I re-edit the chatflow and change the name, the new name is not saved, and the old name still shows in the Chatflows pane.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
See above
Screenshots
N/A
Setup
Additional context
N/A
Describe the feature you'd like
It seems that the RequestsGet and RequestPosts tools are not supported at the moment.
How can I add support for these two tools?
Describe the bug
In the API endpoint the Input Config checkbox is not appearing.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A checkbox with a label that says "Input Config" should appear.
Setup
Getting this error
Error: Node mrlkAgentLLM not found
at /usr/local/lib/node_modules/flowise/dist/index.js:106:23
at Layer.handle [as handle_request] (/usr/local/lib/node_modules/flowise/node_modules/express/lib/router/layer.js:95:5)
at next (/usr/local/lib/node_modules/flowise/node_modules/express/lib/router/route.js:144:13)
at Route.dispatch (/usr/local/lib/node_modules/flowise/node_modules/express/lib/router/route.js:114:3)
at Layer.handle [as handle_request] (/usr/local/lib/node_modules/flowise/node_modules/express/lib/router/layer.js:95:5)
at /usr/local/lib/node_modules/flowise/node_modules/express/lib/router/index.js:284:15
at param (/usr/local/lib/node_modules/flowise/node_modules/express/lib/router/index.js:365:14)
at param (/usr/local/lib/node_modules/flowise/node_modules/express/lib/router/index.js:376:14)
at Function.process_params (/usr/local/lib/node_modules/flowise/node_modules/express/lib/router/index.js:421:3)
at next (/usr/local/lib/node_modules/flowise/node_modules/express/lib/router/index.js:280:10)
Sorry, how do I import the JSON schema into Python?
Do I use your library?
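For what it's worth, the exported chatflow is plain JSON, so Python's standard library can read it with no extra dependency. A minimal sketch (the file name and the top-level "nodes" key are assumptions based on what the canvas shows, not a documented schema):

```python
import json

def load_chatflow(path):
    """Read an exported chatflow JSON file using only the standard library."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

# Hypothetical usage: list node ids if the export has a top-level "nodes" list
# flow = load_chatflow("MyChatflow.json")
# print([n.get("id") for n in flow.get("nodes", [])])
```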
Describe the feature you'd like
Could you please add support for
Additional context
I got LangChain + OpenAI + Pinecone working for conversational Q&A retrieval against enterprise knowledge base, but would like to use open source and locally run alternative components (llama.cpp for embedding and LLM, Weaviate for vector DB). Thus my enterprise data will be on premise. Thank you.
The task is to add functionality to Flowise that allows the conversation to be hooked to an in-memory agent or a vector DB (with upserts). This means the conversation should be able to interact with an in-memory agent or a database that can save and retrieve data as needed.
Describe the feature you'd like
Looking at the source code, I saw that the tool nodes were developed individually and most of the code is similar. I guess you could import all the tools from LangChain and create a new node automatically by parsing the required parameters and their types from each tool's constructor. Ideally, you would only need to design an icon for each tool.
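As a rough sketch of that idea (in Python for brevity; Flowise itself is TypeScript, where something like ts-morph would play this role, and the {name, type, required} dict shape below is invented, not Flowise's actual node schema), constructor parameters can be read off with the inspect module:

```python
import inspect

def node_params_from_tool(tool_cls):
    """Derive node parameter descriptors from a tool class constructor.

    Hypothetical helper: the output shape is illustrative only.
    """
    params = []
    for name, p in inspect.signature(tool_cls.__init__).parameters.items():
        if name == "self":
            continue
        annotation = None
        if p.annotation is not inspect.Parameter.empty:
            # getattr guards against typing constructs that lack __name__
            annotation = getattr(p.annotation, "__name__", str(p.annotation))
        params.append({
            "name": name,
            "type": annotation,
            "required": p.default is inspect.Parameter.empty,
        })
    return params
```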
Describe the bug
I am trying to create a flow with the Conversational Retrieval QA Chain. I tried multiple document loaders, but each one of them gave me a ReferenceError: Blob is not defined.
To Reproduce
Steps to reproduce the behavior:
You can try to create any flow with any document loader.
Expected behavior
To be able to chat with the LLM using the loaded documents.
Screenshots
Here is the flow I am using.
Setup
Additional context
These are the console logs:
at Csv_DocumentLoaders.init (/home/mohdhd/.npm/_npx/8b95a19c5c9c8708/node_modules/flowise-components/dist/nodes/documentloaders/Csv/Csv.js:43:22)
at buildLangchain (/home/mohdhd/.npm/_npx/8b95a19c5c9c8708/node_modules/flowise/dist/utils/index.js:203:72)
at async /home/mohamed/.npm/_npx/8b95a19c5c9c8708/node_modules/flowise/dist/index.js:235:44
Event: Object deleted
User:
User type: Initiator
Application name: node.exe
Application path: C:\Program Files\nodejs
Component: File Anti-Virus
Result description: Deleted
Type: Software that may cause harm
Name: Hoax.JS.ExtMsg.a
Precision: Exactly
Threat level: Medium
Object type: File
Object name: _postinstall.js
Object path: C:\Users\xxx\AppData\Roaming\npm\node_modules\flowise\node_modules\es5-ext
MD5 of an object: 7DE8D84BD9ECC1D0904048956C94817B
Describe the bug
When the webpage is not opened in full-screen mode in the browser, none of the components on the page can be clicked.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
All components should be clickable.
Setup
Docker build (Dockerfile in the project root) fails with:
=> [ 8/10] RUN yarn install 208.6s
=> [ 9/10] COPY . . 0.3s
=> ERROR [10/10] RUN yarn build 1.0s
------
> [10/10] RUN yarn build:
#15 0.630 yarn run v1.22.19
#15 0.750 $ turbo run build
#15 0.945 thread 'main' panicked at 'Failed to execute turbo.: Os { code: 2, kind: NotFound, message: "No such file or directory" }', crates/turborepo/src/main.rs:50:10
#15 0.945 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
#15 0.980 error Command failed with exit code 101.
#15 0.980 info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
------
executor failed running [/bin/sh -c yarn build]: exit code: 101
Describe the bug
I pulled a new Docker image to test the #6 fix, but I'm now unable to link a Vector Store (Pinecone or Chroma) to a Conversational Retrieval QA Chain.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Linking should work as in the previous version.
Screenshots
Setup
Additional context
Add any other context about the problem here.
Describe the bug
From the exception, it seems like a bug on Pinecone's side, but I did not find a solution online, so maybe you can help. I followed your YouTube tutorial 1:1 (PDF, OpenAI embeddings, Pinecone, Conversational QA Chain).
To Reproduce
Expected behavior
Should work without exception
Setup
Additional context
I am blown away by this simple, concise UI and the two-minute YouTube tutorial... keep it up!
Describe the bug
Getting a Cannot read properties of undefined (reading 'data') error when using the Chroma Vector Store in any combination, e.g. in AutoGPT.
To Reproduce
Steps to reproduce the behavior:
Setup
Are there any additional steps required to set up Chroma with Flowise? It's not quite clear from the onboarding.
Describe the bug
The example CURL code is missing a backslash to escape the newline
To Reproduce
Open the API Endpoint dialog, open the CURL tab, and choose an API key
Screenshots
Example output, missing a backslash after how are you?"}'
curl http://myip:3000/api/v1/prediction/myendpoint \
-X POST \
-d '{"question": "Hey, how are you?"}'
-H "Authorization: Bearer myapikey"
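For comparison, the corrected command (same placeholder host, endpoint, and key as above) would add the trailing backslash on the -d line so the Authorization header is part of the same invocation:

```shell
# Corrected: backslash after the -d line continues the command onto the -H line
curl http://myip:3000/api/v1/prediction/myendpoint \
  -X POST \
  -d '{"question": "Hey, how are you?"}' \
  -H "Authorization: Bearer myapikey"
```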
Setup
Describe the feature you'd like
I would like my new flow to be exposed as an API so I can use my app in different environments.
Additional context
NA
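As context that may help: the cURL issue reported elsewhere on this page already POSTs to a /api/v1/prediction/<endpoint-id> route, so flows can apparently be called over HTTP today. A stdlib-only Python sketch of the same call (the base URL, flow id, and key are placeholders, and the raw-text return is an assumption about the response body):

```python
import json
import urllib.request

def build_prediction_request(base_url, flow_id, question, api_key=None):
    """Build a POST request for a Flowise /api/v1/prediction/<flow_id> endpoint."""
    data = json.dumps({"question": question}).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    if api_key:
        # Bearer auth, mirroring the cURL example shown in another issue here
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        f"{base_url}/api/v1/prediction/{flow_id}",
        data=data,
        headers=headers,
        method="POST",
    )

def query_flow(base_url, flow_id, question, api_key=None):
    """Send the request and return the raw response body as text."""
    req = build_prediction_request(base_url, flow_id, question, api_key)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```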
Describe the bug
Loading a 48 MB PDF file produces the following error:
Failed to save chatflow: <title>Error</title>
PayloadTooLargeError: request entity too large
at readStream (/home/kristian/.nvm/versions/node/v18.12.1/lib/node_modules/flowise/node_modules/raw-body/index.js:156:17)
at getRawBody (/home/Kristian/.nvm/versions/node/v18.12.1/lib/node_modules/flowise/node_modules/raw-body/index.js:109:12)
at read (/home/Kristian/.nvm/versions/node/v18.12.1/lib/node_modules/flowise/node_modules/body-parser/lib/read.js:79:3)
at jsonParser (/home/Kristian/.nvm/versions/node/v18.12.1/lib/node_modules/flowise/node_modules/body-parser/lib/types/json.js:135:5)
at Layer.handle [as handle_request] (/home/Kristian/.nvm/versions/node/v18.12.1/lib/node_modules/flowise/node_modules/express/lib/router/layer.js:95:5)
at trim_prefix (/home/Kristian/.nvm/versions/node/v18.12.1/lib/node_modules/flowise/node_modules/express/lib/router/index.js:328:13)
at /home/Kristian/.nvm/versions/node/v18.12.1/lib/node_modules/flowise/node_modules/express/lib/router/index.js:286:9
at Function.process_params (/home/Kristian/.nvm/versions/node/v18.12.1/lib/node_modules/flowise/node_modules/express/lib/router/index.js:346:12)
at next (/home/Kristian/.nvm/versions/node/v18.12.1/lib/node_modules/flowise/node_modules/express/lib/router/index.js:280:10)
at expressInit (/home/Kristian/.nvm/versions/node/v18.12.1/lib/node_modules/flowise/node_modules/express/lib/middleware/init.js:40:5)
To Reproduce
Steps to reproduce the behavior:
Use the PDF File loader to load the PDF; the error is produced when clicking Save on the flow.
Expected behavior
I expected the document to be loaded for use by the flow.
Additional context
Add any other context about the problem here.
Describe the feature you'd like
In order to have a better overview of the workflow, especially with many elements or on smaller screens, it would be nice to be able to collapse each element, and to have a button to collapse or expand all elements at once.
Describe the bug
When renaming a flow, it's not always saving the new name correctly.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
After clicking the green "Save" icon, the flow's name should be updated. Currently, the update only appears to happen once you click the "Back" arrow in the top left of the page.
Describe the feature you'd like
To be able to use this whole system locally, so we can use local models like Wizard-Vicuna and not have to share our data with OpenAI or other sites or clouds.
Maybe, as an alternative to a full local LLM implementation, it could communicate with Oobabooga via its API. I'm not sure, but I suspect it's similar to talking to ChatGPT.
Will this be implemented at some point?
Thanks!
In the terminal:
Entering new agent_executor chain...
This is not a math question.
Action: None
Action Input: None
This is a question that requires a response.
Action: None
Action Input: None
I should provide a response.
Action: None
Action Input: None
I should provide a response with words.
Action: None
Action Input: None
I should provide a response with words that express how I am feeling.
Action: None
Action Input: None
I should provide a response with words that express how I am feeling in a positive way.
Action: None
Action Input: None
[the three lines above repeat verbatim several more times]
Finished chain.
Describe the bug
When I run the project, I cannot set up my API keys.
To Reproduce
Go to API keys
Expected behavior
I should be able to set a value for the API key.
This is creating an issue when trying to use the chatbot.
Error: nodeInstance.run is not a function
I am getting issues from this code:
const nodeInstanceFilePath = this.nodesPool.componentNodes[nodeToExecuteData.name].filePath as string
const nodeModule = await import(nodeInstanceFilePath)
const nodeInstance = new nodeModule.nodeClass()
const result = await nodeInstance.run(nodeToExecuteData, incomingInput.question, { chatHistory: incomingInput.history })
file path - packages/server/src/index.ts
Description
When trying to run the website QnA example, I get "Request failed with status code 429".
To Reproduce
- open the example
- input text
- get the error
[llm/error] [1:chain:agent_executor > 2:chain:llm_chain > 3:llm:openai] [113.11s] LLM run errored with error: "Request failed with status code 429"
[chain/error] [1:chain:agent_executor > 2:chain:llm_chain] [113.11s] Chain run errored with error: "Request failed with status code 429"
[chain/error] [1:chain:agent_executor] [113.11s] Chain run errored with error: "Request failed with status code 429"
Expected behavior
Receive an answer from the OpenAI API.
Setup
The task is to modify the Flowise text input area to enable the following functionalities:
Downloading the conversation in several formats such as CSV, text, JSON, etc.
Resizing and moving the conversation area anywhere in the app.
Enabling the deletion of individual conversation elements.
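A sketch of the export piece of that request, assuming a message is a simple {role, content} dict (that shape is my assumption, not Flowise's internal chat format):

```python
import csv
import io
import json

def export_conversation(messages, fmt="json"):
    """Serialize a conversation (list of {role, content} dicts) to json, csv, or text.

    The message shape is assumed for illustration; adapt to the real chat payload.
    """
    if fmt == "json":
        return json.dumps(messages, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=["role", "content"])
        writer.writeheader()
        writer.writerows(messages)
        return buf.getvalue()
    if fmt == "text":
        return "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    raise ValueError(f"unsupported format: {fmt}")
```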
Is there a way to add multiple files/URLs?
Very nice UI!
Would be great if the VectorDB QA Chain node included the parameter returnSourceDocuments (i.e. setting returnSourceDocuments to true when calling the chain).
I don't see an option to identify a user, and I will want multiple users to access my flow.
Is it currently possible? If not, can this function be added?
I used the GitHub template and the chat does not understand my question. It seems not to be scraping the repo and uploading to Pinecone. All API keys are correct. Any idea why this is not working?
I am not sure if I have configured the project properly, but for AutoGPT using the marketplace template, I don't see the in-progress messages (i.e. the reasoning, thinking, etc.) in the chat message window. Is the AutoGPT template meant to display only the final outcome?
Since there are gpt-3.5-turbo (the cheapest one) and gpt-4-32k (the most powerful one), why not offer them as LLMs?
Or could you add gpt-3.5-turbo and gpt-4 to the OpenAI LLM definition? Thanks!
When using the AI Plugin node, this is the type of response I receive:
Langchain: "It seems like we need to provide an API key to access the Polygon API. I'll check if we have an API key for Polygon."
Do plugins require an API key to run?
Can someone tell me where to store the API keys?
Describe the feature you'd like
LangChain can be used in the browser. I think this whole project could work without a server.
Describe the bug
The chatbot ignores plugins or has an issue locating the API key for a plugin.
Error: Failed to fetch API spec from https://webreader.webpilotai.com/openapi.yaml with status 403
May 20 01:49:25 AM at AIPluginTool.fromPluginUrl (/opt/render/project/src/node_modules/langchain/dist/tools/aiplugin.cjs:48:19)
May 20 01:49:25 AM at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
May 20 01:49:25 AM at async AIPlugin.init (/opt/render/project/src/packages/components/dist/nodes/tools/AIPlugin/AIPlugin.js:25:26)
May 20 01:49:25 AM at async buildLangchain (/opt/render/project/src/packages/server/dist/utils/index.js:208:50)
May 20 01:49:25 AM at async App.processPrediction (/opt/render/project/src/packages/server/dist/index.js:414:40)
May 20 01:49:25 AM at async /opt/render/project/src/packages/server/dist/index.js:263:13
To Reproduce
Steps to reproduce the behavior:
Expected behavior
I expect the chat agent to use the plugin. However, it either ignores the plugin or attempts to find the API key for the plugin. There's nowhere to enter an API key.
Setup