Comments (11)
Your logs confirm the CORS error as expected. You should pull the latest commit of the main branch and rebuild the Docker container. It introduces breaking changes, so your commands should be replaced with the following:
docker build --build-arg OLLAMA_API_BASE_URL='' -t ollama-webui .
docker run -d -p 3000:8080 --name ollama-webui --restart always ollama-webui
Also, make sure to run the following command to serve Ollama, as mentioned here; this should solve your issue:
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS=* ollama serve
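If Ollama was installed via the Linux install script and runs as a systemd service, those environment variables can be made persistent with `sudo systemctl edit ollama`, adding the following under the service's override (then `systemctl daemon-reload` and restart the service):

```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
```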
Thanks.
from open-webui.
Thanks! Works now!
Hi there,
Browsers often send an OPTIONS request to verify CORS, which might be the case here. Were there any other issues caused by the OPTIONS request?
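For context, the preflight only passes if the server answers the OPTIONS request with an `Access-Control-Allow-Origin` header. A self-contained sketch using Python's standard library (a stand-in illustration, not Ollama itself):

```python
# Demonstrates a CORS preflight: the browser sends OPTIONS before the real
# POST and proceeds only if Access-Control-Allow-Origin comes back.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CorsHandler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # What a CORS-enabled server must return for the preflight to pass.
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Allow-Methods", "POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), CorsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the browser's preflight for a cross-origin POST.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/api/generate",
    method="OPTIONS",
    headers={
        "Origin": "http://localhost:3000",
        "Access-Control-Request-Method": "POST",
    },
)
with urllib.request.urlopen(req) as resp:
    allow_origin = resp.headers.get("Access-Control-Allow-Origin")

server.shutdown()
print(allow_origin)  # "*" means the browser would proceed with the real POST
```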
Thanks.
Well, I get that when asking it to generate something, but it seems to stop there, so nothing else happens.
Could you please provide any specific error messages or logs that you encounter during the process? This information would greatly assist me in diagnosing the problem more accurately and providing you with the appropriate guidance.
Thanks.
How do I get better logs? I tried typing something, pressed enter, and then nothing.
Hi there,
To obtain more detailed logs, you can use the following command for Docker:
docker logs ollama-webui
Additionally, it would be helpful for diagnosing your issue if you could provide a screenshot of your console logs from your browser's developer tools. This will allow us to examine any client-side errors or issues that might not be visible in the server logs.
Please feel free to share the Docker logs and the browser console logs screenshot, and we'll do our best to assist you in resolving the problem.
Thanks.
@Chillance Part of the reason that browsers invented CORS Preflight Requests (the OPTIONS issue) is to prevent people from stumbling into security issues.
Do you already have the access protected with an API token or HTTP Basic Auth?
Check out ollama/ollama#849 (comment) and the CORS section at https://webinstall.dev/caddy.
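The server-side decision behind an `OLLAMA_ORIGINS`-style allow-list is simple: echo the request's `Origin` back in `Access-Control-Allow-Origin` only if it is allowed (or the list is `*`). A minimal sketch of that logic, not Ollama's actual implementation:

```python
def allow_origin(request_origin, allowed_origins):
    """Return the Access-Control-Allow-Origin value to send back,
    or None if the cross-origin request should be refused."""
    if "*" in allowed_origins:
        return "*"
    if request_origin in allowed_origins:
        # Echo the specific origin back; credentialed requests
        # require an exact origin rather than "*".
        return request_origin
    return None

allowed = ["http://192.168.1.11:3000", "http://localhost:3000"]
print(allow_origin("http://192.168.1.11:3000", allowed))  # http://192.168.1.11:3000
print(allow_origin("http://evil.example", allowed))       # None
print(allow_origin("http://evil.example", ["*"]))         # *
```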
Tested, Working Example
See #10
I don't do anything in particular, just start ollama serve. And on the same machine I run this in Docker:
docker run --network=host -p 3000:3000 --name ollama-webui --restart always ollama-webui
I actually got chatbot-ollama (other repo) working fine. But here I can see this in the console log:
e87e0c1f-4d67-4015-959a-0e2b59659483
2.fb1b6367.js:52 submitPrompt
192.168.1.11/:1 Access to fetch at 'http://192.168.1.11:11434/api/generate' from origin 'http://192.168.1.11:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
start.93b882e2.js:1 POST http://192.168.1.11:11434/api/generate net::ERR_FAILED
window.fetch @ start.93b882e2.js:1
R @ 2.fb1b6367.js:52
await in R (async)
re @ 2.fb1b6367.js:58
start.93b882e2.js:1 Uncaught (in promise) TypeError: Failed to fetch
    at window.fetch (start.93b882e2.js:1:1402)
    at R (2.fb1b6367.js:52:108120)
And
docker logs ollama-webui
only returns:
Listening on 0.0.0.0:3000