open-webui / open-webui

User-friendly WebUI for LLMs (Formerly Ollama WebUI)

Home Page: https://openwebui.com

License: MIT License

JavaScript 0.29% Dockerfile 0.52% Shell 1.03% CSS 0.92% HTML 0.21% Svelte 62.01% TypeScript 10.38% Python 24.31% Batchfile 0.11% Smarty 0.15% Makefile 0.05%
ollama ollama-webui llm webui self-hosted llm-ui llm-webui llms rag chromadb

open-webui's Introduction

Open WebUI (Formerly Ollama WebUI) 👋


Open WebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs. For more information, be sure to check out our Open WebUI Documentation.

Open WebUI Demo

Features ⭐

  • πŸ–₯️ Intuitive Interface: Our chat interface takes inspiration from ChatGPT, ensuring a user-friendly experience.

  • πŸ“± Responsive Design: Enjoy a seamless experience on both desktop and mobile devices.

  • ⚑ Swift Responsiveness: Enjoy fast and responsive performance.

  • πŸš€ Effortless Setup: Install seamlessly using Docker or Kubernetes (kubectl, kustomize or helm) for a hassle-free experience.

  • 🌈 Theme Customization: Choose from a variety of themes to personalize your Open WebUI experience.

  • πŸ’» Code Syntax Highlighting: Enjoy enhanced code readability with our syntax highlighting feature.

  • βœ’οΈπŸ”’ Full Markdown and LaTeX Support: Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.

  • πŸ“š Local RAG Integration: Dive into the future of chat interactions with the groundbreaking Retrieval Augmented Generation (RAG) support. This feature seamlessly integrates document interactions into your chat experience. You can load documents directly into the chat or add files to your document library, effortlessly accessing them using # command in the prompt. In its alpha phase, occasional issues may arise as we actively refine and enhance this feature to ensure optimal performance and reliability.

  • πŸ” RAG Embedding Support: Change the RAG embedding model directly in document settings, enhancing document processing. This feature supports Ollama and OpenAI models.

  • 🌐 Web Browsing Capability: Seamlessly integrate websites into your chat experience using the # command followed by the URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions.

  • πŸ“œ Prompt Preset Support: Instantly access preset prompts using the / command in the chat input. Load predefined conversation starters effortlessly and expedite your interactions. Effortlessly import prompts through Open WebUI Community integration.

  • πŸ‘πŸ‘Ž RLHF Annotation: Empower your messages by rating them with thumbs up and thumbs down, followed by the option to provide textual feedback, facilitating the creation of datasets for Reinforcement Learning from Human Feedback (RLHF). Utilize your messages to train or fine-tune models, all while ensuring the confidentiality of locally saved data.

  • 🏷️ Conversation Tagging: Effortlessly categorize and locate specific chats for quick reference and streamlined data collection.

  • πŸ“₯πŸ—‘οΈ Download/Delete Models: Easily download or remove models directly from the web UI.

  • πŸ”„ Update All Ollama Models: Easily update locally installed models all at once with a convenient button, streamlining model management.

  • ⬆️ GGUF File Model Creation: Effortlessly create Ollama models by uploading GGUF files directly from the web UI. Streamlined process with options to upload from your machine or download GGUF files from Hugging Face.

  • πŸ€– Multiple Model Support: Seamlessly switch between different chat models for diverse interactions.

  • 🔄 Multi-Modal Support: Seamlessly engage with models that support multimodal interactions, including images (e.g., LLaVA).

  • 🧩 Modelfile Builder: Easily create Ollama modelfiles via the web UI. Create and add characters/agents, customize chat elements, and import modelfiles effortlessly through Open WebUI Community integration.

  • βš™οΈ Many Models Conversations: Effortlessly engage with various models simultaneously, harnessing their unique strengths for optimal responses. Enhance your experience by leveraging a diverse set of models in parallel.

  • πŸ’¬ Collaborative Chat: Harness the collective intelligence of multiple models by seamlessly orchestrating group conversations. Use the @ command to specify the model, enabling dynamic and diverse dialogues within your chat interface. Immerse yourself in the collective intelligence woven into your chat environment.

  • πŸ—¨οΈ Local Chat Sharing: Generate and share chat links seamlessly between users, enhancing collaboration and communication.

  • πŸ”„ Regeneration History Access: Easily revisit and explore your entire regeneration history.

  • πŸ“œ Chat History: Effortlessly access and manage your conversation history.

  • πŸ“¬ Archive Chats: Effortlessly store away completed conversations with LLMs for future reference, maintaining a tidy and clutter-free chat interface while allowing for easy retrieval and reference.

  • πŸ“€πŸ“₯ Import/Export Chat History: Seamlessly move your chat data in and out of the platform.

  • πŸ—£οΈ Voice Input Support: Engage with your model through voice interactions; enjoy the convenience of talking to your model directly. Additionally, explore the option for sending voice input automatically after 3 seconds of silence for a streamlined experience.

  • πŸ”Š Configurable Text-to-Speech Endpoint: Customize your Text-to-Speech experience with configurable OpenAI endpoints.

  • βš™οΈ Fine-Tuned Control with Advanced Parameters: Gain a deeper level of control by adjusting parameters such as temperature and defining your system prompts to tailor the conversation to your specific preferences and needs.

  • πŸŽ¨πŸ€– Image Generation Integration: Seamlessly incorporate image generation capabilities using options such as AUTOMATIC1111 API (local), ComfyUI (local), and DALL-E, enriching your chat experience with dynamic visual content.

  • 🀝 OpenAI API Integration: Effortlessly integrate OpenAI-compatible API for versatile conversations alongside Ollama models. Customize the API Base URL to link with LMStudio, Mistral, OpenRouter, and more.

  • ✨ Multiple OpenAI-Compatible API Support: Seamlessly integrate and customize various OpenAI-compatible APIs, enhancing the versatility of your chat interactions.

  • πŸ”‘ API Key Generation Support: Generate secret keys to leverage Open WebUI with OpenAI libraries, simplifying integration and development.

  • πŸ”— External Ollama Server Connection: Seamlessly link to an external Ollama server hosted on a different address by configuring the environment variable.

  • πŸ”€ Multiple Ollama Instance Load Balancing: Effortlessly distribute chat requests across multiple Ollama instances for enhanced performance and reliability.

  • πŸ‘₯ Multi-User Management: Easily oversee and administer users via our intuitive admin panel, streamlining user management processes.

  • πŸ”— Webhook Integration: Subscribe to new user sign-up events via webhook (compatible with Google Chat and Microsoft Teams), providing real-time notifications and automation capabilities.

  • πŸ›‘οΈ Model Whitelisting: Admins can whitelist models for users with the 'user' role, enhancing security and access control.

  • πŸ“§ Trusted Email Authentication: Authenticate using a trusted email header, adding an additional layer of security and authentication.

  • πŸ” Role-Based Access Control (RBAC): Ensure secure access with restricted permissions; only authorized individuals can access your Ollama, and exclusive model creation/pulling rights are reserved for administrators.

  • 🔒 Backend Reverse Proxy Support: Bolster security through direct communication between the Open WebUI backend and Ollama. This key feature eliminates the need to expose Ollama over the LAN. Requests made to the '/ollama/api' route from the web UI are seamlessly redirected to Ollama from the backend, enhancing overall system security (see the example request after this list).

  • 🌐🌍 Multilingual Support: Experience Open WebUI in your preferred language with our internationalization (i18n) support. Join us in expanding our supported languages! We're actively seeking contributors!

  • 🌟 Continuous Updates: We are committed to improving Open WebUI with regular updates and new features.
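As a rough illustration of the backend reverse proxy feature above, a request to the proxied route should be forwarded to Ollama by the Open WebUI backend. This is only a sketch: the exact path is inferred from the '/ollama/api' description, and the need for an Authorization header (using a key generated via the API key feature) is an assumption.

curl http://localhost:3000/ollama/api/tags -H "Authorization: Bearer YOUR_API_KEY"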

πŸ”— Also Check Out Open WebUI Community!

Don't forget to explore our sibling project, Open WebUI Community, where you can discover, download, and explore customized Modelfiles. Open WebUI Community offers a wide range of exciting possibilities for enhancing your chat interactions with Open WebUI! πŸš€

How to Install πŸš€

Note

Please note that for certain Docker environments, additional configurations might be needed. If you encounter any connection issues, our detailed guide on Open WebUI Documentation is ready to assist you.

Quick Start with Docker 🐳

Warning

When using Docker to install Open WebUI, make sure to include the -v open-webui:/app/backend/data in your Docker command. This step is crucial as it ensures your database is properly mounted and prevents any loss of data.

Tip

If you wish to utilize Open WebUI with Ollama included or CUDA acceleration, we recommend utilizing our official images tagged with either :cuda or :ollama. To enable CUDA, you must install the Nvidia CUDA container toolkit on your Linux/WSL system.
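For example, hedged variants of the quick-start command below using those tags (a sketch only: the --gpus all flag assumes the Nvidia container toolkit is installed, and the ollama volume path for the bundled image is an assumption):

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda

docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama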

If Ollama is on your computer, use this command:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

If Ollama is on a Different Server, use this command:

To connect to Ollama on another server, change the OLLAMA_BASE_URL to the server's URL:

docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

After installation, you can access Open WebUI at http://localhost:3000. Enjoy! πŸ˜„

Open WebUI: Server Connection Error

If you're experiencing connection issues, it's often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container. Use the --network=host flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: http://localhost:8080.

Example Docker Command:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Other Installation Methods

We offer various installation alternatives, including non-Docker methods, Docker Compose, Kustomize, and Helm. Visit our Open WebUI Documentation or join our Discord community for comprehensive guidance.

Troubleshooting

Encountering connection issues? Our Open WebUI Documentation has got you covered. For further assistance and to join our vibrant community, visit the Open WebUI Discord.

Keeping Your Docker Installation Up-to-Date

In case you want to update your local Docker installation to the latest version, you can do it with Watchtower:

docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui

In the last part of the command, replace open-webui with your container name if it is different.
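Alternatively, a manual update sketch (assuming the quick-start command from above; the named volume preserves your data across container recreation):

docker pull ghcr.io/open-webui/open-webui:main
docker stop open-webui && docker rm open-webui
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main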

Moving from Ollama WebUI to Open WebUI

Check our Migration Guide available in our Open WebUI Documentation.

What's Next? 🌟

Discover upcoming features on our roadmap in the Open WebUI Documentation.

Supporters ✨

A big shoutout to our amazing supporters who are helping to make this project possible! 🙏

Platinum Sponsors 🀍

  • We're looking for Sponsors!

Acknowledgments

Special thanks to Prof. Lawrence Kim and Prof. Nick Vincent for their invaluable support and guidance in shaping this project into a research endeavor. Grateful for your mentorship throughout the journey! πŸ™Œ

License πŸ“œ

This project is licensed under the MIT License - see the LICENSE file for details. πŸ“„

Support πŸ’¬

If you have any questions, suggestions, or need assistance, please open an issue or join our Open WebUI Discord community to connect with us! 🀝

Star History

Star History Chart

Created by Timothy J. Baek - Let's make Open WebUI even more amazing together! πŸ’ͺ

open-webui's People

Contributors

7a6ac0, anthonycucci, anuraagdjain, asedmammad, axodouble, bhulston, bjornjorgensen, carlos-err406, changchiyou, cheahjs, coolaj86, dannyl1u, djismgaming, dnviti, duhow, explorigin, fbirlik, fusseldieb, jannikstdl, justinh-rahb, lainedfles, lucasew, marclass, officialsahyaboutorabi, patrice-gaudicheau, pazoff, que-nguyen, thatonecalculator, tjbck, yousecjoe


open-webui's Issues

REQ: Role-Based Access Control (RBAC)

I'd like to deploy this in an enterprise environment. To enable proper security, I would like the ability to assign function access, such as the ability to download a model, upload a model, change settings, etc. to various roles (OllamaAdmin, OllamaUser, etc.).

GitHub Releases for Build version of this chat app?

I'm looking to create an installer for ollama for Webi (https://webinstall.dev), and some sort of UI like this (to make it more immediately accessible to people installing it).

Would you please release the pre-built version via GitHub Releases so that it can be used with just ollama and caddy without all of node?

Automating Releases

I could help you get it so that basically when you git tag v1.1.1 && git push --tags a .github/workflows automatically builds and uploads it to the Releases section using gh

Pulling model in Models section spawns a never-ending stream of Pulling Manifest notifications that animate down the screen

Basically as the title says... I go into Models, insert the image name (llama2:7b) and click the download button. It pops up a short notification saying Pulling Manifest. Then that same notification keeps showing up and animating down the screen, like I guess 1000s of them. They just keep on showing up, looking like a waterfall down the middle of the page. I expected to see an animated download indicator right below the input box.

When I reload the page... no model is downloaded and everything is back to normal. I verified the server URL is correct.

(screenshot)

So I tried again (in between writing this) and it turns out a) my docker volume setup was not allowing ollama to write to it, so I got that working, but b) even with it working it STILL shows this falling-down icon as per the image above. It DOES, however, once it stops, indicate the model is done, and I was able to close the dialog, select the model, and see it work.

feat: option to toggle generateChatTitle

Is your feature request related to a problem? Please describe.
Running ollama on low-end hardware causes issues as the UI is issuing a request to generate chat titles, blocking future chats from responding until the title generation has finished. It would be great if the chat title generation could be toggled off for these low-end systems.

Describe the solution you'd like
A constant that can be set to false for generateChatTitle: GENERATE_CHAT_TITLE=false.
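If such a flag were added, usage might look like the following (purely hypothetical — GENERATE_CHAT_TITLE is the requested setting, not an existing option):

docker run -d -p 3000:8080 -e GENERATE_CHAT_TITLE=false -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main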

REQ: Multi-user support

I'd like to deploy this on a server. To do this effectively, I would need chat history and preferences to be associated with the logged-in user.

Parameter Generator

Some form of setting parameter generator, along the same idea as they have in automatic1111 for Stable Diffusion.
eg. With this prompt, generate all variations of answers from temperature 5 to 8 with .2 increments
You may also allow for the same idea using a selection of the installed models.
In the crudest form, you could just run the results down the page. You could also have a dedicated popup window or a result area in the chat with horizontal scrolling that could display the results nicely.
I think this would be a stand-out feature. I haven't used many other chat clients locally, so maybe it's been done already.

Add option to remember and switch between chats, and an option to export/import chats

Great project 😎

I don't know if this feature (two features?) is already built-in but if it is it's not obvious ...

The problem
It's a problem to continue previous chats, as well as to switch topics. Also if you want to save a chat for future reference.

The solution
Some UI that allows you to go back to previous and switch between chats (like in the ChatGPT UI), and an Export button for saving conversations to md files.

Alternatives
A workaround would be to manually copy paste the whole chat and save them to a file and then refeed them to the LLM to continue.

Use Tailwindcss-typography to assist with markdown styling

Issue: Markdown Elements Lack Styling in Application

Summary:

The application successfully renders Markdown into HTML elements, but lacks any styling for them. For example, # headings are converted to <h1> elements but appear unstyled.

Suggested Solution:

To resolve this, consider integrating the @tailwindcss/typography plugin, which is fully compatible with the already-implemented Tailwind CSS framework. This plugin will provide out-of-the-box typography styling that can be easily customized.

Additional Information:

I've had a positive experience using this plugin for similar functionality.

Current Problem

Math not rendering correctly

Describe the bug
Mathjax/Katex/Latex/Math is not rendering correctly in ollama-webui, specifically matrices seem to be rendering wrong.

To Reproduce

Ask Ollama to calculate any math equation; for this example I asked it to:

calculate the inner product of these two vectors:
\[|\psi_1\rangle = \begin{pmatrix} 3 \\ 4i \end{pmatrix}, \qquad |\psi_2\rangle = \begin{pmatrix} 2i \\ 2 \end{pmatrix}\]

Or ask it to repeat this exact answer:

To calculate the inner product of the two vectors, we use the formula for the inner product of two complex vectors:

\[\langle\psi_1|\psi_2\rangle = (3, -4i) \cdot \begin{pmatrix} 2i \\ 2 \end{pmatrix}\]

Here, the complex conjugate of the first vector is (3, 4i). The second vector is (2i, 2). Using the dot product formula for complex vectors, we get:

\[\langle\psi_1|\psi_2\rangle = 3 * 2i + (-4i) * 2 = 6i - 8i = -2i\]

So, the inner product of the two given vectors is \(-2i\).

Expected behavior
The rendering should show the matrices/vectors in this case and display the katex notation correctly.

I am not sure what the cause of this is, because the HTML output of both examples looks the same to me, so some outer div might be the cause.

Screenshots

Ollama example: (screenshot)

In ChatGPT the same output looks like this: (screenshot)

$amount bug on mobile

On Android mobile (any browser), a dollar sign in cost-related topics is being escaped so that the result doesn't make sense.

Please see screenshot attached.
(screenshot)

feat: custom url with chatid for chats

Describe the solution you'd like
Like ChatGPT, basically: I want to be able to copy the URL for a chat and return to it.

Describe alternatives you've considered
no alts


Make copying text easier while text is still being generated

Describe the bug
If you try to copy text while text is still being generated, the selection gets reset/lost (because the view gets re-rendered?).

To Reproduce
Steps to reproduce the behavior:

  1. send a message
  2. output starts to show token by token
  3. select text in the output to copy
  4. selected text gets deselected when the next token gets output

Expected behavior
text selection stays stable/selected


Desktop (please complete the following information):

  • OS: macOS
  • Browser: Safari 17.1, Chrome 119

No model listed when accessing web page from network.

Describe the bug
When connecting to the web page locally, the webpage opens and Select a model shows the list of all models to choose from. When I access the web page from the network, the webpage opens but no models are listed.

To Reproduce
Steps to reproduce the behavior:

  1. ollama-webui % npm run dev

  2. ➜ Local: http://localhost:5173/
    ➜ Network: http://192.168.254.55:5173/
    ➜ Network: http://192.168.254.54:5173/
    ➜ press h to show help

  3. When I open the local webpage, it opens and models can be selected.

  4. When I open the Network webpage, with either ip address, the web page opens but no model shows.

  5. See error
    (screenshot)

Expected behavior
I should see the full list of models


Desktop (please complete the following information):

  • OS: Mac OS Sonoma 14.1
  • Browser Brave
  • Version 1.60


Any guidance would be appreciated.

Issue with light theme

Describe the bug
The bars for each parameter in the advanced settings section are absent and do not appear as they should, as shown in the dark theme.

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'Settings'
  2. Click on 'Advanced'
  3. Check the parameter bars.

Expected behavior
Bars as shown in the dark theme:

(screenshot)

Screenshots
On Firefox:

(screenshot)

On Chromium:

(screenshot)

Desktop (please complete the following information):

  • OS: Linux (NixOS)
  • Browser: Firefox, Chromium
  • Version: 119.0 and 118.0.5993.117 respectively

feat: voice input

I would like to be able to use my voice as an input.
I don't really need the text to speech from the ai.
Just being able to talk to it.

use case is for language learning.

Interface with a local Whisper model.
Add a microphone button next to the input box.
When clicked, you would hear a sound to start recording.
It would then live transcribe your text in the chatbox.
After 2 seconds of silence it would send the prompt to ollama.

This project is able to interface with a local Whisper model to do voice-to-text in a web app.
https://github.com/mayeaux/generate-subtitles

feat: improve stylization of code blocks

Is your feature request related to a problem? Please describe.
Currently the Copy Code button below the code block can be unintuitive to find.

Describe the solution you'd like
ChatGPT Style code blocks, with the top left declaring the language and the top right having a copy code button.

Before Click: (screenshot)

After Click: (screenshot)

Additional context

This feature would improve readability and usability of the interaction in a ChatGPT-style UI (which most users are likely already used to).

Here is a bare CodePen mockup of how this could work (with hardcoded CSS styling because I was lazy and unsure what your current exact styling practices are):

(screenshot)

Code
// Copy the given text to the clipboard and briefly flash "Copied!" on the button
function copyCode(text, button) {
  navigator.clipboard.writeText(text).then(() => {
    const originalText = button.textContent;
    button.textContent = 'Copied!';
    setTimeout(() => {
      button.textContent = originalText;
    }, 1000);
  }).catch((error) => {
    alert(`Copy failed: ${error}`);
  });
}


 // Wrap each <pre> block in a styled container with a language label and a Copy Code button
 function transformCodeBlocks() {
  let blocks = document.querySelectorAll('pre');
  blocks.forEach((block) => {
    let code = block.querySelector('code');
    let text = code.innerText;

    let parentDiv = document.createElement('div');
    parentDiv.style.backgroundColor = '#343541';
    parentDiv.style.overflowX = 'auto';
    parentDiv.style.display = 'flex';
    parentDiv.style.flexDirection = 'column';
    parentDiv.style.borderRadius = '8px'

    let codeDiv = document.createElement('div');
    codeDiv.style.display = 'flex';
    codeDiv.style.justifyContent = 'space-between';
    codeDiv.style.alignItems = 'center';

    let langDiv = document.createElement('div');
    langDiv.textContent = code.className;
    langDiv.style.color = 'white';
    langDiv.style.margin = '8px';

    let button = document.createElement('button');
    button.textContent = 'Copy Code';
    button.style.background = 'none';
    button.style.border = 'none';
    button.style.margin = '8px';
    button.style.cursor = 'pointer';
    button.style.color = '#ddd';
    button.addEventListener('click', () => copyCode(text, button));

    codeDiv.appendChild(langDiv);
    codeDiv.appendChild(button);

    let pre = document.createElement('pre');
    pre.textContent = text;
    pre.style.margin = '0px';
    pre.style.padding = '8px'
    pre.style.backgroundColor = 'black';
    pre.style.color = 'white';

    parentDiv.appendChild(codeDiv);
    parentDiv.appendChild(pre);

    block.parentNode.replaceChild(parentDiv, block);
  });
}

transformCodeBlocks();

Where is `index.html` after `npm run build`?

I'm not a Svelte dev.

I ran npm run build and I saw that ./build/ was generated.

I was expecting this to be the directory to serve from my webserver.

However, I can't find index.html there.... or anywhere:

fd -uuu | rg '.html$'
src/app.html
node_modules/tslib/tslib.html
node_modules/tslib/tslib.es6.html
node_modules/@sveltejs/kit/src/core/config/default-error.html

How do I create the directory that I serve with my webserver?
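Judging from the Docker logs shown elsewhere on this page (file:///app/build/index.js, "Listening on 0.0.0.0:3000"), the build output appears to be a SvelteKit adapter-node server rather than a static site, so there may be no index.html to copy into a webserver root. A sketch of serving it (the port is an assumption):

npm run build
# The build directory is a self-contained Node server, not static files
PORT=3000 node build/index.js
# then reverse-proxy your webserver to that port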

feat: changing user profile image

Is your feature request related to a problem? Please describe.
Currently, it lacks the functionality for users to customize their avatars or profile images. This limitation restricts personalization and user engagement, as avatars can be a key element of user identity and expression.

Describe the solution you'd like
I propose adding a feature that allows users to change their avatars or profile images within the interface. Users should be able to upload, select, or edit their avatars, giving them more control over their visual representation.

Describe alternatives you've considered
An alternative to this feature would be to rely solely on external platforms for profile image management, but this could lead to inconsistencies in user experience and may raise privacy concerns.

Additional context
User avatars or profile images can play a significant role in enhancing the user experience and creating a sense of identity within the chatbot community. Allowing users to change their avatars directly within the WebUI not only provides a personalized touch but also makes it more engaging and user-centric.

feat: role-based access control w/ multi-user support

Description:
I'd like to request a user system, with anonymous access as the default. I like to have UIs like this web-facing so I can use them on the go, or maybe even share them with friends; however, with no barrier to entry, anyone who discovers the URL can spam requests. So I wish for a user system with a disableable anonymous default, so as not to intrude on users who do not wish to partake in this feature.

Alternatives:
A simple password would do fine for the initial problem, but would not be preferable.

feat: server side API calls

It would be great if the API calls from the web ui could be made server side. Right now if I have ollama and ollama-webui in a Docker stack, the web ui communicates with the ollama api externally from outside the stack. Ideally it would instead communicate to the API inside the stack.

I see this as a possible solution: Provide an optional docker environment value to make the communication server-side and if configured, this removes the option to configure the API url in web configuration page.
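One way to keep the traffic inside the stack today is to put both containers on the same Docker network and point the WebUI at the Ollama service name rather than a host address. A sketch, using the image and environment variable from the README above (the network and container names are illustrative):

docker network create ollama-net
docker run -d --network ollama-net --name ollama -v ollama:/root/.ollama ollama/ollama
docker run -d --network ollama-net -p 3000:8080 -e OLLAMA_BASE_URL=http://ollama:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main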

This is a great looking project so far! Thanks!

UI shows result much slower than it is generated

Describe the bug
The UI looks like it is loading tokens in from the server one at a time, but it displays them much more slowly than the model generates them. Sometimes it speeds up a bit and loads in entire paragraphs at a time, but mostly it runs painfully slowly, even after the server has finished responding.

In the console logs I see it took 19.5 seconds to generate the response:

ollama        | llama_print_timings:        load time =    1102.04 ms
ollama        | llama_print_timings:      sample time =     284.30 ms /  1027 runs   (    0.28 ms per token,  3612.33 tokens per second)
ollama        | llama_print_timings: prompt eval time =     273.78 ms /   146 tokens (    1.88 ms per token,   533.28 tokens per second)
ollama        | llama_print_timings:        eval time =   18724.47 ms /  1026 runs   (   18.25 ms per token,    54.79 tokens per second)
ollama        | llama_print_timings:       total time =   19506.92 ms

And in the network console in the browser, I see that the chunked response streamed in over the course of 21 seconds.
However, the UI took several minutes to display the full response. During that time there was no further network traffic until the automatic prompt for the chat title.

To Reproduce
Steps to reproduce the behavior:

  1. Run a prompt

Expected behavior
When the server has finished streaming the response to the client, the full response should be displayed.

Screenshots
(screenshot)

Desktop (please complete the following information):

  • OS: Linux Mint 21.1
  • Browser: Vivaldi (Chromium based)
  • Version: Not sure, but here's my docker compose file:
version: '3.3'
services:
   ollama-webui:
       ports:
           - '3000:8080'
       container_name: ollama-webui
       image: ollamawebui/ollama-webui
   ollama:
       volumes:
           - './ollama:/root/.ollama'
       ports:
           - '11434:11434'
       environment:
           - 'OLLAMA_ORIGINS=*'
       container_name: ollama
       image: ollama/ollama
       deploy:
         resources:
           reservations:
             devices:
               - driver: nvidia
                 count: 1
                 capabilities: [gpu]

REQ: OAuth2 and Azure Active Directory support

I'd like to deploy this on a server in an enterprise. To enable proper security, I would like to perform approval and authentication using a directory service like OAuth2 or Azure Active Directory (OAuth2 is an established standard; Microsoft Azure AD preferred).

403 OPTIONS "/api/generate"

Hi!

Just tested this and noticed it does an OPTIONS "/api/generate" request. From what I can tell this doesn't exist in the latest ollama code...

Thoughts?

The /api/tags route doesn't load

Describe the bug

The model list remains empty. When I look into the network tab, I see the /api/tags route fails

http://myhost:11434/api/tags
Failed to load resource: net::ERR_CONNECTION_REFUSED

However, the error doesn't always appear:

  • When loading the page from localhost on Ubuntu with Chrome, it works
  • When loading by IP address on macbook, it fails in all 3 browsers (Chrome, FF, Edge). It even fails with VS Code port forwarding, loading as http://0.0.0.0:3000/ on Mac.

To Reproduce
Steps to reproduce the behavior:

  1. Load the root page
  2. The model list is empty

Expected behavior
Model list should load in all browsers, whether loaded on 0.0.0.0 or by IP

I tried

docker build --build-arg PUBLIC_API_BASE_URL='http://192.168.0.185:11434/api' --build-arg OLLAMA_API_BASE_URL='http://192.168.0.185:11434/api' -t ollama-webui .

and

docker build --build-arg OLLAMA_API_BASE_URL='http://192.168.0.185:11434/api' -t ollama-webui .

Import documents

Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like
A button that allows user to import documents into the conversation, similar to how ChatGPT, Perplexity or Claude do.

Describe alternatives you've considered
Turning the documents into text and pasting them, but that's painful.

Additional context
Nothing comes to mind.

docker network access error

Hi,

when running the docker build command via the run.sh script, I get the following error messages:

(screenshots)

When accessing the web-gui I get an "500 Internal Error".

Any advice how to fix this?

Best regards,
Alexander

Differents images same container

Hi,

I am trying to run ollama and (this awesome) ollama-webui in the same docker compose stack, but I am failing.
The aim is to run ollama as an API using a dedicated hostname routed by the traefik reverse proxy, and a UI on another dedicated hostname also routed by traefik.

Here is my env file. *.docker is now resolved locally on 127.0.0.1 since I would like to make it work before using a real public hostname and tld.

.env
APP_PROJECT=ollama
APP_DOMAIN=ollama.docker

OLLAMA_HOST=0.0.0.0
OLLAMA_ORIGINS=*
OLLAMA_ENDPOINT="http://api.${APP_DOMAIN}"

And here is my docker-compose file. The project folder contains an ollama folder (where SSH keys and models are stored) and an ollama-webui folder, which is a clone of this repository.

docker-compose.yml
version: '3'
services:

  ollama:
    container_name: ${APP_PROJECT}-api
    hostname: ${APP_PROJECT}-api
    image: ollama/ollama
    env_file:
      - .env
    volumes:
      - ./ollama:/root/.ollama
    command: serve
    entrypoint: ['ollama']
    labels:
      - "traefik.http.routers.${APP_PROJECT}.rule=Host(`api.${APP_DOMAIN}`)"
      - "traefik.http.services.${APP_PROJECT}-service.loadbalancer.server.port=11434"

  ollama-webui:
    container_name: ${APP_PROJECT}-webui
    image: ${APP_PROJECT}-webui
    build:
      context: ./ollama-webui/
      dockerfile: Dockerfile
    env_file:
      - .env
    labels:
      - "traefik.http.routers.${APP_PROJECT}-webui.rule=Host(`${APP_DOMAIN}`)"
      - "traefik.http.services.${APP_PROJECT}-webui-service.loadbalancer.server.port=3000"

networks:
  default:
      name: traefik-network
      external: true

Everything seems to run as I get a response from ollama:

$ curl http://api.ollama.docker/
Ollama is running
$ curl http://api.ollama.docker/api/tags
{"models":[{"name":"llama2:latest","modified_at":"2023-10-22T09:27:20.059632774Z","size":3791737648,"digest":"7da22eda89ac1040639e351c0407c590221d8bc4f5ccdf580b85408d024904a3"},{"name":"mistral:latest","modified_at":"2023-10-22T09:37:00.598077313Z","size":4108916688,"digest":"8aa307f73b2622af521e8f22d46e4b777123c4df91898dcb2e4079dc8fdf579e"}]}

And ollama-webui displays its user interface, but it cannot reach ollama using the OLLAMA_ENDPOINT, as it resolves api.ollama.docker to 127.0.0.1:

ollama-webui  | http://api.ollama.docker
ollama-webui  | TypeError: fetch failed
ollama-webui  |     at fetch (file:///app/build/shims.js:20346:13)
ollama-webui  |     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
ollama-webui  |     at async load (file:///app/build/server/chunks/2-91f95e04.js:5:18)
ollama-webui  |     at async load_server_data (file:///app/build/server/index.js:1930:18)
ollama-webui  |     at async file:///app/build/server/index.js:3301:18 {
ollama-webui  |   cause: Error: connect ECONNREFUSED 127.0.0.1:80
ollama-webui  |       at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1595:16) {
ollama-webui  |     errno: -111,
ollama-webui  |     code: 'ECONNREFUSED',
ollama-webui  |     syscall: 'connect',
ollama-webui  |     address: '127.0.0.1',
ollama-webui  |     port: 80
ollama-webui  |   }
ollama-webui  | }

I tried adding the host-gateway directive

  ollama-webui:
    ...
    extra_hosts:
        - "host.docker.internal:host-gateway"

but it did not work.

I tried with different endpoints, calling the service name, container_name, or container hostname directly, but nothing works.

ollama-webui  | ollama-api
ollama-webui  | TypeError: Failed to parse URL from ollama-api/api/tags
ollama-webui  |     at fetch (file:///app/build/shims.js:20346:13)
ollama-webui  |     at async load (file:///app/build/server/chunks/2-91f95e04.js:5:18)
ollama-webui  |     at async load_server_data (file:///app/build/server/index.js:1930:18)
ollama-webui  |     at async file:///app/build/server/index.js:3301:18 {
ollama-webui  |   [cause]: TypeError: Invalid URL
ollama-webui  |       at new URL (node:internal/url:783:36)
ollama-webui  |       at new Request (file:///app/build/shims.js:13465:22)
ollama-webui  |       at fetch (file:///app/build/shims.js:14461:22)
ollama-webui  |       at fetch (file:///app/build/shims.js:20344:20)
ollama-webui  |       at load (file:///app/build/server/chunks/2-91f95e04.js:5:24)
ollama-webui  |       at load_server_data (file:///app/build/server/index.js:1930:42)
ollama-webui  |       at file:///app/build/server/index.js:3301:24 {
ollama-webui  |     code: 'ERR_INVALID_URL',
ollama-webui  |     input: 'ollama-api/api/tags'
ollama-webui  |   }
ollama-webui  | }

ollama-webui  | http://ollama-api
ollama-webui  | TypeError: fetch failed
ollama-webui  |     at fetch (file:///app/build/shims.js:20346:13)
ollama-webui  |     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
ollama-webui  |     at async load (file:///app/build/server/chunks/2-91f95e04.js:5:18)
ollama-webui  |     at async load_server_data (file:///app/build/server/index.js:1930:18)
ollama-webui  |     at async file:///app/build/server/index.js:3301:18 {
ollama-webui  |   cause: Error: connect ECONNREFUSED 172.18.0.6:80
ollama-webui  |       at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1595:16) {
ollama-webui  |     errno: -111,
ollama-webui  |     code: 'ECONNREFUSED',
ollama-webui  |     syscall: 'connect',
ollama-webui  |     address: '172.18.0.6',
ollama-webui  |     port: 80
ollama-webui  |   }
ollama-webui  | }

Can you help? Do you have any clue for me, please?

For debugging purposes, I run everything locally on my MacBook Pro M1 Max using Sonoma 14.0 and Docker 24.0.6. Hopefully it will then run live on Debian 11.8 and Docker 24.0.4.
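One possible direction, judging from the ECONNREFUSED errors on port 80 above: the endpoint URL has no port, so the WebUI falls back to port 80. Pointing it at the Ollama container with its port (inside the shared traefik network) might look like this in the .env file — a guess, not a verified fix:

OLLAMA_ENDPOINT="http://${APP_PROJECT}-api:11434"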

Option to stop auto-scroll

Is your feature request related to a problem? Please describe.
When I'm waiting for the full answer to appear I often like to scroll backwards and read the start of the answer. Currently I cannot scroll back because as soon as I do, the UI scrolls me to the bottom again!

Describe the solution you'd like
Allow me to scroll backwards. Only engage "auto keep to the bottom" mode if I have actually scrolled to the bottom of the page. Alternatively add an option to turn off auto scroll to the bottom.

See ChatGPT for the behaviour I am describing, which ChatGPT has already implemented.

docker compose webui connection issue

Describe the bug
The WebUI has connection problems and does not show models when the ollama server runs in Docker.
This is true whether the WebUI runs in Docker or via the CLI.

To Reproduce
Steps to reproduce the behavior:
docker-compose.yaml

version: '3.6'

services:
  ollama-api:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    image: ollama/ollama:latest
    pull_policy: always
    container_name: ollama
    tty: true # enable colorized logs
    restart: unless-stopped
    environment:
      - OLLAMA_ORIGINS="*"
    ports:
      - 11434:11434
    volumes:
      - /var/lib/ollama:/root
      - /srv/models/ollama:/root/.ollama/models


  ollama-webui:
    restart: unless-stopped
    build:
      context: .
      args:
        OLLAMA_API_BASE_URL: 'http://ollama-api:11434/api'
      dockerfile: Dockerfile
    image: ollama-webui:latest
    container_name: ollama-webui
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ports:
      - 3000:8080

Expected behavior
webui connects to ollama-api via internal docker routing.

Server:

  • OS: ubu22.04
  • Browser: ungoogled-chromium

Additional context
Via setup & build I have permuted many potential URLs.
All of them (localhost, 0.0.0.0, VPN IP) fail the connection test, except the LAN IP,
but it still shows no models.

ollama list works normally.
curl from another host via VPN also works.

in the browser console log:
IP: 10.11.1.x is VPN

Access to fetch at 'http://10.11.1.17:11434/api/tags' from origin 'http://gulag:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
start.48f4feda.js:1     GET http://10.11.1.17:11434/api/tags net::ERR_FAILED
window.fetch @ start.48f4feda.js:1
$ @ 2.23ce0af1.js:316
h @ 2.23ce0af1.js:312
M @ 2.23ce0af1.js:316
2.23ce0af1.js:316 TypeError: Failed to fetch
    at window.fetch (start.48f4feda.js:1:1402)
    at $ (2.23ce0af1.js:316:30653)
    at h (2.23ce0af1.js:312:40212)
    at HTMLButtonElement.M (2.23ce0af1.js:316:343)
2.23ce0af1.js:316 null
192.168.178.17:11434/api/tags:1     Failed to load resource: net::ERR_CONNECTION_TIMED_OUT
2.23ce0af1.js:316 TypeError: Failed to fetch
    at window.fetch (start.48f4feda.js:1:1402)
    at $ (2.23ce0af1.js:316:30653)
    at 2.23ce0af1.js:316:26530
    at v (scheduler.c37d1d9b.js:1:101)
    at Array.map (<anonymous>)
    at index.43b4ac03.js:4:2077
    at z (scheduler.c37d1d9b.js:1:1869)
    at _e (index.43b4ac03.js:4:3168)
    at new oe (app.4662c1c7.js:1:5070)
    at Pe (start.48f4feda.js:1:8373)
2.23ce0af1.js:316 null

IP: 192.168.178.x is LAN

start.48f4feda.js:1     GET http://192.168.178.17:11434/tags net::ERR_CONNECTION_TIMED_OUT
window.fetch @ start.48f4feda.js:1
$ @ 2.23ce0af1.js:316
h @ 2.23ce0af1.js:312
M @ 2.23ce0af1.js:316
2.23ce0af1.js:316 TypeError: Failed to fetch
    at window.fetch (start.48f4feda.js:1:1402)
    at $ (2.23ce0af1.js:316:30653)
    at h (2.23ce0af1.js:312:40212)
    at HTMLButtonElement.M (2.23ce0af1.js:316:343)
2.23ce0af1.js:316 null
2.23ce0af1.js:316 []
2.23ce0af1.js:316 2e01853d-2963-421b-8629-e4cfef86baca

Doesn't work behind cloudflare + reverse proxy

Describe the bug
When accessed through LAN, webui works properly. When accessing it through cloudflare + reverse proxy, like I access all the applications running on my virtualization NAS, webui seems to be unable to find ollama by its address. (If I had to guess, something in the webui doesn't use the Ollama Server URL variable and instead tries to resolve the ollama's address by itself, which fails because it gets my domain instead of localhost.)

To Reproduce

  • Run with docker-compose up from the cloned git repo.
  • In your reverse proxy, create a record for accessing the webui from outside internet
  • open webui both over LAN and using the domain name

Expected behavior
Both instances of webui work the same

Actual behavior
webui over LAN can find the ollama's endpoint, and you can download models / make prompts
webui over domain name can't find the ollama endpoint; the default value of Ollama Server URL is http://:11434/api, which cannot be reached. Values using localhost or 127.0.0.1 don't work either, and you can't download models / select downloaded ones

Screenshots
(screenshots)

Desktop (please complete the following information):

  • Ubuntu server 22.04
  • Chrome
  • Docker & docker-compose
  • Cloudflare
  • nginx reverse proxy

Doesn't load unless the page is reloaded multiple times

I'm having a weird issue.... The page loads perfectly, but when I type an input it freezes on the response. I have to reload the page two or three times. Once it gives me a response it works flawlessly... If I could DM you, I can send you the domain it's hosted at and you can try it for yourself to see what you think...

Add Support for Text File Upload

Is your feature request related to a problem? Please describe.
Currently, the project lacks the capability to handle text file uploads, such as markdown and PDF files, for the purpose of summarization and Q&A. This limitation restricts users from efficiently extracting information from documents.

Describe the solution you'd like
I propose the addition of a feature that allows users to upload text files in formats like markdown, PDF. ollama-webui should be able to process these files to provide summaries and answer user queries based on the content within the uploaded documents. This enhancement would significantly improve the versatility and usefulness of the ollama-webui.

Describe alternatives you've considered
An alternative to supporting file uploads could be manual copying and pasting of text content into the chat interface, but this approach is time-consuming and error-prone. Another alternative is integrating with external document management systems, but this might be less user-friendly and may introduce privacy concerns.

Additional context
In many use cases, users need to extract specific information or summaries from existing documents in their possession. Adding support for text file uploads, especially in common formats like markdown and PDF, would greatly enhance the capabilities and make it a valuable tool for content processing and information retrieval. This feature aligns with the growing demand for efficient document processing and information extraction tools in various domains.

Ollama server inaccessible when requested from different network

When hosting ollama on host1, and ollama-webui on host1, and attempting to access via host2 on a separate network, the webui will refuse to connect to ollama, while working on host1.

To Reproduce

  • Install & Run Ollama
  • Install & Run Ollama WebUI
  • Port Forward
  • Attempt to access from any client other than the locally on server
    It would be expected to function appropriately; however, for whatever reason it acts as if it cannot connect to the Ollama server, despite working just fine locally. Even if you port-forward Ollama and point it to the public Ollama endpoint (verified), it will still not work whatsoever.

(screenshots)

Server:

  • OS: Arch Linux
  • Version: v1.0.0-alpha

Client:

  • OS: Windows 11
  • Browser: Chrome Developer Channel

Cannot build

Describe the bug
I just pulled the repository and built the image, but it fails.

To Reproduce
Steps to reproduce the behavior:

  1. git pull
  2. docker compose build
 => [ollama-webui internal] load build definition from Dockerfile                                                                                                                                                                                         0.0s
 => => transferring dockerfile: 357B                                                                                                                                                                                                                      0.0s
 => [ollama-webui internal] load .dockerignore                                                                                                                                                                                                            0.0s
 => => transferring context: 2B                                                                                                                                                                                                                           0.0s
 => [ollama-webui] resolve image config for docker.io/docker/dockerfile:1                                                                                                                                                                                 1.6s
 => CACHED [ollama-webui] docker-image://docker.io/docker/dockerfile:1@sha256:ac85f380a63b13dfcefa89046420e1781752bab202122f8f50032edf31be0021                                                                                                            0.0s
 => [ollama-webui internal] load metadata for docker.io/library/node:alpine                                                                                                                                                                               0.5s
 => [ollama-webui 1/8] FROM docker.io/library/node:alpine@sha256:df76a9449df49785f89d517764012e3396b063ba3e746e8d88f36e9f332b1864                                                                                                                         0.0s
 => [ollama-webui internal] load build context                                                                                                                                                                                                            0.0s
 => => transferring context: 4.86kB                                                                                                                                                                                                                       0.0s
 => CACHED [ollama-webui 2/8] WORKDIR /app                                                                                                                                                                                                                0.0s
 => CACHED [ollama-webui 3/8] RUN echo                                                                                                                                                                                                                    0.0s
 => CACHED [ollama-webui 4/8] RUN echo                                                                                                                                                                                                                    0.0s
 => CACHED [ollama-webui 5/8] COPY package.json package-lock.json ./                                                                                                                                                                                      0.0s
 => ERROR [ollama-webui 6/8] RUN npm ci                                                                                                                                                                                                                  74.5s
------
 > [ollama-webui 6/8] RUN npm ci:
74.43 npm notice
74.43 npm notice New patch version of npm available! 10.2.0 -> 10.2.1
74.43 npm notice Changelog: <https://github.com/npm/cli/releases/tag/v10.2.1>
74.43 npm notice Run `npm install -g npm@10.2.1` to update!
74.43 npm notice
74.43 npm ERR! code E500
74.43 npm ERR! 500 Internal Server Error - GET https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-6.7.4.tgz - KV GET failed: 401 Unauthorized
74.43
74.43 npm ERR! A complete log of this run can be found in: /root/.npm/_logs/2023-10-30T20_12_35_508Z-debug-0.log
------
failed to solve: process "/bin/sh -c npm ci" did not complete successfully: exit code: 1

Expected behavior
Build and run the webui


Desktop (please complete the following information):

  • OS: macOS 14.1
  • Browser: Firefox
  • Version: 121

UI - model list dropdown darkmode display issue

When using dark mode, the dropdown menu is displayed in white with barely legible grey text. This occurs on my Windows 10 desktop for Chrome, Brave, and Mozilla Firefox browsers. This also occurs on my Linux Mint laptop for Brave but not for Firefox. Screenshot attached. All browsers are using the most up-to-date versions. I am accessing the web UI remotely from an Ubuntu Linux server (22.04.3) running the ollama-webui Docker container.

(screenshot)

Can't find image

Hi All

I wanted to try it with the command docker run -d -p 3000:8080 --name ollama-webui:latest --restart always ollama-webui
but then this happens

Unable to find image 'ollama-webui:latest' locally
docker: Error response from daemon: pull access denied for ollama-webui, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'

Usage in Kubernetes

Hello,

I'm trying to use ollama-webui in Kubernetes but I can't figure how to make it work.

When I'm on the webui, I can list the models and launch a conversation, but once I send a question, I don't receive any answer and the backend (ollama server pod) does not seem to consume any CPU.

I don't know if it's related, but there is no Kubernetes equivalent of 'host.docker.internal', so maybe the problem lies here?

Besides that, I know that I've correctly configured my OLLAMA_ENDPOINT env variable, because if I change it to a bad value I cannot list the models.

Thanks for your help.
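Not an authoritative answer, but in Kubernetes the usual substitute for host.docker.internal is the Ollama Service's cluster DNS name. Assuming a Service named ollama in the same namespace and a Deployment named ollama-webui (both assumptions), something like:

kubectl set env deployment/ollama-webui OLLAMA_ENDPOINT=http://ollama:11434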

feat: light mode support

Is your feature request related to a problem? Please describe.
Currently, the UI only supports a default dark mode, which can be limiting for users who prefer a lighter interface or need to accommodate different lighting conditions. This feature request addresses the need for a more versatile user experience.

Describe the solution you'd like
I would like to request the addition of a light mode to the UI. This mode should provide a visually appealing, well-contrasted, and user-friendly interface with lighter color schemes. Users should be able to switch between dark and light modes based on their preferences.

Describe alternatives you've considered
One alternative to adding a light mode could be allowing users to customize the color scheme themselves. However, this could be more complex to implement and might lead to inconsistencies in the UI. Another alternative could be providing predefined themes, but it might not cater to all user preferences.

Additional context
As technology advances, users have come to expect more control over their user interface, including the ability to choose between light and dark modes. Adding a light mode to the UI will enhance user accessibility and satisfaction. It's also in line with current UI/UX best practices and can be a valuable addition for users who prefer a lighter, more vibrant design.

docker fetch failed: CERT_HAS_EXPIRED

Describe the bug
Running the webui, the model drop-down is empty. Docker logs indicate a TypeError: fetch failed, code: 'CERT_HAS_EXPIRED'.

To Reproduce
Running the docker with ollama server on a different machine.

docker run -d -p 3000:3000 --add-host=host.docker.internal:host-gateway -e OLLAMA_ENDPOINT="http://myserver.org" --name ollama-webui --restart always ollama-webui

Connected to localhost:3000; the UI appears as expected. The models drop-down box is empty.
docker logs:

Listening on 0.0.0.0:3000
http://myserver.org
TypeError: fetch failed
    at fetch (file:///app/build/shims.js:20346:13)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async load (file:///app/build/server/chunks/2-ee0c419c.js:5:18)
    at async load_server_data (file:///app/build/server/index.js:1930:18)
    at async file:///app/build/server/index.js:3301:18 {
  cause: Error: certificate has expired
      at TLSSocket.onConnectSecure (node:_tls_wrap:1669:34)
      at TLSSocket.emit (node:events:515:28)
      at TLSSocket._finishInit (node:_tls_wrap:1080:8)
      at ssl.onhandshakedone (node:_tls_wrap:866:12) {
    code: 'CERT_HAS_EXPIRED'
  }
}

Ollama functions fine remotely with a langchain python program.
Not sure if this requires a code change or documentation change.
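As a diagnostic-only, insecure workaround (assuming the failure comes from an expired certificate on the remote endpoint, and that the Node fetch shim honors the variable — both assumptions), TLS verification can be disabled to confirm the cause:

docker run -d -p 3000:3000 --add-host=host.docker.internal:host-gateway -e OLLAMA_ENDPOINT="http://myserver.org" -e NODE_TLS_REJECT_UNAUTHORIZED=0 --name ollama-webui --restart always ollama-webui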
