
jakobhoeg / nextjs-ollama-llm-ui

372 stars · 7 watchers · 65 forks · 5.87 MB

Fully-featured, beautiful web interface for Ollama LLMs - built with NextJS. Deploy with a single click.

Home Page: https://nextjs-ollama-xi.vercel.app/

License: MIT License

JavaScript 0.85% TypeScript 97.26% CSS 1.89%
Topics: ai · chatbot · llm · mistral · nextjs · ollama · openai · tailwindcss · localstorage · offline

nextjs-ollama-llm-ui's Introduction

Fully-featured & beautiful web interface for Ollama LLMs


Get up and running with Large Language Models quickly, locally and even offline. This project aims to be the easiest way for you to get started with LLMs. No tedious and annoying setup required!

Features ✨

  • Beautiful & intuitive UI: Inspired by ChatGPT, to enhance similarity in the user experience.
  • Fully local: Stores chats in localstorage for convenience. No need to run a database.
  • Fully responsive: Use your phone to chat, with the same ease as on desktop.
  • Easy setup: No tedious and annoying setup required. Just clone the repo and you're good to go!
  • Code syntax highlighting: Messages that include code are syntax-highlighted for easy reading.
  • Copy code blocks easily: Copy the highlighted code with one click.
  • Download/Pull & Delete models: Easily download and delete models directly from the interface.
  • Switch between models: Switch between models quickly with a single click.
  • Chat history: Chats are saved and easily accessed.
  • Light & Dark mode: Switch between light & dark mode.

Preview

(Video demo: ollama-Original.MOV)

Requisites ⚙️

To use the web interface, the following requirements must be met:

  1. Download Ollama and have it running, or run it in a Docker container (see the example below). Check the docs for instructions.
  2. Node.js (18.17+) and npm are required. Download
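
If you'd rather run Ollama in Docker, a typical invocation looks like this (a sketch based on the official ollama/ollama image; check the Ollama docs for current flags):

# persist models in a named volume and expose the default API port
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama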

Deploy your own to Vercel or Netlify in one click ✨

Deploy with Vercel Deploy to Netlify Button

You'll need to set the OLLAMA_ORIGINS environment variable on the machine running Ollama, so it accepts requests from your deployed app's origin:

OLLAMA_ORIGINS="https://your-app.vercel.app"
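
For example, on a Linux machine where you launch the server by hand, you can export the variable before starting Ollama (a minimal sketch; if Ollama runs as a systemd service or in Docker, set the variable in that environment instead):

# allow the deployed frontend's origin to call the local Ollama API
export OLLAMA_ORIGINS="https://your-app.vercel.app"
ollama serve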

Installation to run locally 📖

To install and run a local environment of the web interface, follow the instructions below.

1. Clone the repository to a directory on your PC via the command line:

git clone https://github.com/jakobhoeg/nextjs-ollama-llm-ui

2. Open the folder:

cd nextjs-ollama-llm-ui

3. Rename .example.env to .env:

mv .example.env .env

4. If your instance of Ollama is NOT running on the default address and port, change the variable in the .env file to fit your use case:

NEXT_PUBLIC_OLLAMA_URL="http://localhost:11434"

5. Install dependencies:

npm install

6. Start the development server:

npm run dev

7. Go to http://localhost:3000 and start chatting with your favourite model!
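
To run a production build instead of the development server, the standard Next.js scripts should work (assuming the default build and start scripts in package.json):

# compile an optimized build, then serve it on port 3000
npm run build
npm start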

Upcoming features

A to-do list of planned features.

  • ⬜️ Ability to send an image in the prompt to utilize vision language models.
  • ⬜️ Ability to regenerate responses
  • ⬜️ Import and export chats
  • ⬜️ Voice input support
  • ✅ Code syntax highlighting

Tech stack

NextJS - React Framework for the Web

TailwindCSS - Utility-first CSS framework

shadcn-ui - UI components built using Radix UI and Tailwind CSS

shadcn-chat - Chat components for NextJS/React projects

Framer Motion - Motion/animation library for React

Lucide Icons - Icon library

Helpful links

Medium Article - How to launch your own ChatGPT clone for free on Google Colab. By Bartek Lewicz.


nextjs-ollama-llm-ui's Issues

bug: pull model text parsing

It appears that the pull-model field doesn't trim the input text. When I put in "dolphin-mixtral " (with a trailing space) it didn't find anything, but "dolphin-mixtral" did. I can fix this if I get time, but it's probably faster for one of the devs familiar with the project to find that part of the code.
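
As a workaround until the input is trimmed, you can pull the model directly against the Ollama API, where the exact name matters (this uses the public /api/pull endpoint and assumes the default address):

# pull a model by its exact name, with no trailing whitespace
curl http://localhost:11434/api/pull -d '{"name": "dolphin-mixtral"}'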

bug: deleting chat doesn't always work

If you try to delete a chat while that chat is not selected, it will not actually be deleted; it will just become selected. You then have to delete it again, at which point it is actually deleted since it is now selected.

feat: add inline code blocks

A lot of the time the model will emit inline code, like `example`, for things such as commands, but it seems this UI doesn't support it. It should be pretty easy to add and would make the output look much better.

No models available

Hello,

I can pull models, but nextjs-ollama-llm-ui can't detect any of them.
Ollama and ollama-webui can detect the models, but nextjs-ollama-llm-ui cannot. Could you please fix it?
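
A quick way to tell whether the problem is on the Ollama side or the UI side is to query the models endpoint directly (assuming Ollama is on its default address):

# lists locally available models as JSON
curl http://localhost:11434/api/tags

If this returns your models but the UI still shows none, the app's NEXT_PUBLIC_OLLAMA_URL probably doesn't match the address Ollama is actually bound to.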


bug: issues pulling model (CORS)

I'm running the ollama server on localhost:11434. I go to "Settings" and try to pull a model as no models are listed in the dropdown (even though tinydolphin is pulled and working on localhost). However, the app pukes on this as follows:
⨯ TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11576:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async globalThis.fetch (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:57569)
    at async POST (webpack-internal:///(rsc)/./src/app/api/model/route.ts:7:22)
    at async /home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:63809
    at async eU.execute (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:53964)
    at async eU.handle (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:65062)
    at async doRender (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1333:42)
    at async cacheEntry.responseCache.get.routeKind (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1555:28)
    at async DevServer.renderToResponseWithComponentsImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1463:28)
    at async DevServer.renderPageComponent (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1856:24)
    at async DevServer.renderToResponseImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:1894:32)
    at async DevServer.pipeImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:911:25)
    at async NextNodeServer.handleCatchallRenderRequest (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/next-server.js:271:17)
    at async DevServer.handleRequestImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/base-server.js:807:17)
    at async /home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/dev/next-dev-server.js:331:20
    at async Span.traceAsyncFn (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/trace/trace.js:151:20)
    at async DevServer.handleRequest (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/dev/next-dev-server.js:328:24)
    at async invokeRender (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/lib/router-server.js:163:21)
    at async handleRequest (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/lib/router-server.js:342:24)
    at async requestHandlerImpl (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/lib/router-server.js:366:13)
    at async Server.requestListener (/home/ubuntu/nextjs-ollama-llm-ui/node_modules/next/dist/server/lib/start-server.js:140:13) {
  cause: Error: connect ECONNREFUSED ::1:11434
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16)
      at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
    errno: -111,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '::1',
    port: 11434
  }
}
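
The connect ECONNREFUSED ::1:11434 at the bottom of the trace suggests that Node resolved localhost to the IPv6 loopback while Ollama is listening only on IPv4. A common workaround (an educated guess from the error, not a confirmed fix) is to point the app at the IPv4 loopback explicitly in .env:

# use the IPv4 loopback instead of localhost to avoid the ::1 resolution
NEXT_PUBLIC_OLLAMA_URL="http://127.0.0.1:11434"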


Scroll is not working

Hi there, thank you so much for doing this!

I downloaded and ran the app. It works well, but I've found a small problem: scrolling is not working. Within a chat, when the content is too large, you cannot scroll up to read the previous messages.

Installation Issue: Specific Node Version Required?

Following the instructions on a system with:

ubuntu@ollama:~/nextjs-ollama-llm-ui$ node -v
v12.22.9
ubuntu@ollama:~/nextjs-ollama-llm-ui$ nvm -v
0.39.3
ubuntu@ollama:~/nextjs-ollama-llm-ui$ npm -v
8.5.1

Executing npm install throws about 50 warnings like:

ubuntu@ollama:~/nextjs-ollama-llm-ui$ npm install
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE   package: '@langchain/[email protected]',
npm WARN EBADENGINE   required: { node: '>=18' },
npm WARN EBADENGINE   current: { node: 'v12.22.9', npm: '8.5.1' }
npm WARN EBADENGINE }
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE   package: '@langchain/[email protected]',
npm WARN EBADENGINE   required: { node: '>=18' },
npm WARN EBADENGINE   current: { node: 'v12.22.9', npm: '8.5.1' }
npm WARN EBADENGINE }

Is there a specific version of Node I should be using? Anyone else run into this?
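
Yes: the warnings show that the @langchain packages require node >= 18, and Next.js itself needs 18.17.0 or newer (see the issue below). Since nvm is already installed, upgrading is straightforward:

# install and switch to the Node 18 line, then verify
nvm install 18
nvm use 18
node -v   # should print v18.x.y with x >= 17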

feat: possibility to insert a carriage return at the prompt with the Enter key

Thanks for this Web UI! I like it.

When I press the [Enter] key at the prompt, the message is sent directly to Ollama. I see that it is possible to insert a carriage return with [Shift]+[Enter], but it is not natural to press this combination while composing the prompt; I often send the message before I've written all my text.

If [Enter] could add a carriage return and [Ctrl]+[Enter] send the prompt, that would be great. A checkbox to choose the behaviour of the [Enter] key would be magic ^^

Thanks a lot

How do I run it like the video demonstration on the main page?

In the README the first thing you're shown is a video demo of nextjs-ollama-llm-ui running like a standalone app, but after installation I'm prompted to run it on localhost:3000 in my web browser. I'm using Fedora 36, 64-bit. How can it be used the way it's shown in the README?

Version Requirements: 18.17+

Just a quick note that you must use Node 18.17+ for the code to work. Using 18.9, for example, fails:

You are using Node.js 18.9.1. For Next.js, Node.js version >= v18.17.0 is required.
