bavarder / bavarder

Chit-chat with an AI

Home Page: https://bavarder.codeberg.page

License: GNU General Public License v3.0

Meson 6.62% CSS 1.26% Python 89.87% Shell 0.61% Nix 1.64%
chat gnome gpt gpt-3 gpt-35-turbo libadwaita linux openai pygobject

bavarder's Introduction

Bavarder

Chit-chat with an AI

Download on Flathub



Please do not theme this app

Preview

Usage

Documentation is available here

Installation

Flatpak

You can either open GNOME Software and search for "Bavarder", or run

flatpak install io.github.Bavarder.Bavarder

Latest

You can download a flatpak from the latest commit here. Run

curl -s -o bavarder.flatpak https://codeberg.org/api/packages/Bavarder/generic/Bavarder/164/bavarder.flatpak && flatpak install --user bavarder.flatpak -y 

From Source

Flatpak-builder

Clone the repo and run flatpak-builder

git clone https://codeberg.org/Bavarder/Bavarder # or https://github.com/Bavarder/Bavarder
cd Bavarder
flatpak-builder --install --user --force-clean repo/ build-aux/flatpak/io.github.Bavarder.Bavarder.json

Meson

git clone https://codeberg.org/Bavarder/Bavarder # or https://github.com/Bavarder/Bavarder
cd Bavarder
meson setup build # Configure the build environment in subdirectory 'build'
meson compile -C build
meson test -C build # Run the test suite
meson install -C build
chmod 0755 /usr/local/bin/bavarder # Fix binary permissions

Others

You can see more install methods on the website

Contribute

The GNOME Code of Conduct is applicable to this project

See SEEN.md for a list of articles and posts about Bavarder

Translate


You can translate Bavarder using Codeberg Translate

Mirrors

About the name

Bavarder is a French word; Larousse defines it as "Parler abondamment de choses sans grande portée" (talking at length about things of little importance), which translates to chit-chat (informal conversation about matters that are not important). For non-French speakers, Bavarder can be hard to pronounce: it is pronounced [bavaʀde]. Hear it here

See also

A tool for generating pictures with AI (GNOME app)

bavarder's People

Contributors

0xmrtt, albanobattistella, anatoly136ua, daudix, elgandoz, galegovski, ingrownmink4, ioghjog, kbdharun, keyiflerolsun, ktaf, maymage, mumu-lhl, nikrtyd, oktay454, osiixy, peterdavehello, phaerrax, ppybasic, rene-coty, sjdonado, soumyadghosh, thepoladov13, vabame, vistaus, weblate, xosecalvo, yakushabb, yangyangdaji, yuttct


bavarder's Issues

[Feature request] Add ability to open a log and log all prompts/responses to it.

Is your feature request related to a problem? Please describe.
When using Bavarder for research, continually having to cut and paste responses into a text file is a pain and makes the task disjointed.
Describe the solution you'd like
Be able to have a start session button, with the ability to set an associated file path, and have all interactions logged to that file. Clearly display the path at the top when logging is active.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.
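A minimal sketch of how such session logging could work (a hypothetical helper, not part of Bavarder; the log path and the JSON Lines format are assumptions):

import json
import time
from pathlib import Path

LOG_PATH = Path.home() / "bavarder-session.jsonl"  # example path; in the app this would be user-chosen

def log_interaction(prompt, response, path=LOG_PATH):
    """Append one prompt/response pair to the session log as a JSON line."""
    entry = {
        "time": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "prompt": prompt,
        "response": response,
    }
    with path.open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry, ensure_ascii=False) + "\n")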

Text invisible in output

Describe the bug
Unable to see the output text after a request. The output can be copied, but the text itself is invisible in the window. The problem occurs regardless of the provider.
To Reproduce
Steps to reproduce the behavior:

  1. Input question
  2. Send question
  3. No text in output window

Expected behavior
Visible text in output window

Screenshots
If applicable, add screenshots to help explain your problem.
image

Environment
Please paste the content of About > Troubleshooting > Debugging Information

io.github.Bavarder.Bavarder 0.2.0
Environment: pop:GNOME
Gtk: 4.10.3
Python: 3.10.6
OS: Linux 6.2.6-76060206-generic #202303130630168132977822.04~d824cd4 SMP PREEMPT_DYNAMIC Wed A
Providers: ['baichat', 'catgpt', 'huggingchat', 'openaigpt35turbo', 'starcoder']

ROADMAP

  • Add support for formatting
    • Code
    • Table
  • Add multi chat ui
  • Add ability to switch backends
    • Add GPT-3 - OpenAI - Require API key
    • Add GPT-4 - OpenAI
    • Add Hugging Chat
  • Add delta feature (if the backend supports it)
  • Add Text-To-Speech
  • Add Speech-To-Text
  • Add file support (#12)

AttributeError when using DialoGPT

Exception in thread Thread-8 (thread_run):
Traceback (most recent call last):
File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
self.run()
File "/usr/share/bavarder/bavarder/main.py", line 67, in __run
self.__run_backup()
File "/usr/lib/python3.11/threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "/usr/share/bavarder/bavarder/main.py", line 1160, in thread_run
response = self.providers[self.provider].ask(self.prompt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/share/bavarder/bavarder/provider/hfdialogpt.py", line 30, in ask
if self.authorization:
^^^^^^^^^^^^^^^^^^
AttributeError: 'HuggingFaceDialoGPTLargeProvider' object has no attribute 'authorization'

To Reproduce
Steps to reproduce the behavior:

  1. Select DialoGPT from providers
  2. Send Any Message
io.github.Bavarder.Bavarder 0.2.3
Environment: KDE
Gtk: 4.10.3
Python: 3.11.3
OS: Linux 6.3.5-zen1-1-zen #1 ZEN SMP PREEMPT_DYNAMIC Tue, 30 May 2023 13:43:47 +0000
Providers: ['alpacalora', 'baichat', 'catgpt', 'hfdialogpt', 'hfgoogleflant5xxl', 'hfgoogleflanu12', 'hfgpt2', 'hfgpt2large', 'hfopenassistantsft1pythia12b', 'huggingchat', 'openaigpt35turbo', 'local', 'openaicustom', 'openaigpt4', 'openaitextdavinci003']
Use Theme: True
Use Text View: True
Clear After Send: False
Close All Without Dialog: False
Current Provider: hfdialogpt

[Bug] Ctrl+backspace clears entire input box

Describe the bug
Pressing ctrl+backspace clears the whole input box, not just the word to the left of the cursor.

To Reproduce
Steps to reproduce the behavior:

  1. Click on input field
  2. Type 'Foo Bar Baz'
  3. Press Ctrl+Backspace
  4. The entire input box gets cleared.

Expected behavior
'Baz' gets deleted, so we are left with 'Foo Bar ' with the cursor at the end.

Environment

io.github.Bavarder.Bavarder 0.1.7
Environment: GNOME
Gtk: 4.10.3
Python: 3.10.6
OS: Linux 6.2.14-300.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon May  1 00:55:28 UTC 2023
Providers: ['baichat', 'huggingchat']

Additional context

Preference window does not seem to save the preferences

I am trying the local model option and providing the local OpenAI server endpoint, but it does not save the preferences (it doesn't even require a model name or an API key) and complains about providers.

Instead of using GPT4All with outdated formats like ggml.bin, the best approach is to use an OpenAI-compatible endpoint (servers can be launched through Ollama, llama-cpp, or any other OpenAI-compatible server) on /v1/chat/completions, using the openai Python library (only the base_url parameter needs changing). The server then handles the chat format instead of the client (Bavarder in this case), and this works with any endpoint, not just local ones.
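A minimal sketch of that approach using the openai Python library against a local OpenAI-compatible server (the URL, model name, and key below are placeholders, not Bavarder's actual configuration):

from openai import OpenAI  # openai >= 1.0

# Any OpenAI-compatible server works: llama-cpp-python, Ollama, a remote host, ...
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

reply = client.chat.completions.create(
    model="local-model",  # whatever model name the server exposes
    messages=[{"role": "user", "content": "Hello!"}],
)
print(reply.choices[0].message.content)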

I could just fork your repo but I am not comfortable with GTK and libadwaita, so I am opening an issue here. :)

Hugging Chat Infinite loading spinner

Environment: mobile L5 phone, Wayland, GTK, GNOME 3.38, Bavarder v0.1.6, Flatpak install, PureOS 10.

Steps to repeat:

  1. In the reply field, enter a question or some text and submit
  2. Go back to the reply field and, on the next line, enter a question and submit
  3. Quickly keep repeating steps 1-2, roughly 6-8 times, using different text

Observe that the spinner shows constant loading and no further user interaction is possible. The app has to be closed and reopened for it to work again. Changing the chat provider doesn't clear the loading fault.

At a minimum, the app should cancel all running or unfinished processes when switching chat provider. Additionally, a failsafe would be good: if there is no communication from the chat after, say, 3 minutes, the ongoing request is cancelled. Asynchronous send/receive would also be a good feature, so that a response that takes an hour to arrive is still received while the user is doing other things.
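A rough sketch of the per-request timeout idea (a hypothetical helper; Bavarder's providers may not make HTTP requests in exactly this way):

import requests

def ask_with_timeout(url, payload, timeout=180):
    """Send a prompt to a provider endpoint and give up after `timeout` seconds."""
    try:
        response = requests.post(url, json=payload, timeout=timeout)
        response.raise_for_status()
        return response.json()
    except requests.Timeout:
        return {"error": "The provider did not answer in time; the request was cancelled."}
    except requests.RequestException as exc:
        return {"error": f"Request failed: {exc}"}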

Screenshot:
bavarder

Debug
{'baichat': '{}', 'catgpt': '{}', 'huggingchat': '{}', 'openaigpt35turbo': '{"api_key": null}', 'openaigpt4': '{"api_key": null}'} ['baichat', 'catgpt', 'huggingchat', 'openaigpt35turbo', 'openaigpt4'] Loading provider baichat Loading provider catgpt Loading provider huggingchat Loading provider openaigpt35turbo Loading provider openaigpt4 {'baichat': '{}', 'catgpt': '{}', 'huggingchat': '{}', 'openaigpt35turbo': '{"api_key": null}', 'openaigpt4': '{"api_key": null}'} {'baichat': '{}', 'catgpt': '{}', 'huggingchat': '{}', 'openaigpt35turbo': '{"api_key": null}', 'openaigpt4': '{"api_key": null}'} {'baichat': '{}', 'catgpt': '{}', 'huggingchat': '{}', 'openaigpt35turbo': '{"api_key": null}', 'openaigpt4': '{"api_key": null}'} {'baichat': '{}', 'catgpt': '{}', 'huggingchat': '{}', 'openaigpt35turbo': '{"api_key": null}', 'openaigpt4': '{"api_key": null}'} {'baichat': '{}', 'catgpt': '{}', 'huggingchat': '{}', 'openaigpt35turbo': '{"api_key": null}', 'openaigpt4': '{"api_key": null}'} huggingchat Setting selected provider to 2 {0: <bavarder.provider.baichat.BAIChatProvider object at 0xffffb0043880>, 1: <bavarder.provider.catgpt.CatGPTProvider object at 0xffffa5ab30a0>, 2: <bavarder.provider.huggingchat.HuggingChatProvider object at 0xffffa5ab30d0>, 3: <bavarder.provider.openaigpt35turbo.OpenAIGPT35TurboProvider object at 0xffffa5ab3d60>, 4: <bavarder.provider.openaigpt4.OpenAIGPT4Provider object at 0xffffa5ab3dc0>} Saving providers data... {'baichat': '{}', 'catgpt': '{}', 'huggingchat': '{}', 'openaigpt35turbo': '{"api_key": null}', 'openaigpt4': '{"api_key": null}'}

Add support for image generation.

Since Bavarder now has a better UI, like a chatting app, it would be possible to add image generation alongside text conversation. Imaginer and Bavarder could then be merged into one app. That would be great.

GPT 3.5 Turbo forgets the API key

Describe the bug
The API key for GPT 3.5 Turbo is lost when you close the app.

To Reproduce
Steps to reproduce the behavior:

  1. Select GPT 3.5 Turbo and enter an API key
  2. Close the app
  3. Open the app
  4. The previously entered API key is gone

Expected behavior
Remember the API key.

Desktop (please complete the following information):

  • OS: Fedora 38
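A minimal sketch of how a provider key could be persisted between runs (illustrative only; the config path is an assumption and this is not how Bavarder actually stores settings):

import json
from pathlib import Path

CONFIG = Path.home() / ".config" / "bavarder-example.json"  # illustrative path only

def save_api_key(provider, api_key):
    """Write the key to disk so it survives closing the app."""
    data = json.loads(CONFIG.read_text()) if CONFIG.exists() else {}
    data.setdefault(provider, {})["api_key"] = api_key
    CONFIG.parent.mkdir(parents=True, exist_ok=True)
    CONFIG.write_text(json.dumps(data, indent=2))

def load_api_key(provider):
    """Read the key back on startup, or None if it was never saved."""
    if CONFIG.exists():
        return json.loads(CONFIG.read_text()).get(provider, {}).get("api_key")
    return None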

Crashes on opening settings

Describe the bug
I am using the Flatpak version. When I try to open the settings window, the app gets stuck, a "not responding" popup appears, and then the app crashes.
To Reproduce
Steps to reproduce the behavior:

  1. Click on 'Hamburger menu'
  2. Click on 'Settings'

Do we need a special API key from a non-free account for using Chat GPT4 in Bavarder?

Describe the bug
I cannot use ChatGPT 4 in Bavarder with the basic API key from my free OpenAI account.
Is this the expected behavior for Bavarder?
Do we need a special API key from a paid $20/month ChatGPT Plus account?
If this is the case, then this should be stated for instance in the section Preferences/Providers/OpenAI GPT 4 (below entering the API key).

Suggestion of modification
You could, for instance, add a question mark symbol to the left of the edit API key icon (the pen).

Only three lines for the answer

The application only gives incomplete answers, two or three lines long.

On Linux Mint, installed from the repository (Applications / Software Manager)

Screen

Support Ollama's API for the local provider option

Is your feature request related to a problem? Please describe.
No.

Describe the solution you'd like
I would like to connect to my local Ollama API to interface with the models I have downloaded.

Describe alternatives you've considered
I use oterm TUI at the moment, but it would be nice to have a native GNOME app to interface with Ollama.
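A small sketch of talking to a locally running Ollama server (assuming Ollama's default address and its /api/chat endpoint; this is not an existing Bavarder provider):

import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local address

def ask_ollama(model, prompt):
    """Send one user message to a local Ollama model and return its reply text."""
    payload = {
        "model": model,  # e.g. a model previously pulled with `ollama pull`
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    response = requests.post(OLLAMA_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["message"]["content"]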

Llama2 support

Llama 2 is a new model developed by Meta (Facebook) and released in partnership with Microsoft. The project could benefit from adding support for it. It wouldn't be hard, given the open and accessible nature of the model.

Bavarder flatpak is not working

Describe the bug
The Flatpak of Bavarder doesn't work and doesn't start up at all on Ubuntu 23.04
To Reproduce
Steps to reproduce the behavior:

  1. Go to Flathub
  2. Click on Bavarder
  3. Install it & Run it
  4. Nothing, only says Force Quit

Expected behavior
It should have worked perfectly

Environment
Cannot access it

Additional context
I am using X11 with Nvidia on Ubuntu 23.04. Seeing this, I created a snap for this app, which works perfectly and is built on par with the Flatpak. Would you like me to publish it in the store on your behalf? I will maintain it anyway.

Screenshots of the snap

image

[Feature Request] Ability to resize the input and output boxes

Is your feature request related to a problem? Please describe.
My input is often significantly shorter than my output, usually just being a simple question. This leads to a large amount of dead space that could instead be taken up by displaying more output rather than having to scroll to read the rest of the response:
image

Describe the solution you'd like
I'd suggest having the ability to click on the space between the input and output boxes and drag up and down to resize the boxes. This way, you can increase or decrease the size of the boxes when necessary. Here is a rough example showing what I mean:

bavrs

Add a button to stop generating

Problem

Hugging chat tends to give very lengthy answers, but there is no way to stop it and choose another model. The only way is to close the application.

Possible solution

A button to stop receiving the answer.

If stopping is a problem because it could let users send too many requests, you could block the user from sending further requests until the answer is finished, but let them switch models and try again if they are not happy with the answer.
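A rough sketch of how a stop button could interrupt generation, assuming a hypothetical streaming provider interface (not Bavarder's actual code):

import threading

cancel_event = threading.Event()

def generate(provider, prompt):
    """Collect streamed chunks, stopping as soon as the Stop button sets the event."""
    parts = []
    for chunk in provider.stream(prompt):  # hypothetical streaming interface
        if cancel_event.is_set():
            break
        parts.append(chunk)
    return "".join(parts)

# The Stop button's callback would simply call cancel_event.set(),
# and cancel_event.clear() would reset it before the next request.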

Hugging Chat doesn't work in the Flathub version

Can't provide a screenshot yet.

When I switch to Hugging Chat and press send, it spins forever.

Maybe it's because Hugging Face switched to another model.
I hope I posted the issue in the right place.
Anyway, the OS is Linux Mint 21.1 Vera.

Incomplete Answer for Long-Answer Questions

Bavarder outputs only a part of the answer.

To Reproduce
Ask any question that requires a long answer in the prompt.

Expected behavior
A complete answer is produced.

Environment
OS: Manjaro Linux
OS Version: 23.0.1 Uranos
Installed from Flatpak. io.github.Bavarder.Bavarder | Version: 0.2.4

Used Provider
Open-Assistant SFT-1 12B Model

Screenshots
Screenshot from 2023-09-11 14-27-24


Screenshot from 2023-09-11 14-29-13

Terminal Output

$ io.github.Bavarder.Bavarder                                                                                               [14:30:25]
Linux Torvalds is the founder and lead developer of the Linux kernel. He is also the founder

Can I use Ollama?

Hello, I have installed Ollama on my device and downloaded Gemma 2B.
I can use it through the terminal, but I want to know whether I can use it with your app.

Use OpenAI account for ChatGPT free

Description

I have a free account in OpenAI for ChatGPT. Is it possible to use that account with the provided ChatGPT (free), but from within Bavarder?

The default free Providers in Bavarder really are not great, compared to the (free) ChatGPT.

Describe Solution

To be able to use our credentials with the free ChatGPT provided by OpenAI.

Describe Alternatives You've Considered

Login and use ChatGPT online 😉

Additional Context

N/A

Add support for Gemma AI.

Hello. Please add support for Google's open source Gemma AI, which was launched this week.
It has 2 models, 2b and 7b.
Both are great, and the 2B version can easily run on both mobile and desktop using the device's local hardware, without an internet connection.
With it downloaded natively, this app could be a better AI assistant for mobile too, couldn't it?

All* providers generate short and/or generally unhelpful answers

* I have not tested GPT 3.5 or GPT 4, as they require an API key. I have tested all other providers, as shown in the screenshots below

Describe the bug
Answers generated by providers are either very short, aren't relevant to the question, go on nonsense tangents, or all of the above

To Reproduce
Steps to reproduce the behavior:

  1. Attempt to use the program's chat feature

Expected behavior
The responses should be helpful and relevant to the question

Screenshots
image
image
image

Environment

io.github.Bavarder.Bavarder 1.0.0
Environment: GNOME
Gtk: 4.12.1
Python: 3.11.5
OS: Linux 6.5.0-1-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.5.3-1 (2023-09-13)

Open-Assistant SFT-1 12B responses are cut short

Describe the bug
Every response from Open-Assistant (the default provider option) is cut way too short to be even slightly useful. Sentences seem to be cut off mid-sentence.

To Reproduce
Steps to reproduce the behavior:

  1. Choose Open-Assistant as the provider
  2. Attempt to ask a question

Expected behavior
The length of the answer should be long enough to allow the question to actually be answered

Screenshots
image
image

Environment
io.github.Bavarder.Bavarder 0.2.4
Environment: GNOME
Gtk: 4.10.5
Python: 3.10.13
OS: Linux 6.4.0-4-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.4.13-1 (2023-08-31)
Providers: ['baichat', 'catgpt', 'hfopenassistantsft1pythia12b', 'openaigpt35turbo']
Use Theme: False
Use Text View: False
Clear After Send: False
Close All Without Dialog: False
Current Provider: hfopenassistantsft1pythia12b

Unreadable answer with some gtk themes

Describe the bug
When using some color themes, the answer is not readable because the background is the same color as the font.

To Reproduce

Steps to reproduce the behavior:

  1. Have some color theming on your gtk, for example using gradience
  2. Type something and wait for answer to appear
  3. You can't see anything

Expected behavior
You should be able to see the answer. The answer background should probably be the same as the question background.

Screenshots
image

Environment
io.github.Bavarder.Bavarder 0.2.3
Environment: GNOME
Gtk: 4.10.3
Python: 3.10.6
OS: Linux 6.2.15-300.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Thu May 11 17:37:39 UTC 2023
Providers: ['baichat', 'catgpt', 'hfopenassistantsft1pythia12b', 'openaigpt35turbo']
Use Theme: True
Use Text View: False
Clear After Send: False
Close All Without Dialog: False
Current Provider: hfopenassistantsft1pythia12b

Chunked messages

Is your feature request related to a problem? Please describe.
As a ChatGPT user, I am frustrated by OpenAI's message size limit when pasting code. Handling this would be a good feature for your app.

Describe the solution you'd like
It would be better if any such messages were automagically chunked into a number of messages, with appropriate prompts to prevent them eliciting a response until the question is asked. The user should be warned that their message has been chunked and that additional prompts have been prepended as loading too much code at once affects the quality of the answers.

Describe alternatives you've considered
There is no other way around the message size limit, as it is governed by the API token limits. I usually have to do this manually.
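A minimal sketch of such chunking (character-based for simplicity; a real implementation would count tokens and prepend the warning prompts described above):

def chunk_message(text, limit=3000):
    """Split a long message into pieces of at most `limit` characters,
    breaking on line boundaries (lines longer than the limit are kept whole)."""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > limit:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks

# Each chunk could then be sent prefixed with something like
# "Part i of n - please wait for all parts before answering."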

'Clear prompt after send' parameter doesn't work

Describe the bug
'Clear prompt after send' parameter doesn't erase the prompt

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'Preferences'
  2. Click on ''Clear prompt after send'
  3. Close 'Preferences'
  4. Send a prompt and wait for an answer
  5. Answer is received but prompt isn't cleared
  6. Go to 'Preferences': the setting is back to its initial state

Expected behavior
The sent prompt should be cleared and the setting should have saved its modified state

Screenshots
Capture vidéo du 2023-05-08 09-51-34.webm

Environment

io.github.Bavarder.Bavarder 0.1.7
Environment: GNOME
Gtk: 4.10.3
Python: 3.10.6
OS: Linux 6.2.14-300.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon May 1 00:55:28 UTC 2023
Providers: ['baichat', 'catgpt', 'huggingchat', 'openaigpt35turbo']

[Question] How can I add HuggingChat in the new update?

Is your feature request related to a problem? Please describe.
No, it is a question: before the update there was a HuggingChat option where you could enter the API key. How can I get that back? Thank you.

P.S. Sorry for using this template; I just wanted to ask that question.

Missing the section "Report an issue" in the "About Bavarder" which should directly open https://github.com/Bavarder/Bavarder/issues

Describe the bug
Currently the "About Bavarder" dialog is missing the classic "Report an issue" section.
To report an issue, we have to click on Credits -> then on 0xMRTT (which opens https://github.com/0xMRTT) -> then browse 0xMRTT's (your) webpage to discover the link to the GitHub page for Bavarder, click on it -> then click on the Issues tab.
This is not very intuitive, a much longer process, and not in line with most GNOME apps.

Expected behavior
Click on "About Bavarder" -> then click "Report an issue" (which should directly open https://github.com/Bavarder/Bavarder/issues)

Thanks, and keep up the good work

AUR Package is broken.

Describe the bug
The AUR package is broken: the git version doesn't even install, and the normal version installs requiring root permissions to run and with no .desktop file.

To Reproduce
Steps to reproduce the behavior:

  1. Install Bavarder with your favorite AUR helper.
  2. Try to run it: there is no .desktop file, so you can't run it from the GUI, and from the terminal you get this error: bash: /usr/bin/bavarder: Permission denied

Expected behavior
It should add a desktop file to the applications list and open without root permissions.

Environment
I opened the app from the terminal with sudo:

io.github.Bavarder.Bavarder 0.1.7
Environment: GNOME
Gtk: 4.10.3
Python: 3.11.3
OS: Linux 6.3.1-zen1-1-zen #1 ZEN SMP PREEMPT_DYNAMIC Mon, 01 May 2023 17:42:12 +0000
Providers: ['baichat', 'catgpt', 'huggingchat', 'openaigpt35turbo']

All GPT 3.5 Turbo and GPT 4 queries receive "You don't have access to this model, please check your plan and billing details."

Describe the bug
Whenever I send a query to one of these models (default URL) using my API key which works with other applications (https://github.com/kharvd/gpt-cli), I receive the message "You don't have access to this model, please check your plan and billing details."

To Reproduce
Steps to reproduce the behavior:

  1. Upgrade to the OpenAI pay-as-you-go plan
  2. Generate an API key
  3. Enter it into "API key" field for GPT 3.5 Turbo and GPT 4 models
  4. Switch chat to one of the two models
  5. Make a query

Expected behavior
A response to the query

Screenshots
image

Environment
Please post the content of About > Troubleshooting > Debugging Information

io.github.Bavarder.Bavarder 1.0.0
Environment: GNOME
Gtk: 4.12.1
Python: 3.11.5
OS: Linux 6.4.15-200.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Sep 7 00:25:01 UTC 2023

Additional context
When I last tried the app months ago with a different API key, I received a message that my quota had been exceeded whenever I made a prompt, even though the OpenAI webpage said I had barely used anything.
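One way to check what a key can actually reach is to list the models it has access to with the openai library (a quick diagnostic sketch, unrelated to Bavarder itself):

from openai import OpenAI  # openai >= 1.0

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# gpt-4 only shows up once the account has been granted access,
# which typically requires billing details on a pay-as-you-go plan.
available = [m.id for m in client.models.list()]
print("gpt-3.5-turbo:", any(m.startswith("gpt-3.5-turbo") for m in available))
print("gpt-4:", any(m.startswith("gpt-4") for m in available))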

"Local" provider forgets input after closing preference dialog

Describe the bug
My input into the API URL and model name fields is not saved when using the Local provider.

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'Preferences'
  2. Click on 'Local' to show more input fields.
  3. Add custom data to API URL and Model text fields, press the blue confirm-button (or enter).
  4. Enable the Local provider.
  5. Close the Preference dialog.
  6. Open it again and the custom data is replaced by the default OpenAI URL.

Expected behavior
That my URL and Model input is saved.

Environment
Please post the content of About > Troubleshooting > Debugging Information

io.github.Bavarder.Bavarder 1.0.0
Environment: GNOME
Gtk: 4.12.1
Python: 3.11.5
OS: Linux 6.4.15-200.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Sep 7 00:25:01 UTC 2023

Additional context
I also tried the new built-in model downloader, but there is no way to select them as a provider.

app stuck and acting weirdly on ChromeOS's Debian

  • installed via flatpak

  • with a free model it gets stuck with a wheel turning:
    Screenshot 2023-09-07 09 43 27

  • with an API key provided for ChatGPT or GPT-4, it acts weird, no idea why:
    Screenshot 2023-09-07 09 43 46

I am pledging a $2 donation if/when it gets fixed.

[Feature Request] Switch for choosing shortcut for sending prompt (Ctrl+Enter/Enter)

Is your feature request related to a problem? Please describe.
When typing in a chat window, pressing enter on the keyboard usually sends the message. This saves on the time spent moving the hand from keyboard to mouse and back again to click the send button.

Describe the solution you'd like
When typing a prompt, hitting enter sends the prompt.

Describe alternatives you've considered
Moving the mouse and clicking the button is an option.
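A sketch of such a switch using a GTK 4 key controller in PyGObject (the function and its parameters are hypothetical, not Bavarder's actual code):

import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gdk, Gtk

def attach_send_shortcut(text_view, send_callback, require_ctrl=False):
    """Send on Enter, or on Ctrl+Enter when require_ctrl is True;
    any other Enter press falls through and inserts a newline as usual."""
    controller = Gtk.EventControllerKey()

    def on_key_pressed(_controller, keyval, _keycode, state):
        if keyval in (Gdk.KEY_Return, Gdk.KEY_KP_Enter):
            ctrl_held = bool(state & Gdk.ModifierType.CONTROL_MASK)
            if ctrl_held == require_ctrl:
                send_callback()
                return True   # handled: do not insert a newline
        return False          # let the text view handle the key normally

    controller.connect("key-pressed", on_key_pressed)
    text_view.add_controller(controller)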

Hangs when clicking on the 'Options' menu item.

When I click on the 'Options' item, it hangs for a few seconds, after which the options dialog opens. Sometimes it offers to force close the application.

output.webm

Bavarder v1.0.0 | Fedora Workstation 38 | GNOME 44.5 | Flatpak (GNOME Application Platform version 45)

Also, the 'Options' menu item is duplicated: it is in both the hamburger menu and the three-dots menu, though maybe that's the way it's designed.

Providers

There is currently only

  • baichat
  • catgpt
  • huggingchat
  • openaigpt35turbo
    enabled by default.

But there is

  • alpacalora
  • baichat
  • bard (disabled for now)
  • catgpt
  • hfdialogpt
  • hfgoogleflant5xxl
  • hfgoogleflanu12
  • hfgpt2 (large, normal and xl)
  • hfopenassistantsft1pythia12b
  • huggingchat
  • openaigpt35turbo
  • openaigpt4 (was default but removed in 0.1.7)
  • openaitextdavinci003
  • starcoder
    available.

The list of available providers is too big, so I would like to ask which providers you want to see enabled by default.

Adding a new provider

Using 0.1.7, open the preferences and select the providers you want

ROADMAP

  • Add a switch in pref for enabling/disabling a provider

Answers from "Open-Assistant SFT-1 12B Model" provider are truncated

Describe the bug
When you choose the "Open-Assistant SFT-1 12B Model", and ask a question, the answer is truncated at ~100 characters

To Reproduce
Steps to reproduce the behavior:

  1. Select "Open-Assistant SFT-1 12B Model" or "Open-Assistant SFT-1 12B Model (HuggingChat)" provider
  2. Type a question (that would expect a non-trivial answer)
  3. See the answer is truncated

Expected behavior
Complete answer

Screenshots
image
image

Environment
io.github.Bavarder.Bavarder 0.2.3
Environment: ubuntu:GNOME
Gtk: 4.10.3
Python: 3.10.6
OS: Linux 5.15.0-73-generic #80-Ubuntu SMP Mon May 15 15:18:26 UTC 2023
Providers: ['baichat', 'hfopenassistantsft1pythia12b', 'openaigpt35turbo', 'huggingchat']
Use Theme: False
Use Text View: False
Clear After Send: False
Close All Without Dialog: False
Current Provider: hfopenassistantsft1pythia12b

Additional context
This issue does not appear with "BAI Chat" provider for example

App crashes when internet connection is lost

Really great app!

I have a problem: my flaky wifi sometimes just loses connectivity.

Instead of running a connectivity test against the AI server in use, or something similar, the app just freezes.

The app doesn't even crash; I need to manually force it to close, e.g. via pkill.
Proposed solution

Integrate some kind of internet connectivity check, like pinging the AI server every 5 seconds. If it's negative, cancel the current process and display a message.
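A rough sketch of such a watchdog (the URL and callback are placeholders; this is one possible approach, not the app's implementation):

import threading
import time

import requests

def watch_connectivity(url, on_lost, interval=5):
    """Ping `url` every `interval` seconds in the background;
    call `on_lost` once and stop as soon as it stops answering."""
    def loop():
        while True:
            try:
                requests.head(url, timeout=interval)
            except requests.RequestException:
                on_lost()  # e.g. cancel the current request and show a message
                return
            time.sleep(interval)

    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread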

Opened by @firefoxlover in flathub/io.github.Bavarder.Bavarder#11
