hacker-gpt / hackergpt-2.0

This project forked from mckaywrigley/chatbot-ui


#1 Trusted ethical hacking AI for web application hacking.

Home Page: https://chat.hackerai.co/

License: GNU General Public License v3.0

Shell 0.01% JavaScript 0.30% TypeScript 96.75% CSS 0.20% PLpgSQL 2.74%

hackergpt-2.0's Introduction

HackerGPT

HackerGPT is your indispensable digital companion in the world of hacking, specifically for web and network hacking. Crafted with the unique needs of ethical hackers in mind, this AI-powered assistant stands at the forefront of hacking knowledge and assistance. Equipped with an extensive database of hacking techniques, tools, and strategies, HackerGPT is more than just an information resource—it's an active participant in your hacking journey. Whether you're a beginner looking to learn the ropes or a seasoned professional seeking deeper insights, HackerGPT is your ally in navigating the ever-changing landscape of hacking challenges.

How does HackerGPT work?

When you submit a question, it is transmitted to our server. We first check the authenticity of the user and determine their question quota based on whether they are a free or pro user. Next, we search our database for information that closely matches the inquiry. If we find a strong match, we integrate it into the AI's response process. We then securely send your question to OpenRouter for processing, without including any personal information. Responses vary depending on the selected model (a sketch of this flow follows the list below):

  • HackerGPT: Mixtral 8x22B with semantic search over our hacking data, paired with our unique prompt.
  • HackerGPT Pro: Mistral Large with semantic search over our hacking data, paired with our unique prompt.
  • GPT-4o: The latest and greatest from OpenAI, paired with our unique prompt.
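
For illustration, here is a minimal sketch of that request flow in TypeScript. The helper functions and the OpenRouter model slug are placeholders, not the actual HackerGPT server code.

// Illustrative sketch of the per-request flow; all helpers are placeholders.
type Tier = "free" | "pro";

async function getTier(userId: string): Promise<Tier> {
  // Placeholder: the real server verifies the user and their subscription.
  return "free";
}

async function assertWithinQuota(userId: string, tier: Tier): Promise<void> {
  // Placeholder: the real server enforces per-tier question quotas.
}

async function searchHackingData(question: string): Promise<string> {
  // Placeholder: the real server runs a semantic search over its hacking database.
  return "";
}

export async function handleChat(userId: string, question: string): Promise<Response> {
  const tier = await getTier(userId);                 // 1. authenticate and classify the user
  await assertWithinQuota(userId, tier);              // 2. check the free/pro question quota
  const context = await searchHackingData(question);  // 3. look for closely matching hacking data

  // 4. send only the question plus retrieved context to OpenRouter, no personal information
  return fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "mistralai/mixtral-8x22b-instruct",      // placeholder slug for the free-tier model
      messages: [
        { role: "system", content: `HackerGPT prompt\n\nContext:\n${context}` },
        { role: "user", content: question },
      ],
      stream: true,
    }),
  });
}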

What Makes HackerGPT Special?

HackerGPT is not just an AI that answers your hacking questions; it can also assist you in hacking using widely used open-source hacking tools. To see all available tools, open the Plugin Store. If you need a quick guide on using a specific tool, such as Subfinder, select the tool and type /subfinder -h.

Below are some of the notable tools available with HackerGPT:

  • Subfinder is a subdomain discovery tool designed to enumerate and uncover valid subdomains of websites efficiently through passive online sources.
  • Katana is a next-generation crawling and spidering framework designed for robust, efficient web enumeration.
  • Naabu is a high-speed port scanning tool, focused on delivering efficient and reliable network exploration.

Oh, and yes, you can effortlessly use these tools without typing complex commands — simply select the tool you want and describe in your own words what you need to do.

Along with these, more tools are available with HackerGPT.

A Special Note of Thanks

Thank you so much, @fkesheh and @Fx64b, for your amazing work and dedication to this project.

Thank you for being part of the HackerGPT family.

Updating

In your terminal at the root of your local Chatbot UI repository, run:

npm run update

If you run a hosted instance you'll also need to run:

npm run db-push

to apply the latest migrations to your live database.

Important Note About Running HackerGPT Locally

The primary purpose of this GitHub repo is to show what's behind HackerGPT in order to build trust.

You can run HackerGPT locally, but the RAG system, plugins, and more will only work with proper and complex configuration.

Local Quickstart

Follow these steps to get your own Chatbot UI instance running locally.

You can watch the full video tutorial here.

1. Clone the Repo

git clone https://github.com/Hacker-GPT/HackerGPT-2.0.git

2. Install Dependencies

Open a terminal in the root directory of your local Chatbot UI repository and run:

npm install

3. Install Supabase & Run Locally

Why Supabase?

Previously, we used local browser storage to store data. However, this was not a good solution for a few reasons:

  • Security issues
  • Limited storage
  • Limits multi-modal use cases

We now use Supabase because it's easy to use, it's open-source, it's Postgres, and it has a free tier for hosted instances.

We will support other providers in the future to give you more options.

1. Install Docker

You will need to install Docker to run Supabase locally. You can download it here for free.

2. Install Supabase CLI

MacOS/Linux

brew install supabase/tap/supabase

Windows

scoop bucket add supabase https://github.com/supabase/scoop-bucket.git
scoop install supabase

3. Start Supabase

In your terminal at the root of your local Chatbot UI repository, run:

supabase start

4. Fill in Secrets

1. Environment Variables

In your terminal at the root of your local Chatbot UI repository, run:

cp .env.local.example .env.local

Get the required values by running:

supabase status

Note: Use the API URL value from supabase status for NEXT_PUBLIC_SUPABASE_URL.

Now go to your .env.local file and fill in the values.

If an environment variable is set, the corresponding input in the user settings will be disabled.
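
For reference, a minimal sketch of how these values are typically consumed in code, assuming the standard @supabase/supabase-js client (the file path is illustrative):

// lib/supabase-client.ts (illustrative path)
import { createClient } from "@supabase/supabase-js";

// NEXT_PUBLIC_SUPABASE_URL comes from the "API URL" reported by `supabase status`;
// NEXT_PUBLIC_SUPABASE_ANON_KEY comes from the "anon key" value.
export const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);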

2. SQL Setup

In the 1st migration file supabase/migrations/20240108234540_setup.sql you will need to replace 2 values with the values you got above:

  • project_url (line 53): http://supabase_kong_chatbotui:8000 (default) can remain unchanged if you don't change your project_id in the config.toml file
  • service_role_key (line 54): You got this value from running supabase status

This prevents issues with storage files not being deleted properly.

5. Install Ollama (optional for local models)

Follow the instructions here.

6. Run app locally

In your terminal at the root of your local Chatbot UI repository, run:

npm run chat

Your local instance of Chatbot UI should now be running at http://localhost:3000. Be sure to use a compatible Node version (e.g., v18).

You can view your backend GUI at http://localhost:54323/project/default/editor.

Hosted Quickstart

Follow these steps to get your own Chatbot UI instance running in the cloud.

Video tutorial coming soon.

1. Follow Local Quickstart

Repeat steps 1-4 in "Local Quickstart" above.

You will want separate repositories for your local and hosted instances.

Create a new repository for your hosted instance of Chatbot UI on GitHub and push your code to it.

2. Setup Backend with Supabase

1. Create a new project

Go to Supabase and create a new project.

2. Get Project Values

Once you are in the project dashboard, click on the "Project Settings" icon tab on the far bottom left.

Here you will get the values for the following environment variables:

  • Project Ref: Found in "General settings" as "Reference ID"

  • Project ID: Found in the URL of your project dashboard (Ex: https://supabase.com/dashboard/project/<YOUR_PROJECT_ID>/settings/general)

While still in "Settings" click on the "API" text tab on the left.

Here you will get the values for the following environment variables:

  • Project URL: Found in "API Settings" as "Project URL"

  • Anon key: Found in "Project API keys" as "anon public"

  • Service role key: Found in "Project API keys" as "service_role" (Reminder: Treat this like a password!)

3. Configure Auth

Next, click on the "Authentication" icon tab on the far left.

In the text tabs, click on "Providers" and make sure "Email" is enabled.

We recommend turning off "Confirm email" for your own personal instance.

4. Connect to Hosted DB

Open up your repository for your hosted instance of Chatbot UI.

In the 1st migration file supabase/migrations/20240108234540_setup.sql you will need to replace 2 values with the values you got above:

  • project_url (line 53): Use the Project URL value from above
  • service_role_key (line 54): Use the Service role key value from above

Now, open a terminal in the root directory of your local Chatbot UI repository. We will execute a few commands here.

Login to Supabase by running:

supabase login

Next, link your project by running the following command with the "Project ID" you got above:

supabase link --project-ref <project-id>

Your project should now be linked.

Finally, push your database to Supabase by running:

supabase db push

Your hosted database should now be set up!

3. Setup Frontend with Vercel

Go to Vercel and create a new project.

In the setup page, import your GitHub repository for your hosted instance of Chatbot UI. Within the project Settings, in the "Build & Development Settings" section, switch Framework Preset to "Next.js".

In environment variables, add the following from the values you got above:

  • NEXT_PUBLIC_SUPABASE_URL
  • NEXT_PUBLIC_SUPABASE_ANON_KEY
  • SUPABASE_SERVICE_ROLE_KEY

You can also add API keys as environment variables.

  • OPENAI_API_KEY
  • OPENROUTER_API_KEY

For the full list of environment variables, refer to the .env.local.example file. If an API key is set as an environment variable, the corresponding input in the user settings will be disabled.

Click "Deploy" and wait for your frontend to deploy.

Once deployed, you should be able to use your hosted instance of Chatbot UI via the URL Vercel gives you.

Have a feature request, question, or comment?

You can get in touch with us through email at [email protected] or connect with us on X.

Contributing

Interested in contributing to HackerGPT? Please see CONTRIBUTING.md for setup instructions and guidelines for new contributors. As an added incentive, top contributors will have the opportunity to become part of the HackerGPT team.

License

Licensed under the GNU General Public License v3.0

hackergpt-2.0's People

Contributors

thehackergpt, mckaywrigley, dependabot[bot], fkesheh, momonja3, fx64b, mikodin, gijigae, im-calvin, gitgitgogogo, francofantini, burnworks, xycjscs, perstarkse, tim13246879, sebiweise, k-kit, hikafeng, martianingreen, dino1729, berniecr


hackergpt-2.0's Issues

Fix Screen Overflow with Code Blocks on Mobile Devices

Description

When the AI responds with a code block on mobile, the page expands to the right because the code block is sized for desktop screens rather than the mobile viewport, causing horizontal overflow. This affects every mobile user whenever a code block is rendered. To address the problem, the code block should be sized relative to the screen, like normal chat messages, and users should be able to scroll within the code block if its content exceeds the visible area, just as on desktop devices. A sketch of this approach appears at the end of this issue.

Objective

Our goal is to implement a responsive design solution for code blocks that adjusts their size relative to the screen size, ensuring optimal user experience on mobile devices. Users should be able to scroll through the code block if it exceeds the visible area, similar to the behavior on computer devices.

Actions and Considerations (ACC)

  1. Responsive Code Block Design:

    • Implement a CSS or layout solution that adjusts the code block size relative to the screen size, ensuring proper rendering on mobile devices.
    • Ensure that users can scroll through the code block if it exceeds the visible area, providing a consistent experience across devices.
  2. Testing and Quality Assurance:

    • Conduct extensive testing on various mobile devices, platforms, and browsers to ensure compatibility.
    • Test standard and edge-case scenarios with different code block sizes and content to guarantee a reliable and robust solution.

Expected Outcomes

  • A responsive design solution for code blocks that adjusts their size relative to the screen size, ensuring optimal user experience on mobile devices.
  • Improved user experience and satisfaction, as mobile users can view and interact with code blocks without encountering screen overflow issues.
  • Enhanced user trust and confidence in our platform, as users benefit from a more reliable and visually appealing chat interface on mobile devices.
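
A minimal sketch of the responsive wrapper described above, assuming the React/Tailwind stack used by Chatbot UI forks (component and class names are illustrative):

// Illustrative React/Tailwind wrapper, not the actual HackerGPT component.
import React from "react";

export function MessageCodeBlock({ code }: { code: string }) {
  return (
    // max-w-full keeps the block inside the chat column on small screens;
    // overflow-x-auto lets users scroll long lines instead of stretching the page.
    <pre className="max-w-full overflow-x-auto whitespace-pre rounded-md bg-zinc-900 p-3 text-sm">
      <code>{code}</code>
    </pre>
  );
}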

Fix Issue with Removing Dropped Files in Conversation

Description

We have an issue where, when a user has multiple files in a conversation by dropping them one by one, and then tries to delete one of the files, it removes all files instead of just the selected one. To improve the user experience, we need to implement a solution that ensures only the intended file is removed when the user wants to delete it.

Objective

Our goal is to fix the issue with removing dropped files in a conversation, allowing users to delete individual files without affecting the others.

Actions and Considerations (ACC)

  1. Analyze Current Implementation:

    • Investigate the existing code that handles file removal in the conversation to identify the cause of the issue.
  2. Update File Removal Functionality:

    • Implement a solution that ensures only the intended file is removed when the user wants to delete it.
  3. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the updated file removal functionality works as expected, allowing users to delete individual files without affecting the others.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.
    • Verify that the changes work correctly both locally and on Vercel preview.

Expected Outcomes

  • A fix for the issue with removing dropped files in a conversation, improving the overall user experience.
  • Enhanced file management in the conversation, allowing users to delete individual files without any unintended consequences.
  • Enhanced overall functionality and adherence to best practices for file handling and management in a chat interface.

Implement 'Continue Generating' Feature for AI Models

Background and Objective

Users occasionally encounter abrupt cut-offs in AI-generated responses while interacting with the Mistral Small, Medium, and Large models utilized by HackerGPT & HackerGPT Pro. Recognizing the need to enhance user engagement and satisfaction, we aim to address these interruptions that occur when the AI fails to complete its answers, leaving conversations unfinished. Our objective is to implement a "Continue Generating" feature to allow users to seamlessly prompt the AI for extended content, ensuring responses are comprehensive and conversations feel more complete and satisfying.

Actions and Considerations (ACC)

  1. Understand API Mechanics:

  2. Design Feature Logic:

    • Implement logic to append an "Assistant" message as the last item in the API payload, signaling the OpenAI server to continue generating the response.
    • Develop a system to distinguish between different finish_reason states returned by the API (stop, length, function_call, content_filter, null) and trigger the "Continue Generating" feature accordingly.
  3. UI/UX Implementation:

    • Design and integrate a "Continue" button within the application UI, ensuring it is contextually displayed based on API response metadata or user interruption.
    • Ensure the button is user-friendly and its purpose is clearly understood.
  4. API Interaction Enhancement:

    • Modify the API call structure to support continued generation, ensuring seamless integration with existing conversation flows.
    • Handle any potential edge cases.
  5. Testing and Optimization:

    • Conduct thorough testing to identify and resolve issues with the "Continue Generating" trigger mechanism and API response handling.
    • Test the feature across different AI models and scenarios to ensure robustness and reliability.

Expected Outcomes

  • A robust "Continue Generating" feature that directly addresses user-reported issues with incomplete messages, significantly enhancing the user experience on our platform.
  • Improved engagement and interaction quality, as users feel more satisfied with the AI's ability to deliver comprehensive and coherent responses.
  • Positive feedback from the user community, reflecting an enhanced capability to engage in more meaningful and extended conversations with AI.

Reference that will help: https://medium.com/@daniel-avila/reverse-engineering-chatgpt-the-continue-generating-function-5cb4194b0ea6
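
For illustration, a small sketch of the continuation logic described above, assuming an OpenAI-compatible chat completions payload (type and function names are placeholders):

// Illustrative sketch: decide whether to offer "Continue Generating" and how to
// build the follow-up request. Shapes follow the OpenAI-compatible chat API.
interface ChatMessage { role: "system" | "user" | "assistant"; content: string; }

type FinishReason = "stop" | "length" | "function_call" | "content_filter" | null;

// Show the "Continue" button only when the response was cut off by the token limit.
export function shouldOfferContinue(finishReason: FinishReason): boolean {
  return finishReason === "length";
}

// Append the partial assistant message as the last item so the model resumes
// where it left off instead of starting a new answer.
export function buildContinuationPayload(
  history: ChatMessage[],
  partialAnswer: string
): ChatMessage[] {
  return [...history, { role: "assistant", content: partialAnswer }];
}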

Prevent Users from Sending Multiple Messages Simultaneously

Description

A bug has been identified where users can send more than one message at once while still receiving a response from the AI. This behavior can negatively impact the user experience, as it may cause confusion and unexpected results. To address this issue, we need to ensure that users can send only one message at a time and must wait until the AI finishes responding or clicks the stop button before sending another message. In all other instances, users should not be able to send multiple messages simultaneously.

Objective

Our goal is to implement a mechanism that prevents users from sending multiple messages simultaneously, ensuring a smoother and more controlled user experience while interacting with the AI.

Actions and Considerations (ACC)

  1. Single Message Sending Mechanism:

    • Implement a system that allows users to send only one message at a time, requiring them to wait for the AI's response or click the stop button before sending another message.
  2. Testing and Quality Assurance:

    • Conduct extensive testing on various devices, platforms, and browsers to ensure compatibility.
    • Test standard and edge-case scenarios to guarantee feature reliability and robustness.

Expected Outcomes

  • A mechanism that prevents users from sending multiple messages simultaneously, ensuring a smoother and more controlled user experience.
  • Improved user experience and satisfaction, as users can interact with the AI in a more structured and predictable manner.
  • Enhanced user trust and confidence in our platform, as users benefit from a more reliable and intuitive chat interface.
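
A minimal sketch of such a guard (state and function names are placeholders, not the actual HackerGPT code):

// Illustrative send guard: ignore new submissions while a response is streaming.
let isGenerating = false;

export async function sendMessage(
  content: string,
  streamResponse: (message: string) => Promise<void>
): Promise<void> {
  if (isGenerating) return;        // block additional messages while the AI is responding
  isGenerating = true;
  try {
    await streamResponse(content); // stream the AI response (or stop when the user clicks stop)
  } finally {
    isGenerating = false;          // allow sending again once the response is finished
  }
}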

[Bug] I have not received any response when using the plugins

💻 Operating System

Windows

🌐 Browser

Edge

🐛 Bug Description

I have configured OPENAI_API_KEY and OPENROUTER_API_KEY, but I have not received any response when using the plugins. Are any other steps required?

🚦 Expected Behavior

No response

📷 Steps to Reproduce


No response

📝 Additional Context

No response

Add Microsoft Login Option to Login Screen

Description

To provide users with an additional login option, we need to add a Microsoft login button below the Google login button on the login screen. The Microsoft login button should have a similar style to the Google button and allow users to sign in with their Microsoft accounts. To implement and test the Microsoft sign-in feature, an Azure account is required.

Objective

Our goal is to add a Microsoft login option to the login screen, allowing users to sign in with their Microsoft accounts seamlessly.

Actions and Considerations (ACC)

  1. Create Microsoft Login Button:

    • Design a Microsoft login button with a similar style to the Google login button.
  2. Implement Microsoft Sign-In:

    • Use an Azure account to set up and configure Microsoft sign-in functionality.
    • Integrate the Microsoft sign-in feature with the login screen, allowing users to sign in with their Microsoft accounts.
  3. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the Microsoft sign-in feature works as expected and provides a seamless user experience.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.
    • Verify that the changes work correctly both locally and on Vercel preview.

Expected Outcomes

  • A Microsoft login option on the login screen, providing users with an additional way to sign in with their Microsoft accounts.
  • Improved user experience by offering multiple sign-in options, catering to users' preferences.
  • Enhanced overall functionality and adherence to best practices for user authentication and sign-in options.

Save Enhance Menu State (Open/Close) into Supabase

Description

Currently, the enhance menu, which includes the plugin picker, is closed every time the page is reloaded. If a user opens the menu and then reloads the page, the menu will be closed again. To improve the user experience, we should store the enhance menu state (open or closed) in the database and restore it every time the page is reloaded.

Objective

Our goal is to save the enhance menu state (open or closed) into the Supabase database and restore it on page reload, providing a more seamless user experience.

Actions and Considerations (ACC)

  1. Update Profiles Database Table:

    • Add a new column in the "profiles" database table to store the enhance menu state (open or closed) for each user.
  2. Save Enhance Menu State:

    • Implement functionality that saves the enhance menu state (open or closed) into the Supabase database whenever the user changes the state of the menu.
  3. Restore Enhance Menu State on Page Reload:

    • Implement functionality that retrieves the saved enhance menu state from the Supabase database and restores it on the page every time the user reloads the page.
  4. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the enhance menu state is saved and restored correctly on page reload, providing a more seamless user experience.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.
    • Verify that the changes work correctly both locally and on Vercel preview.

Expected Outcomes

  • Improved user experience by saving and restoring the enhance menu state (open or closed) on page reload.
  • Enhanced overall functionality and adherence to best practices for user interface and database management.
  • A more personalized and seamless experience for users, as their preferences for the enhance menu state are remembered and applied.
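
For illustration, a sketch of the save and restore calls, assuming a new boolean column on the profiles table (the column names enhance_menu_open and user_id are hypothetical):

// Illustrative sketch; the column names are hypothetical, not the actual schema.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

export async function saveEnhanceMenuState(userId: string, open: boolean): Promise<void> {
  // Persist the state whenever the user toggles the menu.
  await supabase.from("profiles").update({ enhance_menu_open: open }).eq("user_id", userId);
}

export async function loadEnhanceMenuState(userId: string): Promise<boolean> {
  // Restore the state on page load; default to closed if nothing is stored.
  const { data } = await supabase
    .from("profiles")
    .select("enhance_menu_open")
    .eq("user_id", userId)
    .single();
  return data?.enhance_menu_open ?? false;
}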

Optimizing Model Selector for Enhanced Visibility and Accessibility of HackerGPT AI model

Background

Our aim is to enhance the accessibility and prominence of HackerGPT AI within our platform. This involves prioritizing HackerGPT AI in the model selector and any related pop-up/drop-down interfaces, ensuring it is the first and most visible option for all users.

Objective

The goal is to reconfigure our system so that HackerGPT AI is always presented as the default AI model over GPT-4 Turbo, irrespective of the API key configuration. This includes ensuring that GPT-4 Turbo and HackerGPT AI are both visible and selectable to all users, with HackerGPT AI taking priority in the model selector and serving as the default option in new chats.


Actions and Considerations (ACC)

  1. User Interface:

    • Adjust the model selector UI to always list HackerGPT as the first option, ensuring it's visually prioritized.
    • Design the model selector to enhance user interaction and visual appeal, aligning with the updated prioritization.
  2. API Key Configuration and Model Accessibility:

    • Configure the system to bypass MISTRAL_API_KEY for displaying HackerGPT AI and use OPENROUTER_API_KEY instead. Right now we set MISTRAL_API_KEY=SHOULD_NOT_BE_EMPTY just to make HackerGPT AI appear in the model selector pop-up/drop-down.
    • Implement a system logic to assign HackerGPT AI as the default model for all new chats, overcoming the current default setting that favors GPT-4 Turbo in every place possible.
  3. Default Setting Adjustment for New Users:

    • Automatically configure HackerGPT as the default AI model for new user settings, ensuring a seamless first experience.
    • Update the environment variable configurations to reflect that MISTRAL_API_KEY should not be a limiting factor for HackerGPT's visibility and default status.

Expected Outcomes

  • HackerGPT is established as the default and primary AI model choice for all users, effectively replacing GPT-4 Turbo in this role.
  • The model selector UI is optimized for better user experience, clearly displaying both HackerGPT and GPT-4 Turbo, with HackerGPT as the top and default option in new chats.
  • Users understand the operational dynamics between HackerGPT, OPENROUTER_API_KEY, and MISTRAL_API_KEY, minimizing confusion around model selection and the MISTRAL_API_KEY=SHOULD_NOT_BE_EMPTY workaround.

Implement Secure Transformation of Unsupported File Formats into Text for AI Processing

Background

Feedback from our beta testers indicates a significant demand for the ability to process files in formats currently not supported by our system, such as .js, .php, and others. To enhance user experience and broaden the usability of our platform, there's a need to develop a feature that automatically transforms these unsupported file formats into a clean text (.txt) format. This transformation will enable our AI to process and assist with user requests more efficiently, thereby increasing the versatility of our service.

Objective

To create a secure, automated process for converting files in unsupported formats into a text format that our AI can process. This feature must inform users when their file is being transformed to ensure transparency and maintain trust. Security is paramount in this process; we must ensure that the transformation of files does not expose sensitive data or introduce vulnerabilities.

Actions and Considerations (ACC)

  1. File Transformation Mechanism:

    • Develop a robust mechanism to detect and transform unsupported file formats into .txt files.
    • Ensure the transformation process retains the integrity of the original file's content, making it readable and usable by our AI.
  2. User Notification and Consent:

    • Implement user notifications to alert them when a file format is not directly supported and will be transformed into .txt format.
    • Obtain user consent for the transformation process.
  3. Security Measures:

    • Conduct a comprehensive security review of the file transformation process to identify and mitigate potential risks.
    • Implement secure handling practices to protect the content of files during and after transformation.

Expected Outcomes

  • A seamless, secure mechanism for transforming unsupported file formats into text format, enabling broader AI processing capabilities.
  • Transparent communication with users about the file transformation process, ensuring their trust and satisfaction.
  • Strong security practices in place to protect user data throughout the transformation process, maintaining the platform's integrity.
  • Positive user feedback on the new feature, reflecting its utility, security, and impact on enhancing the overall user experience.
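
A simplified sketch of the transformation step, using the browser File API (the list of directly supported extensions is an assumption):

// Illustrative sketch: convert an unsupported upload (e.g. .js, .php) into a
// plain-text file so the AI pipeline can ingest it. The extension list is an assumption.
const DIRECTLY_SUPPORTED = [".txt", ".md", ".pdf", ".csv", ".json"];

export async function toTextFile(file: File): Promise<{ file: File; transformed: boolean }> {
  const dot = file.name.lastIndexOf(".");
  const ext = dot >= 0 ? file.name.slice(dot).toLowerCase() : "";
  if (DIRECTLY_SUPPORTED.includes(ext)) {
    return { file, transformed: false };
  }
  // Read the original content as text and repackage it as a .txt file,
  // keeping the content itself unchanged.
  const text = await file.text();
  const renamed = new File([text], `${file.name}.txt`, { type: "text/plain" });
  return { file: renamed, transformed: true }; // caller should notify the user and ask consent
}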

Save System Message for Continue Generating Button to Maintain Context with RAG

Description

When the AI-generated output hits the maximum length, the "Continue Generating" button appears so users can continue the response where it left off. However, because the RAG system adds content to the system message, the continuation may not include the previous system message, causing the response to vary. To maintain context and consistency, we must save the system message whenever the RAG system is used with the "Continue Generating" button. Ideally, the system message should be saved only when the "Continue" button is needed; if that is not possible, it can be saved every time.

Objective

We aim to implement a system that saves the system message when the RAG system is being used, specifically for the "Continue Generating" button, to maintain context and consistency in AI responses. If feasible, the system message should be saved only when the "Continue" button is needed; otherwise, it can be saved every time.

Actions and Considerations (ACC)

  1. System Message Storage:

    • Implement a mechanism to save the system message when the RAG system is being used, specifically for the "Continue Generating" button.
    • If possible, save the system message only when the "Continue" button is needed; otherwise, save it every time.
  2. Context and Consistency:

    • Ensure that the saved system message is used when continuing the AI response, maintaining context and consistency in the generated output.
  3. Testing and Quality Assurance:

    • Conduct extensive testing.
    • Test standard and edge-case scenarios to guarantee feature reliability and robustness.

Expected Outcomes

  • A system that saves the system message when the RAG system is being used, specifically for the "Continue Generating" button, maintaining context and consistency in AI responses.
  • Improved user experience and satisfaction, as users receive consistent and contextually relevant AI responses when using the "Continue Generating" button.
  • Enhanced user trust and confidence in our platform, as users benefit from more accurate and reliable AI responses.


Implement Plugin Icon in Sidebar to Trigger Plugin Shop

Description

To improve user experience and engagement with various plugins, we aim to add a plugin icon in the sidebar of the application using IconPuzzle from Tabler. This icon will trigger a plugin shop popup, allowing users to install or uninstall plugins as needed right away.

Objective

Our goal is to design and implement a user-friendly plugin icon in the sidebar using IconPuzzle from Tabler that, when clicked, opens the plugin shop popup, enabling users to easily manage their installed plugins.

Actions and Considerations (ACC)

  1. Design and Implementation:

    • Utilize IconPuzzle from Tabler to create a visually appealing and intuitive plugin icon for the sidebar of the application.
    • Implement the logic required to trigger the plugin shop popup when the user clicks on the plugin icon.
  2. Testing and Quality Assurance:

    • Conduct extensive testing on various devices, platforms, and browsers to ensure compatibility.
    • Test standard and edge-case scenarios to guarantee feature reliability and robustness.

Expected Outcomes

  • A fully functional and user-friendly plugin icon in the sidebar using IconPuzzle from Tabler, allowing users to easily access the plugin shop popup.
  • Improved user experience and engagement with various plugins.
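
A minimal sketch of the sidebar trigger, assuming @tabler/icons-react in a React component (names and styling are illustrative):

// Illustrative sidebar button that opens the plugin store popup.
import React from "react";
import { IconPuzzle } from "@tabler/icons-react";

export function PluginStoreButton({ onOpenStore }: { onOpenStore: () => void }) {
  // Clicking the puzzle icon triggers the plugin store popup.
  return (
    <button
      aria-label="Open plugin store"
      onClick={onOpenStore}
      className="rounded p-2 hover:bg-zinc-700"
    >
      <IconPuzzle size={24} />
    </button>
  );
}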

Integrating 'Terms of Use' Page into HackerGPT 2.0 with Updated Style

Background

In line with updating our HackerGPT to version 2.0, it is essential to also incorporate the 'Terms of Use' page into the new interface. This integration aims to ensure that users can easily access and understand the terms governing their use of HackerGPT. The content of the 'Terms of Use' will remain the same, but the presentation will be updated to align with the new UI's style and aesthetics.

Objective

To seamlessly integrate the 'Terms of Use' page into HackerGPT 2.0, maintaining the original content but updating its styling to match the new user interface. This page should be readily accessible to users, particularly when they encounter the statement "By using HackerGPT, you agree to our Terms of Use" during the authorization process, with the phrase "Terms of Use" serving as a link to the actual page.


Actions and Considerations (ACC)

  1. Content Consistency:

    • Transfer all existing content from the current 'Terms of Use' page to the new version without alterations.
  2. UI Style Update:

    • Redesign the 'Terms of Use' page to reflect the updated style guide of HackerGPT 2.0, ensuring visual coherence with the new interface.
    • Implement responsive design for optimal viewing on different devices and screen sizes.
  3. User Accessibility and Navigation:

    • Embed a clickable link to the 'Terms of Use' in relevant areas, particularly where users are informed about their agreement to these terms during the authorization process.

    • Ensure the transition from the authorization notice to the 'Terms of Use' page is smooth and intuitive.

  4. Testing and Validation:

    • Test the redesigned 'Terms of Use' page for both style consistency and functional responsiveness.
    • Verify the embedded links are correctly directing users to the 'Terms of Use' page from various points within the platform.

Expected Outcomes

  • The 'Terms of Use' page is effectively integrated into the HackerGPT 2.0 UI with an updated design that complements the new interface.
  • Users have easy and direct access to the 'Terms of Use' from the authorization statements, enhancing their understanding and compliance with platform policies.

Implementing Stripe Subscription with Supabase

Background

With the launch of our new chatbot-UI version, we face the important task of re-implementing the Stripe Subscriptions logic. This time, we're integrating with Supabase instead of Firebase, aligning with our updated system architecture and capitalizing on Supabase's robust subscription management capabilities.

Objective

Our main goal is to effectively integrate Stripe Subscriptions into the new chatbot-UI using Supabase. This involves adapting the subscription management interface to fit within the new UI framework, ensuring a smooth transition and maintaining practical functionality for an enhanced user experience.

Actions and Considerations (ACC)

  1. Interface Design and Development:

    • Redesign the subscription management interface for the new chatbot-UI, mirroring the functionality of our previous system.
    • Focus on creating a user-friendly design that seamlessly integrates with the overall look and feel of chatbot-ui 2.
    • Add a "Manage Subscription" option in the Settings, directing users to a Stripe page for easy subscription viewing and management.
  2. Stripe-Supabase Integration:

    • Configure Stripe subscriptions within the Supabase framework, guided by the Supabase documentation and "Handling Stripe Webhooks" tutorial.
  3. Upgrade to Plus Feature:

    • Implement an 'Upgrade to Plus' icon for users without an active subscription, enhancing visibility and accessibility.
    • Create an engaging popup linked to the 'Upgrade to Plus' icon, detailing the benefits and upgrade steps.
  4. Functionality Testing and Deployment:

    • Conduct comprehensive testing to ensure the integration's functionality, reliability, and security.

Expected Outcomes

  • Effective integration of Stripe Subscriptions with Supabase in the new chatbot-UI.
  • Streamlined and user-friendly subscription management experience.
  • Enhanced accessibility for users to manage their subscriptions effortlessly.
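
For illustration, a rough sketch of the webhook piece that links Stripe subscription events to Supabase (the subscriptions table and its columns are hypothetical):

// Illustrative webhook handler: record subscription changes in Supabase.
// The "subscriptions" table name and its columns are hypothetical.
import Stripe from "stripe";
import { createClient } from "@supabase/supabase-js";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

export async function handleStripeWebhook(rawBody: string, signature: string): Promise<void> {
  // Verify the webhook signature before trusting the payload.
  const event = stripe.webhooks.constructEvent(
    rawBody,
    signature,
    process.env.STRIPE_WEBHOOK_SECRET!
  );

  if (
    event.type === "customer.subscription.created" ||
    event.type === "customer.subscription.updated" ||
    event.type === "customer.subscription.deleted"
  ) {
    const sub = event.data.object as Stripe.Subscription;
    await supabase.from("subscriptions").upsert({
      stripe_subscription_id: sub.id,
      stripe_customer_id: sub.customer as string,
      status: sub.status, // an "active" status unlocks Plus features
    });
  }
}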

Fix Bug Preventing Workspace Deletion

Background

During routine maintenance and feature development, an error was discovered that prevents users from deleting workspaces. The console error "unrecognized configuration parameter 'NEXT_PUBLIC_SUPABASE_URL'" indicates a misconfiguration issue or an error with environment variables, specifically concerning our integration with Supabase. This discovery highlights the need for an immediate fix to restore workspace deletion functionality and maintain the platform's integrity and user experience.

Objective

To identify and rectify the cause behind the "unrecognized configuration parameter 'NEXT_PUBLIC_SUPABASE_URL'" error to enable workspace deletion. The goal is to ensure that the platform's functionality aligns with user expectations and system requirements, without disrupting the overall workflow.

Actions and Considerations (ACC)

  1. Codebase Examination:

    • Review the workspace deletion functionality in the codebase for any changes or errors that might contribute to the issue.
    • Analyze how the Supabase URL is referenced in the deletion process and across the application to pinpoint discrepancies or errors.
  2. Implementing a Fix:

    • Develop and test a solution to correct the configuration or coding error causing the unrecognized parameter issue.
    • Validate the fix across development, staging, and production environments to ensure comprehensive resolution.

Expected Outcomes

  • Elimination of the "unrecognized configuration parameter 'NEXT_PUBLIC_SUPABASE_URL'" error, reinstating the functionality for users to delete workspaces smoothly.
  • Assurance of the platform's capability to manage workspaces efficiently, bolstering user trust and system dependability.

Stream Text-to-Speech Response Right Away Without Waiting for All Chunks

Description

OpenAI provides a streaming option for the tts-1 model, which we use for our text-to-speech functionality. To optimize the feature for a better user experience, we should figure out a way to stream the audio response right away, rather than waiting for all chunks to start the audio.

Objective

Our goal is to improve the text-to-speech functionality by streaming the audio response right away, providing a more seamless and responsive user experience.

Actions and Considerations (ACC)

  1. Analyze OpenAI's Streaming Option:

    • Review the documentation and available resources on OpenAI's streaming option for the tts-1 model to understand its capabilities and limitations.
  2. Update Text-to-Speech Functionality:

    • Implement the necessary changes to our text-to-speech functionality to enable streaming the audio response right away, without waiting for all chunks.
  3. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the updated text-to-speech functionality works as expected, providing a more responsive and seamless user experience.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.
    • Verify that the changes work correctly both locally and on Vercel preview.

Expected Outcomes

  • Improved text-to-speech functionality that streams the audio response right away, without waiting for all chunks.
  • Enhanced user experience, with more responsive and seamless text-to-speech interactions.
  • Adherence to best practices for streaming and audio processing, ensuring optimal performance and reliability.

Adapting Plugin Logic and Enhance Menu for HackerGPT 2.0 from HackerGPT

Background

Our objective is to replicate and adapt the plugins logic found in the original HackerGPT repository ("https://github.com/HackerGPT/HackerGPT") for our new HackerGPT version. This involves integrating the enhance menu that appears as users type, a feature well-received in the current implementation at hackergpt.co. This integration must ensure compatibility with the design and functionality of chatbot-2v, providing a seamless experience where users can access and utilize plugins based on their subscription status (Free or Plus).

Objective

To migrate the existing plugins logic and enhance menu from HackerGPT into the chatbot-2v UI, ensuring that the feature is both stylistically in tune with the new version and functionally identical to the existing system. This includes the implementation of a plugin store popup for installing plugins, pre-installing certain plugins for new users, and allowing users the option to hide the enhance menu as needed.

Actions and Considerations (ACC)

  1. Enhance Menu Integration:

    • Copy the enhance menu logic from the original HackerGPT repository, ensuring it activates as users begin typing.
    • Adapt the menu's style to align with the chatbot-2v design, maintaining consistency across the platform.
  2. Plugin Access Control:

    • Implement logic to differentiate plugin access between Free and Plus users, mirroring the functionality on hackergpt.co.
    • Ensure that plugin availability is clearly communicated within the UI, preventing confusion.
  3. Plugin Store and Installation:

    • Develop a plugin store popup that allows users to browse and install plugins either from the enhance menu or the sidebar.
    • Pre-install selected plugins for new users, similar to the approach on hackergpt.co.
  4. Functionality and Routing:

    • Integrate the plugin functionality with the existing openai route.ts, ensuring plugins trigger correctly upon user interaction.
    • Capture and utilize plugin IDs effectively when messages are sent, ensuring accurate plugin response.
  5. User Interface Adjustments:

    • Allow users the flexibility to hide the enhance menu, preserving user preference and space within the chat interface.
  6. Testing and Validation:

    • Conduct thorough testing of the enhance menu and plugin functionality across different user types (Free and Plus).
    • Validate the user experience, ensuring the enhance menu and plugins operate seamlessly and without errors.

Expected Outcomes

  • Successful integration of the enhance menu and plugins logic into chatbot-2v, with a design that is consistent with the new UI.
  • Clear differentiation in plugin access between Free and Plus users, with an intuitive plugin store for user exploration and installation.
  • Pre-installed plugins for new users, with the option for all users to hide the enhance menu as desired.

Improve RAG with Custom Code for Embedding and Metadata Extraction

Description

The most crucial factor for HackerGPT is the quality of AI responses. To significantly improve the RAG system, we need to create custom code for text embedding and metadata extraction, such as summary, title, and keywords, which will help us enhance the RAG functionality. The embedding should be done using text-embedding-3-large with 3072 dimensions. The code should follow the best possible settings and parameters to achieve optimal results for our specific data, which consists of guides and tutorials about ethical hacking. Ensure that the best possible chunking and dividing method is used for better vectors. The code should be capable of processing md and txt files.

Assignee

@fkesheh

Objective

Our goal is to improve the RAG system by creating custom code for text embedding and metadata extraction, which will ultimately enhance the quality of AI responses.

Actions and Considerations (ACC)

  1. Research Best Practices:

    • Investigate the best possible settings, parameters, and chunking methods for text embedding and metadata extraction, specifically for our kind of data (guides and tutorials about ethical hacking).
  2. Create Custom Code for Text Embedding and Metadata Extraction:

    • Develop code that uses text-embedding-3-large with 3072 dimensions to create embeddings for our data.
    • Implement functionality to extract relevant metadata, such as summary, title, and keywords, from the guides and tutorials.
    • Use chunks of 800 tokens with an overlap of 400 tokens for better vector generation.
    • Ensure the code supports processing md and txt files.
  3. Integrate Custom Code with RAG System:

    • Update the RAG code to utilize the new text embeddings and metadata for new vectors.
  4. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the custom code works as expected and significantly improves the RAG system's performance.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.

Expected Outcomes

  • Custom code for text embedding and metadata extraction that enhances the RAG system's functionality and improves the quality of AI responses.
  • Improved user experience and satisfaction, as users receive more accurate and relevant AI responses.
  • Enhanced overall functionality and adherence to best practices for text embedding and metadata extraction for our specific data type.
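
As a condensed sketch of the embedding step described above, assuming the official openai Node SDK; the character-based chunking here is a simplification of the 800/400-token scheme:

// Illustrative embedding sketch: 800-token chunks with 400-token overlap are
// approximated by characters for brevity; real code should chunk by tokens.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

function chunkText(text: string, size = 3200, overlap = 1600): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}

export async function embedDocument(text: string) {
  const chunks = chunkText(text);
  const response = await openai.embeddings.create({
    model: "text-embedding-3-large",
    input: chunks,
    dimensions: 3072, // full dimensionality, as specified above
  });
  // Pair each chunk with its 3072-dimension vector for upsert into the vector store.
  return response.data.map((item, i) => ({ chunk: chunks[i], embedding: item.embedding }));
}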

Integrating 'About Us' Page into HackerGPT 2.0 with Updated Style

Background

As part of enhancing our HackerGPT 2.0's user experience, we are migrating the 'About Us' page from the existing HackerGPT platform. This move aims to maintain continuity and provide users with easy access to information about HackerGPT and more. The key is to ensure that while the content remains unchanged, the presentation aligns seamlessly with the new UI's aesthetics.

Objective

To adapt the 'About Us' page from the HackerGPT repository into the HackerGPT 2.0, ensuring the information is presented in a style consistent with the new interface. Users should be able to access this page by clicking an info icon within the chat-help popup, facilitating a smooth and integrated user experience.


Actions and Considerations (ACC)

  1. Content Preservation:

    • Ensure all content from the original 'About Us' page is accurately transferred to the new UI, maintaining the integrity of the information.
  2. Style Integration:

    • Update the 'About Us' page design to match the chatbot-UI 2.0's style guide, ensuring visual consistency across the platform.
    • Implement responsive design principles to ensure the page looks great on all devices.
  3. Navigation and Accessibility:

    • Add an info icon to the chat-help popup that links directly to the 'About Us' page, ensuring it is easily accessible from anywhere in the chat interface.
    • Ensure that the transition to the 'About Us' page is smooth and does not disrupt the user's experience.
  4. Testing and Quality Assurance:

    • Conduct thorough testing to ensure the 'About Us' page is responsive, the content displays correctly, and the style is consistent with the new UI.
    • Test the info icon link within the chat-help popup to ensure it correctly redirects users to the 'About Us' page.

Expected Outcomes

  • The 'About Us' page is successfully integrated into HackerGPT 2.0, with all original information preserved and presented in a style that matches the new UI.
  • Users can easily access the 'About Us' page from the chat-help popup, enhancing the overall navigability and user experience of the platform.

Implement Logic for Plugin Output to Generate Files for Large or Requested Data

Description

We need to create logic for plugin output that delivers the output as a file when users request it from the AI or via a plugin command, or when the output text is too long. This functionality is similar to the existing logic used in the web scraper plugin, which generates a file for website data.

Objective

Our goal is to implement logic for plugin output that automatically generates a file for users when the output text is too long or when users specifically request it through AI interaction or plugin commands.

Actions and Considerations (ACC)

  1. Define Output File Criteria (@thehackergpt):

    • Determine the output-length threshold above which a file is generated (a threshold of around 300,000 characters should be sufficient).
    • Identify the AI or plugin commands that users can utilize to request output in a file.
  2. Implement Output File Logic (@fkesheh):

    • Develop logic that automatically generates a file for plugin output when the defined criteria are met.
    • Ensure that the generated files are easily accessible and downloadable for users.
  3. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the implemented logic correctly generates files for plugin output when the defined criteria are met.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.

Expected Outcomes

  • Logic for plugin output that generates files for users when the output text is too long or when users specifically request it through AI interaction or plugin commands.
  • Improved user experience and satisfaction, as users can now receive large plugin output data in a file format that is easy to access and manage.
  • Enhanced overall functionality and adherence to best practices for handling large output data from plugins.

Store RAG Vector IDs and Boolean Values for AI Responses to Improve Feedback System

Description

To enhance our feedback system, we need to store RAG vector IDs (e.g., a576038e-c565-4e6b-bdf4-2244dd214e13) for each message and actual boolean values indicating whether RAG is being used or not for each AI response to the user. This information will be used to improve the feedback system by including additional fields like RAG vector IDs and whether RAG was used or not. This will help us understand how to improve specific Pinecone vectors to provide better results to users.

Assignee

@fkesheh

Objective

Our goal is to enhance the feedback system by storing RAG vector IDs and boolean values for AI responses, which will ultimately help us improve the quality of AI responses.

Actions and Considerations (ACC)

  1. Update AI Response Storage:

    • Store RAG vector IDs (Pinecone vector IDs) for each message.
    • Store boolean values indicating whether RAG was used or not for each AI response.
  2. Update Feedback System:

    • Include additional fields in the feedback system to store RAG vector IDs and whether RAG was used or not.
  3. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the updated feedback system works as expected and helps improve the quality of AI responses.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.
    • Verify that the changes work correctly both locally and on Vercel preview.

Expected Outcomes

  • An enhanced feedback system that includes RAG vector IDs and boolean values for AI responses, enabling better analysis and understanding of user feedback.
  • Improved quality of AI responses, as the feedback system helps identify areas for improvement in specific Pinecone vectors.
  • Enhanced overall functionality and adherence to best practices for feedback systems and AI response optimization.

Implement Limits on User-Created Entities to Prevent System Abuse

Background

In an effort to maintain the integrity and performance of our system, there is a pressing need to establish limits on the number of user-created entities, such as folders, workspaces, presets, files, and collections. Without such limits, users may inadvertently or deliberately abuse the system, leading to storage and management issues that can degrade the user experience for all. Implementing these limits will help us ensure that our system remains efficient, reliable, and fair for every user.

Objective

To introduce and enforce limits on the amount of user-created entities within our platform. This initiative aims to prevent system abuse by establishing a maximum threshold for folders, workspaces, presets, files, and collections that a user can create. Upon reaching these limits, users should be prompted to manage their existing entities (e.g., deletion or consolidation) before creating new ones. Additionally, we will explore rate limiting as a means to control the frequency of entity creation.

Actions and Considerations (ACC)

  1. Establishment of Entity Limits:

    • Determine reasonable limits for each type of entity based on average user needs and system capacity.
    • Implement checks within the system to enforce these limits at the point of creation for each entity type.
  2. User Notification and Guidance:

    • Develop user-friendly notifications to alert users when they are approaching or have reached the limit of entities they can create.
    • Provide guidance on how users can manage their existing entities to make room for new ones, emphasizing the importance of mindful resource usage.
  3. Rate Limiting Implementation:

    • Explore the feasibility and potential benefits of implementing rate limiting for the creation of new entities.
    • If adopted, define rate limits that balance preventing abuse with allowing legitimate use.
  4. UI/UX Considerations:

    • Ensure that any limits and notifications are clearly communicated in the user interface, preventing confusion and frustration.
    • Incorporate design elements that encourage users to review and clean up unnecessary entities regularly.
  5. Testing and Feedback:

    • Conduct thorough testing of the new limits and rate limiting mechanisms to ensure they function as intended without hindering user experience.

Expected Outcomes

  • Effective limits on the number of user-created entities, preventing system abuse while accommodating legitimate user needs.
  • Clear communication to users regarding limits and strategies for managing their entities, enhancing the overall user experience.
  • A balanced approach to rate limiting that deters abuse without unduly restricting user productivity and creativity.
  • An adaptable system that can evolve based on user feedback and system performance data.
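
For illustration, a minimal sketch of a creation-time limit check with Supabase (the table names mirror the entity types; the limit values are placeholders):

// Illustrative limit check before creating a new entity. Limits are placeholders.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

const LIMITS = {
  folders: 50,
  workspaces: 10,
  presets: 50,
  files: 100,
  collections: 25,
} as const;

type EntityTable = keyof typeof LIMITS;

export async function assertBelowLimit(userId: string, table: EntityTable): Promise<void> {
  // Count the user's existing rows without fetching them.
  const { count } = await supabase
    .from(table)
    .select("id", { count: "exact", head: true })
    .eq("user_id", userId);

  if ((count ?? 0) >= LIMITS[table]) {
    throw new Error(
      `Limit of ${LIMITS[table]} ${table} reached. Delete or consolidate existing ${table} first.`
    );
  }
}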

Implementing Message Rate Limits with Supabase for HackerGPT and GPT-4 Turbo Models

Background

To manage our system resources efficiently and maintain a quality service, we are introducing rate limits on the number of messages users can send within a 3-hour period. This will be implemented using Supabase to store each user's message count for both HackerGPT and GPT-4 Turbo models. The rate limits will differ for Free and Plus users. Additionally, we aim to track the total number of messages sent each month for each model, and if feasible, the token usage as well. This implementation can take cues from our existing rate limit logic in this repository.

Objective

To establish a rate-limiting system that tracks and controls the number of messages sent by each user over a 3-hour period for both HackerGPT and GPT-4 Turbo models. This system should be adjustable and provide clear feedback to users when they reach their limit. In addition, it should maintain a monthly log of total messages and, optionally, token usage for each model.

Actions and Considerations (ACC)

  1. Rate Limit Logic Implementation:

    • Develop rate limit logic in Supabase for both HackerGPT and GPT-4 Turbo models, ensuring different limits for Free and Plus users.
    • Make the message limit easily adjustable for potential future changes.
  2. User Feedback and Notification:

    • Create a mechanism to inform users when they hit their rate limit, using a template similar to:
      • For Free users:
        ⚠️ Hold On! You've Hit Your Usage Cap.
        ⏰ Don't worry—you'll be back in ${rateLimitStatus.timeRemaining}.
        🔓 Want more? Upgrade to Plus and unlock a world of features:
        - Enjoy unlimited usage,
        - Get exclusive access to GPT-4 Turbo,
        - Experience faster response speed,
        - Explore the web with our Web Browsing plugin,
        - Plus, get access to advanced hacking tools like Katana, HttpX, Naabu, and more.
        
      • For Plus users:
        ⚠️ Hold On! You've Hit Your Usage Cap.
        ⏰ Don't worry—you'll be back in ${rateLimitStatus.timeRemaining}.
        
  3. Testing and Quality Assurance:

    • Test the rate-limiting system extensively to ensure accuracy and reliability.
    • Validate the user notification system for different user types (Free and Plus).

Expected Outcomes

  • Effective implementation of message rate limits for both HackerGPT and GPT-4 Turbo models using Supabase.
  • Accurate tracking of monthly message totals and, optionally, token usage.
  • Clear and user-friendly notifications for users who reach their message limits.
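
A sketch of the 3-hour window check (the messages table, model identifiers, and limit values are placeholders; the actual limits differ between Free and Plus users):

// Illustrative rate-limit check over a rolling 3-hour window.
// The "messages" table name and the limit values are hypothetical.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

const LIMITS = {
  free: { hackergpt: 50, "gpt-4": 0 },
  plus: { hackergpt: 250, "gpt-4": 40 },
} as const;

export async function checkRateLimit(
  userId: string,
  tier: keyof typeof LIMITS,
  model: "hackergpt" | "gpt-4"
) {
  const windowStart = new Date(Date.now() - 3 * 60 * 60 * 1000).toISOString();

  // Count only this user's messages for this model in the last 3 hours.
  const { count } = await supabase
    .from("messages")
    .select("id", { count: "exact", head: true })
    .eq("user_id", userId)
    .eq("model", model)
    .gte("created_at", windowStart);

  const limit = LIMITS[tier][model];
  const used = count ?? 0;
  return { allowed: used < limit, remaining: Math.max(limit - used, 0) };
}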

Modify Preventing Users from Sending Multiple Messages to Allow Input During AI Response

Description

We previously implemented logic to prevent users from sending multiple messages simultaneously (#160). However, we need to modify this behavior. Currently, the user input bar is disabled while the AI's response is streaming back. Instead, we want to allow users to type in the text bar while the AI is responding, but prevent them from actually sending the message by pressing Enter, clicking on starter messages, or using any other trigger that would add a message while the AI response is still streaming. This behavior matches services like ChatGPT and Mistral Chat, and it provides a better user experience.
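
A minimal sketch of the intended behavior, using hypothetical `isGenerating` and `onSend` props rather than the component API actually used in the repository:

```tsx
import { useState } from "react"

// Sketch only: keep the textarea enabled while streaming, but block every send trigger.
function MessageInput({ isGenerating, onSend }: { isGenerating: boolean; onSend: (text: string) => void }) {
  const [input, setInput] = useState("")

  const send = () => {
    // Block sending while the AI is streaming, but never block the typing itself.
    if (isGenerating || !input.trim()) return
    onSend(input)
    setInput("")
  }

  return (
    <div>
      {/* Input stays enabled so the user can draft the next message */}
      <textarea
        value={input}
        onChange={e => setInput(e.target.value)}
        onKeyDown={e => {
          if (e.key === "Enter" && !e.shiftKey) {
            e.preventDefault()
            send()
          }
        }}
      />
      <button onClick={send} disabled={isGenerating || !input.trim()}>
        Send
      </button>
    </div>
  )
}
```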

Objective

Our goal is to modify the existing logic that prevents users from sending multiple messages simultaneously, allowing them to input text in the text bar while the AI is responding but preventing them from sending additional messages until the AI has finished.

Actions and Considerations (ACC)

  1. Update Preventing Multiple Messages Logic:

    • Modify the existing logic to enable user input in the text bar while the AI is streaming a response.
    • Implement restrictions that prevent users from sending additional messages by pressing enter, clicking on starter messages, or using other triggers while the AI is still responding.
  2. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the updated logic effectively prevents users from sending multiple messages simultaneously while allowing them to input text during AI responses.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.

Expected Outcomes

  • An updated logic that allows users to input text in the text bar while the AI is responding but prevents them from sending additional messages until the AI has finished.
  • Improved user experience and satisfaction, as users can now prepare their next question or input while waiting for the AI's response.
  • Enhanced overall functionality and adherence to best practices for user input handling during AI responses.

Prevent URL Manipulation of Login Screen Messages

Description

A user has reported that they can manipulate the URL parameters for login screen messages (e.g., "login?message=..."), which could potentially be exploited by malicious users. Although this vulnerability is considered low severity, it is essential to address it to maintain the security and trust of our users. Our goal is to ensure that the login screen messages cannot be edited or manipulated by users to prevent potential exploits.
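
One way to make the message impossible to tamper with is to treat the URL parameter as an opaque key and resolve it against a fixed, server-defined map, as sketched below; the key names and message strings are illustrative only.

```ts
// Hypothetical sketch: map an allowlisted message key to a fixed, server-defined string
// instead of rendering arbitrary text taken from the "message" query parameter.
const LOGIN_MESSAGES: Record<string, string> = {
  signin_failed: "Invalid email or password.",
  email_confirmed: "Email confirmed. You can now sign in.",
  password_reset_sent: "Check your email for the password reset link."
}

export function resolveLoginMessage(param: string | null): string | null {
  if (!param) return null
  // Unknown or user-crafted values fall through to null and are never displayed.
  return LOGIN_MESSAGES[param] ?? null
}
```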

Objective

Our goal is to implement a mechanism that prevents users from manipulating the URL parameters for login screen messages, ensuring a more secure user experience.

Actions and Considerations (ACC)

  1. Secure Login Screen Messages:

    • Implement server-side validation and sanitization of the URL parameters to prevent users from manipulating the login screen messages.
    • Consider using an alternative method for displaying login messages that does not rely on URL parameters, such as storing the message in a secure server-side session.
  2. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the implemented solution effectively prevents URL manipulation of login screen messages.
    • Test various scenarios, including potential edge cases, to guarantee a secure and robust solution.

Expected Outcomes

  • A secure mechanism that prevents users from manipulating the URL parameters for login screen messages, ensuring a safer user experience.
  • Enhanced user trust and confidence in our platform, as users are protected from potential low-severity exploits.
  • Improved overall security and adherence to best practices for URL parameter handling and user messaging.

Allow Users to Drop/Attach Multiple Files at Once

Description

Currently, users can only drop or attach one file at a time in the conversation. If a user tries to add more files, only the first file is added. To improve the user experience, we need to implement a solution that allows users to drop or attach up to five files at once. This should also include logic for handling unsupported file types, checking each file extension, and displaying an error message for each specific file with an unsupported extension.
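
A rough sketch of the intended handling, assuming a hypothetical helper and an illustrative extension allowlist (the real list of supported types lives elsewhere in the codebase):

```ts
// Sketch: accept up to five dropped files and report every unsupported extension individually.
const MAX_FILES = 5
const SUPPORTED_EXTENSIONS = ["pdf", "txt", "md", "csv", "json"] // illustrative list

export function handleDroppedFiles(files: FileList, onError: (msg: string) => void): File[] {
  const accepted: File[] = []

  for (const file of Array.from(files).slice(0, MAX_FILES)) {
    const ext = file.name.split(".").pop()?.toLowerCase() ?? ""
    if (!SUPPORTED_EXTENSIONS.includes(ext)) {
      // One clear error per unsupported file, rather than silently dropping it.
      onError(`"${file.name}" has an unsupported extension (.${ext}).`)
      continue
    }
    accepted.push(file)
  }

  if (files.length > MAX_FILES) {
    onError(`Only the first ${MAX_FILES} files were added.`)
  }
  return accepted
}
```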

Objective

Our goal is to enhance the file handling functionality in the conversation, allowing users to drop or attach multiple files at once and providing clear error messages for unsupported file types.

Actions and Considerations (ACC)

  1. Update File Handling Functionality:

    • Modify the existing code that handles file dropping and attachment to support the addition of up to five files at once.
  2. Implement Unsupported File Type Logic:

    • Extend the current logic for handling unsupported file types to work with multiple files, checking each file extension and displaying an error message for each specific file with an unsupported extension.
  3. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the updated file handling functionality works as expected, allowing users to drop or attach multiple files at once and providing clear error messages for unsupported file types.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.
    • Verify that the changes work correctly both locally and on Vercel preview.

Expected Outcomes

  • Improved user experience by allowing users to drop or attach multiple files at once in the conversation.
  • Enhanced file handling functionality that provides clear error messages for each specific file with an unsupported extension when attempting to add multiple files.
  • Enhanced overall functionality and adherence to best practices for file handling and management in a chat interface.

Clean Up Supabase Tables and Fields for Optimization

Description

Currently, we have several tables and fields in our Supabase database that are no longer in use. To improve performance and maintain a cleaner database structure, we should delete these unnecessary tables and fields. The tables that need to be deleted are "presets", "preset_workspaces", "prompts", "prompts_workspaces", "tool_workspaces", "collections", "collection_workspaces", "collection_files", "assistant_workspaces", "assistant_tools", "assistant_files", and "assistant_collections". Additionally, we should remove fields from the "profiles" table that are no longer needed, such as API key fields like "anthropic_api_key" and "azure_openai_api_key", among others.

Objective

Our goal is to optimize the Supabase database by cleaning up unnecessary tables and fields, ensuring that the application continues to function correctly without any bugs or problems.

Actions and Considerations (ACC)

  1. Analyze Tables and Fields:

    • Review the existing tables and fields in the Supabase database to confirm that they are no longer in use and can be safely deleted.
  2. Delete Unnecessary Tables:

    • Delete the following tables from the Supabase database: "presets", "preset_workspaces", "prompts", "prompts_workspaces", "tool_workspaces", "collections", "collection_workspaces", "collection_files", "assistant_workspaces", "assistant_tools", "assistant_files", and "assistant_collections".
  3. Remove Unnecessary Fields from Profiles Table:

    • Delete fields from the "profiles" table that are no longer needed, such as API key fields like "anthropic_api_key" and "azure_openai_api_key", among others.
  4. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the application continues to function correctly without any bugs or problems after the tables and fields have been deleted.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.
    • Verify that the changes work correctly both locally and on Vercel preview.

Expected Outcomes

  • A cleaner and more optimized Supabase database, with unnecessary tables and fields removed.
  • Improved database performance and maintainability, with a more streamlined and organized structure.
  • Enhanced overall functionality and adherence to best practices for database management and optimization.

Resolve Screen Overflow Issue with Multiple Files in Chat

Description

We have identified an issue where the screen overflows when a user adds multiple files in the chat, causing the files to extend beyond the intended ChatFilesDisplay box. The number of files that causes overflow depends on factors such as file names and screen size. Although a quick fix for larger screens has been suggested by adding the max-w-full class to the div containing the files section, this solution does not work on mobile devices. Our goal is to find a responsive solution that works on both mobile and desktop devices to ensure a better user experience when dealing with numerous files in the chat.

  • (Screenshots of the overflow attached.)
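
One possible direction, sketched with illustrative Tailwind utility classes (the real ChatFilesDisplay markup may differ), is to let the file chips wrap and cap both their width and the container height:

```tsx
// Sketch only: a wrapping, width-bounded container for the attached-file chips.
function ChatFilesDisplay({ files }: { files: { id: string; name: string }[] }) {
  return (
    <div className="flex w-full max-w-full flex-wrap gap-2 overflow-y-auto max-h-28">
      {files.map(file => (
        // Each chip truncates long file names instead of pushing the layout wider.
        <div key={file.id} className="max-w-[10rem] truncate rounded border px-2 py-1 text-xs sm:max-w-[14rem]">
          {file.name}
        </div>
      ))}
    </div>
  )
}
```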

Objective

Our goal is to identify and implement a responsive design solution that prevents screen overflow when a user adds multiple files in the chat, ensuring optimal user experience on both mobile and desktop devices.

Actions and Considerations (ACC)

  1. Responsive Design Solution:

    • Investigate and develop a layout solution that accommodates multiple files in the chat without causing screen overflow on both mobile and desktop devices.
    • Ensure the solution is compatible with various screen sizes and resolutions.
  2. Testing and Quality Assurance:

    • Conduct extensive testing on various devices, platforms, and browsers to ensure compatibility.
    • Test standard and edge-case scenarios with different numbers and lengths of file names to guarantee a reliable and robust solution.

Expected Outcomes

  • A responsive design solution that prevents screen overflow when dealing with numerous files in the chat on both mobile and desktop devices, considering various factors such as file names and screen dimensions.
  • Improved user experience and satisfaction, as users can interact with multiple files in the chat without encountering screen overflow issues.
  • Enhanced user trust and confidence in our platform, as users benefit from a more reliable and visually appealing chat interface.

Implement Secure "Delete All Chats" Feature in User Profile Settings

Description

To provide users with better control over their chat history and data, we aim to implement a "Delete All Chats" feature within the application settings. This feature will allow users to delete their entire chat history securely and efficiently, ensuring their privacy and autonomy.
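
A minimal sketch of the deletion call, assuming a "chats" table keyed by "user_id"; the actual schema and any cascading deletes for messages would need to be confirmed:

```ts
import { createClient } from "@supabase/supabase-js"

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

// Sketch: related messages are assumed to be removed via ON DELETE CASCADE, or deleted first.
export async function deleteAllChats(userId: string): Promise<void> {
  const { error } = await supabase.from("chats").delete().eq("user_id", userId)
  if (error) throw new Error(`Failed to delete chats: ${error.message}`)
}
```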

Objective

Our goal is to design and implement a user-friendly and secure "Delete All Chats" feature in the application settings, enabling users to permanently remove their chat history with a clear confirmation process.

Actions and Considerations (ACC)

  1. Design and Implementation:

    • Add a "Delete all chats" text item in the Profile tab within the settings menu.
    • Place a "Delete All" button with red color to the right of the text item, indicating the action's permanence and importance.
  2. Confirmation Process:

    • Implement a popup window that appears when the user clicks the "Delete All" button, asking for confirmation to delete all chats.
    • Ensure the popup clearly communicates the permanence of the action and provides "Yes" and "Cancel" options for the user.
  3. Data Deletion Process:

    • Develop a secure and efficient process for deleting all chat data associated with the user's account upon confirmation.
    • Ensure the feature works seamlessly across all devices, platforms, and browsers.
  4. Testing and Quality Assurance:

    • Conduct extensive testing on various devices, platforms, and browsers to ensure compatibility.
    • Test standard and edge-case scenarios to guarantee feature reliability and robustness.

Expected Outcomes

  • A fully functional and user-friendly "Delete All Chats" feature within the Profile tab of the application settings, allowing users to securely and permanently remove their chat history.
  • Enhanced user privacy and autonomy through increased control over chat data.
  • Improved user trust and confidence in our platform, as users can manage their chat history effectively and securely.

Enhancing Existing RAG Logic for Superior AI Responses

Background

The Retrieval-Augmented Generation (RAG) system is foundational in leveraging our extensive hacking database, Pinecone, to enhance HackerGPT's capability in delivering detailed and precise responses. To boost the system's efficiency and response quality, we aim to refine our methodology by optimizing how we process data and execute queries within Pinecone.

Objective

Our goal is to upgrade the RAG system to ensure more accurate and contextually relevant AI responses. This includes refining our approach to utilizing advanced techniques for data embedding, and employing sophisticated querying methods. These enhancements are designed to improve the user experience, offering seamless interaction without the need for extensive fine-tuning of the base language model.

Actions and Considerations (ACC)

  1. Advanced Data Handling and Embedding:

    • Develop a tool or script, ideally in JavaScript or Python, that can process and embed data from a range of formats (Markdown, PDF, TXT) using advanced techniques and platforms, such as unstructured.io. This tool is intended to improve the quality of vectors for Pinecone queries.
    • Create an advanced system for extracting and embedding data, explicitly using the text-embedding-3-large model so that the resulting vectors are as accurate and relevant as possible. This system must efficiently manage the intricacies of unstructured data, optimizing it for our RAG system (a minimal sketch of the embed-and-query flow follows this list).
  2. Sophisticated Query Optimization and Execution:

    • Formulate a method to compile chat history and other relevant data into a singular, comprehensive query for Pinecone, enhancing the system's ability to discern context and relevance.
    • Apply advanced NLP techniques and algorithms to not only improve the embedding process but also to refine the querying mechanism. This involves using sophisticated models and methods to accurately understand and interpret user queries, ensuring that search results from Pinecone are as precise and relevant as possible.
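
A minimal sketch of the embed-and-query flow referenced above, assuming the current OpenAI and Pinecone Node clients; the index name, the number of history turns, and the way chat history is folded into a single query string are illustrative assumptions:

```ts
import OpenAI from "openai"
import { Pinecone } from "@pinecone-database/pinecone"

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! })

export async function retrieveContext(chatHistory: string[], question: string) {
  // Compile recent history and the new question into one comprehensive query string.
  const query = [...chatHistory.slice(-4), question].join("\n")

  // Embed the query with text-embedding-3-large, as specified above.
  const embedding = await openai.embeddings.create({
    model: "text-embedding-3-large",
    input: query
  })

  const index = pinecone.index("hackergpt-knowledge") // hypothetical index name
  const results = await index.query({
    vector: embedding.data[0].embedding,
    topK: 5,
    includeMetadata: true
  })

  // Return the matched metadata so it can be woven into the model prompt.
  return results.matches?.map(m => m.metadata) ?? []
}
```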

Expected Outcomes

  • An improved RAG system capable of delivering superior AI responses through enhancements in data processing, embedding techniques, and query precision.
  • A cost-effective and efficient alternative to extensive base LLM fine-tuning, leveraging cutting-edge technological advancements for enhanced performance.

Add Help Guide for Web Scraper Plugin

🥰 Feature Overview

A user reported that they thought the Web Scraper plugin wasn't working because there is no "help" command available for that particular plugin. They mentioned that other plugins usually provide instructions when asked for help, but this plugin did not. This could confuse users unfamiliar with the plugin and may lead them to believe it is broken.

Screenshot_20240318_161325

🧐 Suggested Solution

A great idea to help users understand how a plugin works is to create a help command. This command would be triggered when a user asks for assistance. The AI would interpret the user's message and respond by generating a command that would activate a pre-built help guide, similar to other plugins. Alternatively, the AI could explain how to use the plugin in its own words by creating a system prompt. This simple solution would help users use the plugin more effectively.

📝 Additional Information

No response

Fix Password Reset Functionality and Redirection

Description

Numerous users have reported that they cannot reset their password, as they are directly redirected to the chat screen without being presented with the option to change their password. This issue also occurs on chatbotui.com. Upon investigation, it was found that the reset password link provides a temporary session that leads to the /login page, which then redirects users straight to the chat screen. Our goal is to implement the correct logic that displays the password change page every time and only redirects users back to the chat after they have entered a new password.
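
A possible direction, assuming the app uses supabase-js with the Next.js auth helpers (route names are illustrative): listen for the PASSWORD_RECOVERY event produced by the temporary session and route to a dedicated password change page before any other redirect fires.

```tsx
"use client"
import { useEffect } from "react"
import { useRouter } from "next/navigation"
import { createClientComponentClient } from "@supabase/auth-helpers-nextjs"

// Sketch only: the client factory and the /reset-password route are assumptions.
export function PasswordRecoveryListener() {
  const router = useRouter()
  const supabase = createClientComponentClient()

  useEffect(() => {
    const { data: { subscription } } = supabase.auth.onAuthStateChange(event => {
      // The reset link yields a temporary session; show the change-password page
      // instead of falling through to the normal post-login redirect.
      if (event === "PASSWORD_RECOVERY") router.push("/reset-password")
    })
    return () => subscription.unsubscribe()
  }, [router, supabase])

  return null
}
```

On the password change page, the new password can be saved with supabase.auth.updateUser({ password }), and only after that call succeeds is the user redirected back to the chat.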

Objective

Our goal is to fix the password reset functionality and ensure that users are presented with the password change page instead of being directly redirected to the chat screen.

Actions and Considerations (ACC)

  1. Update Password Reset Logic:

    • Modify the existing logic to correctly handle the temporary session provided by the reset password link.
    • Ensure that users are directed to the password change page instead of being automatically redirected to the chat screen.
  2. Implement Password Change and Redirection:

    • Create a secure password change page that allows users to enter a new password.
    • After users have successfully changed their password, redirect them back to the chat screen.
  3. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the implemented solution effectively resolves the password reset issue.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.

Expected Outcomes

  • A fixed password reset functionality that correctly displays the password change page instead of redirecting users straight to the chat screen.
  • Improved user experience and satisfaction, as users can now successfully reset their passwords without encountering unexpected redirections.
  • Enhanced security and adherence to best practices for password reset and user authentication processes.

Investigate and Resolve 429 Errors on [POST] /api/chat/mistral

Description

We have been experiencing regular 429 errors on [POST] /api/chat/mistral, indicating that too many requests are being sent to different services. This issue may be caused by sending multiple requests for generating standalone questions, querying Pinecone, and answering the question. Our goal is to investigate the root cause of these errors and implement a solution to prevent them from occurring in the future.
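
Whatever the root cause turns out to be, the handler can also degrade more gracefully by honoring Retry-After and backing off exponentially before retrying an upstream call. A hedged sketch (function name and retry counts are illustrative):

```ts
// Sketch: retry a request with exponential backoff when the upstream replies 429.
export async function fetchWithBackoff(url: string, init: RequestInit, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, init)
    if (res.status !== 429 || attempt === maxRetries) return res

    // Honor Retry-After when present, otherwise back off exponentially (1s, 2s, 4s, ...).
    const retryAfter = Number(res.headers.get("retry-after"))
    const delayMs = retryAfter > 0 ? retryAfter * 1000 : 1000 * 2 ** attempt
    await new Promise(resolve => setTimeout(resolve, delayMs))
  }
  throw new Error("unreachable")
}
```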

Objective

Our goal is to identify the reasons behind the 429 errors on [POST] /api/chat/mistral and implement a solution to optimize the request handling and prevent these errors from happening regularly.

Actions and Considerations (ACC)

  1. Investigate Root Cause:

    • Analyze server logs and request patterns to identify the primary reasons behind the 429 errors.
    • Assess the current request handling mechanism, including the number of requests sent to different services and their frequency.
  2. Optimize Request Handling:

    • Implement the solution that will solve the problem with 429 errors or minimize it.
  3. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the implemented solution effectively prevents 429 errors on [POST] /api/chat/mistral.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.

Expected Outcomes

  • A clear understanding of the root cause behind the 429 errors on [POST] /api/chat/mistral.
  • An optimized request handling mechanism that prevents these errors from occurring regularly.
  • Improved user experience and reliability of the application, as users no longer encounter 429 errors during regular usage.
  • Enhanced overall performance and efficiency of the application by optimizing the number of requests sent to different services.

Stop AI Response When User Changes or Creates a New Chat

Description

We currently have a bug where the AI response continues streaming even if the user switches to a different chat or creates a new chat, causing the UI to break. To fix this issue and provide a better user experience, we need to stop the AI response when the user changes or creates a new chat.
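
A minimal sketch of one way to do this with an AbortController; helper names (streamChat, stopStreaming, onToken) are illustrative, not the app's real API. Aborting the controller cancels the in-flight request so no further tokens are appended to the old chat.

```ts
// One shared AbortController per streaming request.
let abortController: AbortController | null = null

export async function streamChat(body: unknown, onToken: (chunk: string) => void) {
  abortController?.abort() // cancel any previous stream first
  abortController = new AbortController()

  try {
    const res = await fetch("/api/chat/mistral", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
      signal: abortController.signal
    })

    const reader = res.body!.getReader()
    const decoder = new TextDecoder()
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      onToken(decoder.decode(value))
    }
  } catch (err) {
    // An abort just means the user switched or created a chat mid-stream; ignore it.
    if ((err as Error).name !== "AbortError") throw err
  }
}

// Call this from the chat-change and new-chat handlers.
export function stopStreaming() {
  abortController?.abort()
}
```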

Objective

Our goal is to fix the bug that causes the AI response to continue streaming when the user changes or creates a new chat, ensuring a smoother user experience and preventing UI errors.

Actions and Considerations (ACC)

  1. Detect Chat Changes:

    • Implement a mechanism to detect when the user switches to a different chat or creates a new chat.
  2. Stop AI Response:

    • When a chat change is detected, stop the AI response from streaming, preventing any UI errors.
  3. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the AI response stops streaming correctly when the user changes or creates a new chat.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.
    • Verify that the changes work correctly both locally and on Vercel preview.

Expected Outcomes

  • A fix for the bug that causes the AI response to continue streaming when the user changes or creates a new chat, improving the overall user experience.
  • Prevention of UI errors caused by the AI response streaming in the wrong chat context.
  • Enhanced overall functionality and adherence to best practices for chat management and AI response handling.

Fix Regeneration Functionality Deleting Messages When User Hits Rate Limit

Description

There is a bug where the regeneration function deletes messages from the chat if the user tries to use it when they have hit the rate limit. Currently, when a user hits the rate limit and clicks on the regenerate button, the message is deleted, and then the rate limit popup is shown. This behavior is incorrect and should be fixed. Instead, when a user hits the rate limit and clicks on the regenerate button, the rate limit popup should be displayed, and the regeneration functionality should not be executed at all.
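
A sketch of the intended ordering, with the helpers passed in as hypothetical dependencies rather than the component's real API:

```ts
type RateLimitStatus = { limited: boolean; timeRemaining: string }

// Sketch: check the rate limit before touching the chat history at all.
export async function handleRegenerate(deps: {
  getRateLimitStatus: () => Promise<RateLimitStatus>
  showRateLimitPopup: (timeRemaining: string) => void
  removeLastAssistantMessage: () => void
  regenerate: () => Promise<void>
}) {
  const status = await deps.getRateLimitStatus()
  if (status.limited) {
    // Show the popup first and bail out before any message is removed.
    deps.showRateLimitPopup(status.timeRemaining)
    return
  }

  deps.removeLastAssistantMessage()
  await deps.regenerate()
}
```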

Objective

Our goal is to fix the regeneration functionality bug that deletes messages when the user hits the rate limit, ensuring that the regeneration function is not executed, and the rate limit popup is displayed correctly.

Actions and Considerations (ACC)

  1. Analyze Current Implementation:

    • Investigate the current implementation of the regeneration functionality and the rate limit handling to identify the cause of the bug.
  2. Update Regeneration Functionality:

    • Ensure that the rate limit popup is displayed when the user has hit the rate limit, and the regeneration functionality is not executed.
  3. Integrate Updated Functionality with Chat Interface:

    • Update the chat interface to include the modified regeneration functionality.
  4. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the regeneration functionality works as expected when the user hits the rate limit, and the rate limit popup is displayed correctly without deleting any messages.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.

Expected Outcomes

  • A fixed regeneration functionality that correctly handles user interactions when they hit the rate limit, ensuring that messages are not deleted, and the rate limit popup is displayed appropriately.
  • Improved user satisfaction, as users will no longer lose messages due to the bug when hitting the rate limit and trying to regenerate a response.
  • Enhanced overall functionality and adherence to best practices for rate limit handling and chat interface design.

Implement Pre-built Plugin Queries for Enhanced User Guidance and Engagement

Background

In order to enhance user experience and facilitate better understanding of our plugin functionalities, we aim to implement pre-built queries for each plugin within our application. These queries will provide users with examples and guidance on how to effectively utilize the plugins, making their interactions more productive and engaging.

Objective

Our goal is to design and implement a user-friendly feature that displays pre-built queries for each plugin, helping users to easily understand and use the plugins. These queries should be visible only when the user has not typed anything in the chat input field and is not currently typing.
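
A sketch of what the "simple configuration file" could look like; the plugin IDs and example queries below are illustrative placeholders:

```ts
// Sketch: one easily editable map from plugin ID to its four starter queries.
export const PLUGIN_STARTER_QUERIES: Record<string, string[]> = {
  subfinder: [
    "Find subdomains of example.com",
    "Run a passive subdomain scan on example.com",
    "Enumerate subdomains of example.com and only show resolvable ones",
    "List subdomains of example.com in JSON output"
  ],
  katana: [
    "Crawl example.com and list discovered endpoints",
    "Spider example.com up to depth 3",
    "Crawl example.com and extract JavaScript files",
    "Find all forms on example.com"
  ]
}
```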

Actions and Considerations (ACC)

  1. Design and Implementation:

    • Create a dynamic list of pre-built queries for each plugin, with four distinct examples per plugin.
    • Design the feature to display the relevant pre-built queries only in new chats.
    • Ensure the queries are easily editable through a simple configuration file.
  2. Plugin-Specific Queries:

    • Develop a system that associates each pre-built query with its respective plugin, ensuring the appropriate queries are displayed when a user selects a specific plugin.
    • Make sure the queries are tailored to showcase the unique capabilities and functions of each plugin.
  3. User Interface and Experience:

    • Design the pre-built queries to be easily clickable, allowing users to insert the selected query into the chat input field with a single click.
    • Ensure the feature is intuitive and visually appealing, enhancing the overall user experience.
  4. Testing and Quality Assurance:

    • Conduct extensive testing on various devices, platforms, and browsers to ensure compatibility.
    • Test standard and edge-case scenarios to guarantee feature reliability and robustness.

Expected Outcomes

  • A fully functional and user-friendly pre-built query feature that helps users better understand and utilize the plugins.
  • Improved user experience and engagement through clear and accessible examples of plugin usage.
  • A simple and easily editable configuration file for managing and updating pre-built queries.
  • Enhanced user satisfaction and confidence in our platform, as users can quickly learn and benefit from the capabilities of each plugin.

Create Admin Dashboard for Feedback System

Description

To effectively manage and analyze user feedback, we need to create an admin dashboard for the feedback system. The dashboard should securely present information about feedback from Supabase in a clean interface, accessible only by admins. The page should implement role-based access control, allowing only specific users with the role of "moderator" to access the page. The dashboard should display a list of user feedback with filters and the ability to mark feedback as reviewed.
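
A minimal sketch of the role check, assuming a hypothetical "role" column on the profiles table; the column name, the "moderator" value, and the query shape are assumptions:

```ts
import { SupabaseClient } from "@supabase/supabase-js"

// Sketch: gate the feedback dashboard on the caller's profile role.
export async function assertModerator(supabase: SupabaseClient, userId: string): Promise<void> {
  const { data, error } = await supabase
    .from("profiles")
    .select("role")
    .eq("user_id", userId)
    .single()

  if (error || data?.role !== "moderator") {
    // Missing profiles and non-moderators are treated the same: no dashboard access.
    throw new Error("Forbidden: moderator role required")
  }
}
```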

Assignee

@fkesheh

Objective

Our goal is to create a secure and user-friendly admin dashboard for the feedback system, enabling moderators to efficiently review, analyze, and manage user feedback.

Actions and Considerations (ACC)

  1. Implement Role-Based Access Control:

    • Add an additional field in the database of profiles to indicate the user's role (normal user or moderator).
    • Ensure that only users with the role of "moderator" can access the admin dashboard.
  2. Create Admin Dashboard Interface:

    • Design a clean and user-friendly interface for the admin dashboard.
    • Display a list of user feedback with short summaries, including "detailed_feedback" (if provided), "feedback" (bad or good), "reason", "model" (AI model name), "plugin" (if used), "allow_sharing" (user's permission to share chat history), and dates ("created_at" and "updated_at").
    • Inspiration example: (screenshot attached)
  3. Implement Filters and Search Functionality:

    • Add filters for feedback (good or bad), AI model used, and plugin usage (used or not).
  4. Display Detailed Feedback Information:

    • Allow moderators to click on each feedback box to view all related information from the Supabase database, including the AI message or full chat history.
  5. Add Feedback Review Status:

    • Include a field in each feedback box to indicate whether the feedback has been reviewed or not.
    • Enable moderators to mark feedback as reviewed.
  6. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the admin dashboard works as expected and securely manages user feedback.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.
    • Verify that the changes work correctly both locally and on Vercel preview.

Expected Outcomes

  • A secure and user-friendly admin dashboard for the feedback system, allowing moderators to efficiently review, analyze, and manage user feedback.
  • Improved feedback management and analysis, enabling better understanding of user needs and preferences.
  • Enhanced overall functionality and adherence to best practices for feedback management and role-based access control.

Implement Rate Limit Analysis for User Messages and Tokens via Helicone.ai Integration

Background

As part of our ongoing efforts to optimize user engagement and resource allocation, there's a need to analyze user interaction with our models in terms of message frequency and token consumption. This analysis will enable us to fine-tune the balance between free and plus messages, ensuring a sustainable and user-friendly service. The integration with Helicone.ai presents an opportunity to achieve this by tracking and analyzing the number of messages sent and the total token count within those messages. This data is crucial for adjusting our service offerings and maintaining a competitive edge.

Objective

To integrate Helicone.ai with our existing OpenAI and OpenRouter setup, enabling real-time analysis of user message rates and token consumption. This integration should respect user privacy by only transmitting necessary identifiers, such as the user ID from the Supabase profiles table. The ultimate goal is to gather actionable insights that will guide the adjustment of free and plus message allowances based on actual usage patterns.
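
A hedged sketch of the client-side wiring, based on Helicone's documented proxy-style integration; the base URL, header names, and model ID below should be verified against Helicone's docs and our own configuration before use.

```ts
import OpenAI from "openai"

// Sketch: route OpenAI traffic through Helicone's proxy and tag requests with the user ID.
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1", // assumed proxy base URL
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`
  }
})

export async function chatWithUsageTracking(userId: string, content: string) {
  // Only the Supabase profile user ID is attached, per the privacy requirement.
  return openai.chat.completions.create(
    {
      model: "gpt-4-turbo-preview", // illustrative model ID
      messages: [{ role: "user", content }]
    },
    { headers: { "Helicone-User-Id": userId } }
  )
}
```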

Actions and Considerations (ACC)

  1. Helicone.ai Integration:

    • Establish a connection between Helicone.ai and our OpenAI API, ensuring seamless data flow for analysis.
    • Establish a connection between Helicone.ai and our OpenRouter API, ensuring seamless data flow for analysis.
    • Configure the integration to capture and transmit user IDs as the sole piece of personal information, prioritizing user privacy. Reference: https://docs.helicone.ai/features/advanced-usage/user-metrics
  2. Privacy and Security:

    • Ensure that all data transmissions are privacy-friendly.
    • Implement robust security measures to protect the data collected and transmitted for analysis.
  3. Integration Testing:

    • Conduct thorough testing of the Helicone.ai integration to ensure accurate data collection and analysis.
    • Validate that user IDs are correctly associated with message and token counts without compromising privacy.

Expected Outcomes

  • Successful integration of Helicone.ai for real-time analysis of user message rates and token consumption.
  • Informed decisions on adjusting free and plus message allowances, enhancing user satisfaction and service sustainability.
  • Maintenance of user privacy and security throughout the data collection and analysis process.

Integrate User Feedback System for AI Message Responses

Description

To improve the quality and relevance of AI responses, we aim to integrate a user feedback system that allows users to express their dissatisfaction with specific AI messages. This system will include a 'dislike' icon alongside each AI response and an option for users to provide additional feedback when necessary. By analyzing the collected feedback and user interaction data, we can continuously refine the RAG system and deliver increasingly relevant and accurate AI responses based on real user experiences.
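
A minimal sketch of how a dislike could be recorded, assuming a hypothetical "feedback" table whose columns mirror the fields later listed for the admin dashboard:

```ts
import { createClient } from "@supabase/supabase-js"

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

// Sketch: table and column names are assumptions, not the live schema.
export async function recordDislike(params: {
  messageId: string
  userId: string
  model: string
  ragUsed: boolean
  detailedFeedback?: string
}) {
  const { error } = await supabase.from("feedback").insert({
    message_id: params.messageId,
    user_id: params.userId,
    model: params.model,
    rag_used: params.ragUsed, // lets us spot patterns in disliked RAG-enhanced responses
    feedback: "bad",
    detailed_feedback: params.detailedFeedback ?? null
  })
  if (error) throw error
}
```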

Objective

Our goal is to design and implement a user feedback system for AI message responses, enabling users to express their dissatisfaction and provide additional feedback when necessary.

Actions and Considerations (ACC)

  1. User Feedback System:
    • Integrate a 'dislike' icon alongside each AI response, allowing users to express dissatisfaction with specific responses.
    • Record whether a disliked message utilized the RAG system, gathering data to identify patterns or areas needing improvement in the RAG-enhanced responses.
    • Include an option for 'Provide additional feedback' when users click the dislike icon, prompting users to give more detailed feedback about their dissatisfaction.

Expected Outcomes

  • A user-friendly feedback system that allows users to express dissatisfaction with AI responses and provide additional feedback when necessary.

Implement Email Display in User Profile Settings

Background

As part of our ongoing efforts to improve user experience and transparency, we aim to enhance the profile settings within our application. This improvement involves enabling users to view their email addresses within the "Profile" section, providing a clear and accessible display of their account information.

Objective

Our goal is to design and implement a user-friendly feature in the application settings that allows users to view their email addresses within the "Profile" section.

Actions and Considerations (ACC)

  1. API Design & Front-End Development:

    • Design an intuitive "Email" section within the existing "Profile" tab in the settings menu, displaying the user's current email address.
    • Ensure the email address is presented in a simple and easy-to-read format.
  2. Security Measures:

    • Ensure high-security standards to protect users' email addresses and maintain user privacy.
  3. Testing and Quality Assurance:

    • Conduct extensive testing on various devices, platforms, and browsers to ensure compatibility.
    • Test standard and edge-case scenarios to guarantee feature reliability and robustness.

Expected Outcomes

  • A fully functional and user-friendly email display feature within the "Profile" section of the application settings.
  • Enhanced user experience and transparency through clear and accessible presentation of account information.
  • Maintained security and protection of users' email addresses, ensuring user trust and confidence in our platform.

Display Error Message for Last User Message Exceeding CHUNK_SIZE

Description

To improve the user experience and help the AI keep track of the conversation, we need to display an error message, in the form of an AI response, when the user's last message is longer than the CHUNK_SIZE for the selected model. This error message should later be removed from the chat history sent with subsequent requests, to avoid confusion and maintain context for the AI.
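
A rough sketch of the check, with per-model CHUNK_SIZE values and the transient-message marker shown purely as illustrative assumptions:

```ts
// Sketch: flag an oversized last message with a transient "AI" error reply.
const CHUNK_SIZE: Record<string, number> = {
  hackergpt: 12000,
  "gpt-4-turbo": 24000
}

export function checkLastMessageSize(model: string, lastUserMessage: string) {
  const limit = CHUNK_SIZE[model]
  if (limit && lastUserMessage.length > limit) {
    return {
      role: "assistant" as const,
      content: `Your last message is too long for ${model} (limit ${limit} characters). Please shorten it and try again.`,
      transient: true // marker so this error reply is dropped from future chat history
    }
  }
  return null
}
```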

Objective

Our goal is to implement a feature that displays an error message when the user's last message exceeds the CHUNK_SIZE limit and automatically removes the error message from the chat history in the future.

Actions and Considerations (ACC)

  1. Detect Exceeded CHUNK_SIZE:

    • Implement a check to determine if the user's last message is longer than the CHUNK_SIZE for the specific model.
  2. Display Error Message:

    • If the user's last message exceeds the CHUNK_SIZE limit, display an error message in the form of an AI response.
  3. Remove Error Message from Chat History:

    • Implement a mechanism to automatically delete the error message from the chat history in the future to maintain context for the AI.
  4. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the error message is displayed correctly when the user's last message exceeds the CHUNK_SIZE limit and is removed from the chat history as expected.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.
    • Verify that the changes work correctly both locally and on Vercel preview.

Expected Outcomes

  • Improved user experience by providing clear error messages when the user's last message exceeds the CHUNK_SIZE limit.
  • Enhanced AI understanding of the conversation by automatically removing error messages from the chat history.
  • Enhanced overall functionality and adherence to best practices for error handling and chat history management.

Improve Plugin Selector Accessibility

Description

The plugin selector can currently only be opened by clicking on the arrow-down icon; nothing happens when you click on 'No plugin selected', on both desktop and mobile. This can be improved by attaching the onClick handler to the whole plugin selector instead of just the arrow-down icon.

(Screenshot attached: the blue area is clickable, the red area is not.)
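
A minimal sketch of the fix, with hypothetical component and prop names: the click (and keyboard) handler moves to the whole selector row.

```tsx
// Sketch only: the entire row is the trigger, not just the arrow icon.
function PluginSelectorTrigger({ label, onOpen }: { label: string; onOpen: () => void }) {
  return (
    <div
      role="button"
      tabIndex={0}
      className="flex cursor-pointer items-center gap-2"
      onClick={onOpen}
      onKeyDown={e => e.key === "Enter" && onOpen()}
    >
      <span>{label || "No plugin selected"}</span>
      {/* The arrow icon no longer needs its own handler */}
      <span aria-hidden>▼</span>
    </div>
  )
}
```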

Actions and Considerations (ACC)

  1. Functionality:
    • Clicking anywhere on the Plugin Selector opens the selector
  2. Testing:
    • Works on both desktop and mobile as expected

Implement account deletion feature in user settings

Background

In line with our commitment to prioritizing user privacy and autonomy, we are initiating the development of a feature that empowers users to delete their accounts within the application settings. This feature will guarantee users have complete control over their personal data, adhering to contemporary data protection standards and regulations.

Assignee

@Fx64b

Objective

Our goal is to design and implement a user-friendly, secure, and compliant feature in the application settings that enables users to delete their accounts and all associated data. This includes rate limits, subscription histories, and any other information that can be removed from our system.
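
A hedged sketch of the server-side deletion step, assuming a service-role Supabase client and illustrative table names for the user-specific data; rows with ON DELETE CASCADE foreign keys to auth.users would disappear automatically when the account is removed.

```ts
import { createClient } from "@supabase/supabase-js"

// Sketch: server-side only, using the service-role key; table names are assumptions.
const supabaseAdmin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

export async function deleteAccount(userId: string): Promise<void> {
  // Remove application data that is not covered by cascading deletes.
  await supabaseAdmin.from("rate_limits").delete().eq("user_id", userId)
  await supabaseAdmin.from("subscriptions").delete().eq("user_id", userId)

  // Finally remove the auth user itself.
  const { error } = await supabaseAdmin.auth.admin.deleteUser(userId)
  if (error) throw error
}
```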

Actions and Considerations (ACC)

  1. API Design & Front-End Development:

    • Design a new "Data Controls" tab in the settings menu to house data management features.
    • Implement a prominent and clear "Delete Account" button within this new tab.
  2. Data Deletion Process:

    • Ensure the deletion process encompasses all user-specific data, such as rate limits, subscription history, and other relevant information.
    • Develop a systematic approach to confirm the permanent removal of user data from our system.
  3. Security Measures:

    • Implement a multi-step confirmation process for account deletion to prevent accidental deletions.
    • Incorporate high-security standards and safeguards to avoid unauthorized deletions.
  4. Testing and Quality Assurance:

    • Conduct extensive testing on various devices, platforms, and browsers to ensure compatibility.
    • Test standard and edge-case scenarios to guarantee feature reliability and robustness.

Expected Outcomes

  • A fully functional and user-friendly account deletion feature that offers a seamless experience.
  • Comprehensive and systematic removal of user data, including rate limits, subscription history, and other relevant information.
  • Compliance with data protection standards and regulations, enhancing user trust and confidence in our platform.

Implement User Option to Disable RAG Enhanced Search

Description

We are using RAG to enhance user searches, but in some cases, users may not need or want the RAG functionality, such as when coding or performing other tasks. To accommodate user preferences, we need to provide an option to disable the RAG by adding a switch with the text "Enhance Search" next to it. When users hover over the "Enhance Search" text, we should provide an explanation of what the Enhance Search feature is. On larger screens, the Enhance Search switch should be placed next to the plugins select bar. For mobile screens, we need to find an alternative placement due to space limitations.
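
A minimal sketch of the switch, with hypothetical prop names and hover text; how the preference is passed to the RAG pipeline is an assumption.

```tsx
// Sketch: an "Enhance Search" toggle with a hover explanation via the title attribute.
function EnhanceSearchToggle({ enabled, onChange }: { enabled: boolean; onChange: (v: boolean) => void }) {
  return (
    <label title="Enhance Search augments your question with relevant entries from our hacking database (RAG). Turn it off for tasks like coding where extra context is not needed.">
      <input type="checkbox" checked={enabled} onChange={e => onChange(e.target.checked)} />
      <span>Enhance Search</span>
    </label>
  )
}
```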

Objective

Our goal is to implement a user-friendly option to enable or disable the RAG Enhance Search feature, providing users with more control over their search experience.

Actions and Considerations (ACC)

  1. Design and Implement Enhance Search Switch:

    • Create an on-off switch for the Enhance Search feature with the label "Enhance Search."
    • Place the switch next to the plugins select bar on larger screens and find an alternative placement for mobile screens.
  2. Add Hover Explanation:

    • Implement a tooltip or hover text that explains the Enhance Search feature when users hover over the label.
  3. Update RAG Functionality:

    • Modify the RAG functionality to respect the user's preference when the Enhance Search switch is turned off.
  4. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the Enhance Search switch works as expected and that the RAG functionality is correctly disabled when the switch is turned off.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.

Expected Outcomes

  • A user-friendly option to enable or disable the RAG Enhance Search feature, providing users with more control over their search experience.
  • Improved user experience and satisfaction, as users can now customize their search experience based on their needs and preferences.
  • Enhanced overall functionality and adherence to best practices for user customization and control over search enhancements.

Google Sign-In Redirection and Session Initialization Bug

Issue Overview:
When users attempt to log in via Google Sign-In on https://chat.hackerai.co/, they experience a multi-step redirection process. Initially, users are prompted to select an account (or this step is skipped if only one account is present), after which they are redirected to the homepage (https://chat.hackerai.co/) instead of directly to the chat interface (https://chat.hackerai.co/chat). To proceed, users must manually click "Start Chatting," which only then redirects them to the chat interface. However, upon this redirection, it appears that the user session is not properly initialized, as evidenced by the inability to create workspaces, prompts, or files—effectively rendering the application non-functional. A page reload clears the issue, suggesting it may be a session initialization or redirection problem.

  • Screenshot_20240205_125359

Comparison with Email Login:
For contrast, logging in via email directly redirects users to "https://chat.hackerai.co/chat". First-time email logins are redirected to a setup completion page at "https://chat.hackerai.co/setup", indicating a smoother and more consistent flow, which might be missing or malfunctioning in the Google Sign-In process.

  • Screenshot_20240205_124928

Suggested Solution:
One potential solution is to streamline the Google Sign-In process to mimic the email login flow, specifically by ensuring that users are directly redirected to "https://chat.hackerai.co/chat?code=" upon authentication. This approach might necessitate investigating the redirection logic post-Google authentication and ensuring that the session initialization properly reflects the user's logged-in state.

Additional Context:
It might be worth exploring if the setup completion page ("https://chat.hackerai.co/setup") plays a role in this issue, particularly in its absence or malfunction in the Google Sign-In flow. Ensuring a consistent user experience across different login methods is crucial for user retention and satisfaction.

Objective:
The goal is to diagnose and resolve the redirection and session initialization issues associated with Google Sign-In, to ensure a seamless and functional user experience equivalent to that of the email login pathway.

Implementing Google Sign-In with Supabase

Background

We're integrating Google Sign-In into our Supabase authentication system. This move is aimed at providing users with a faster, secure, and more convenient way to access our services using their Google accounts.
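
A minimal sketch of the sign-in call with supabase-js, where the redirect target is an assumption based on the email flow described in the previous issue:

```ts
import { createClient } from "@supabase/supabase-js"

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

// Sketch: trigger the Google OAuth flow and land the user on the chat after the callback.
export async function signInWithGoogle() {
  const { error } = await supabase.auth.signInWithOAuth({
    provider: "google",
    options: {
      redirectTo: `${window.location.origin}/chat` // assumed target, mirroring email login
    }
  })
  if (error) throw error
}
```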

Objective

Our goal is to seamlessly integrate Google Sign-In with Supabase, enhancing the security and simplicity of our user authentication process.

Actions and Considerations (ACC)

  1. Integration Setup:

    • Configure Google Sign-In in the Supabase authentication settings.
    • Link Google accounts with our user database in Supabase correctly.
  2. Frontend Implementation:

    • Add a Google Sign-In button to the login interface.
    • Implement the logic for handling Google Sign-In on the frontend.
  3. Backend Configuration:

    • Set up secure communication between our backend and Google's authentication system.
  4. User Experience and Testing:

    • Test the Google Sign-In feature on various devices and browsers.

Expected Outcomes

  • Google Sign-In effectively integrated with our Supabase authentication.
  • A more streamlined and secure login experience for users.

[Bug]

💻 Operating System

Other Linux

🌐 Browser

Firefox

🐛 Bug Description

POST /api/v2/chat/mistral 500 in 5892ms

🚦 Expected Behavior

Under normal circumstances, the endpoint returns a response, but now it only returns an error.

📷 Steps to Reproduce

Start the app normally and send a chat message; the request to /api/v2/chat/mistral fails with a 500 error instead of returning a response.

📝 Additional Context

No response

Implement Disable Auto-scrolling Feature When User Scrolls Up During AI Message Streaming

Description

To enhance the user experience during conversations, we should implement a feature that disables auto-scrolling when the user scrolls up while the AI message is being streamed. This functionality is similar to ChatGPT, where auto-scrolling is disabled when the user decides to scroll up, allowing them to read the message more comfortably before it finishes streaming. Additionally, the auto-scrolling should remain disabled until a new AI message is being streamed.
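
A rough sketch of the scroll-tracking logic, with illustrative function names and a 40px bottom threshold chosen arbitrarily:

```ts
// Sketch: track whether the user scrolled away from the bottom during streaming.
let autoScroll = true

export function onChatScroll(container: HTMLElement) {
  const distanceFromBottom = container.scrollHeight - container.scrollTop - container.clientHeight
  // Scrolling up while a message streams disables following; scrolling back down re-enables it.
  autoScroll = distanceFromBottom < 40
}

export function onToken(container: HTMLElement) {
  if (autoScroll) container.scrollTop = container.scrollHeight
}

export function onNewAiMessage() {
  autoScroll = true // following resumes only when a new AI message starts streaming
}
```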

Objective

Our goal is to improve the user experience by implementing a feature that disables auto-scrolling when the user scrolls up during AI message streaming and keeps it disabled until a new AI message is being streamed.

Actions and Considerations (ACC)

  1. Analyze ChatGPT's Implementation:

    • Study how ChatGPT has implemented the disable auto-scrolling feature when users scroll up during AI message streaming.
  2. Create Custom Code for Disable Auto-scrolling Feature with Persistent Disable:

    • Develop code that detects when the user scrolls up during AI message streaming and disables auto-scrolling accordingly.
    • Ensure the code keeps auto-scrolling disabled until a new AI message is being streamed.
  3. Testing and Quality Assurance:

    • Conduct thorough testing to ensure that the disable auto-scrolling feature with persistent disable works as expected and enhances the user experience during conversations.
    • Test various scenarios, including potential edge cases, to guarantee a robust and reliable solution.

Expected Outcomes

  • A disable auto-scrolling feature with persistent disable that improves the user experience during conversations by allowing users to read AI messages more comfortably before they finish streaming.
  • Improved user satisfaction, as users can now control the scrolling behavior during AI message streaming with minimal interaction.
  • Enhanced overall functionality and adherence to best practices for chat interface design and user experience.

[Bug] Not able to upload a file

💻 Operating System

Windows

🌐 Browser

Chrome

🐛 Bug Description

Vercel blocks files larger than 5 MB, so users get an error when they try to upload bigger files.

🚦 Expected Behavior

No response

📷 Steps to Reproduce

No response

📝 Additional Context

No response
