
significant-gravitas / autogpt-code-ability

๐Ÿ–ฅ๏ธ AutoGPT's Coding Ability - empowering everyone to build software using AI

License: MIT License

Python 64.59% Dockerfile 0.20% Jinja 35.13% Shell 0.02% PLpgSQL 0.07%

autogpt-code-ability's People

Contributors

aarushik93, bentlybro, majdyz, ntindle, pwuts, swiftyos, torantulino



autogpt-code-ability's Issues

Update ai_model to be more generic

Currently we support both OpenAI and Groq, but both providers go through the OpenAIChatClient class in common/ai_model.py. That name is no longer accurate, since the client is not OpenAI-specific.

Rename this class to something more generic.
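One possible shape for the rename, as a sketch only: the class and field names below are hypothetical, since the issue only specifies the current `OpenAIChatClient` in `common/ai_model.py`.

```python
from dataclasses import dataclass

# Hypothetical generic replacement for OpenAIChatClient: one client class
# that carries the provider name instead of baking "OpenAI" into the type.
@dataclass
class ChatLLMClient:
    provider: str  # e.g. "openai" or "groq"
    model: str     # e.g. "gpt-4o" or "llama3-70b-8192"
    api_key: str = ""

    def describe(self) -> str:
        return f"{self.provider}:{self.model}"

client = ChatLLMClient(provider="groq", model="llama3-70b-8192")
print(client.describe())  # groq:llama3-70b-8192
```

The same class would then back every provider, with provider-specific behavior selected by the `provider` field rather than by the class name.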

Integrate with Claude 3.5 Sonnet

We want to be able to use the coding ability with Claude 3.5 Sonnet.

Right now we only have Groq & OpenAI integrations.

Add one for Anthropic (Claude 3.5 Sonnet).

#280 would be a good one to tackle before this issue.
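A minimal sketch of an Anthropic backend, assuming the official `anthropic` SDK (`pip install anthropic`); the wrapper names here are illustrative, and the import is deferred so the sketch loads without the SDK installed:

```python
CLAUDE_MODEL = "claude-3-5-sonnet-20240620"

def build_messages(prompt: str) -> list:
    # Anthropic's Messages API takes the same role/content shape as OpenAI chat.
    return [{"role": "user", "content": prompt}]

def complete(prompt: str, max_tokens: int = 1024) -> str:
    from anthropic import Anthropic  # deferred: pip install anthropic
    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    resp = client.messages.create(
        model=CLAUDE_MODEL,
        max_tokens=max_tokens,  # required by the Messages API
        messages=build_messages(prompt),
    )
    # Responses are a list of content blocks; the text lives in .text
    return resp.content[0].text
```

Since the message shape matches the existing OpenAI/Groq chat format, most of the adapter work should be in response handling rather than prompt construction.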

Handle app configuration in a more clean manner

Currently, all of the config is scattered throughout the codebase. For example:

if settings.hosted and os.getenv("HOSTED_DEPLOYMENT") == "google":
    db_name, db_username = await create_cloud_db(repo)

We need to refactor this into a config wrapper.
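One way such a wrapper could look, sketched with a plain dataclass; the `hosted` and `HOSTED_DEPLOYMENT` names come from the example above, the rest is illustrative:

```python
import os
from dataclasses import dataclass

# Sketch of a single config object gathering the scattered settings,
# read once from the environment instead of via ad-hoc os.getenv calls.
@dataclass(frozen=True)
class AppConfig:
    hosted: bool = False
    hosted_deployment: str = ""

    @classmethod
    def from_env(cls) -> "AppConfig":
        return cls(
            hosted=os.getenv("HOSTED", "0") == "1",
            hosted_deployment=os.getenv("HOSTED_DEPLOYMENT", ""),
        )

    @property
    def uses_google_cloud_db(self) -> bool:
        # Replaces the inline `settings.hosted and os.getenv(...) == "google"` check.
        return self.hosted and self.hosted_deployment == "google"
```

Call sites then read `config.uses_google_cloud_db` instead of reaching into the environment themselves, which also makes the branch trivially testable.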

Add Support for Ollama

Add support for Ollama: this would be a great one to have so the coding ability can run entirely locally.
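Ollama exposes an OpenAI-compatible endpoint, so one low-effort route (a sketch, not a tested integration) is to reuse the existing OpenAI client path with only a base URL and model change. The endpoint below is Ollama's default; the function names are illustrative:

```python
OLLAMA_BASE_URL = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible API

def make_local_client():
    from openai import OpenAI  # deferred: pip install openai
    # Ollama ignores the API key, but the client requires a non-empty one.
    return OpenAI(base_url=OLLAMA_BASE_URL, api_key="ollama")

def local_chat(prompt: str, model: str = "llama3") -> str:
    client = make_local_client()
    resp = client.chat.completions.create(
        model=model,  # any model pulled locally, e.g. `ollama pull llama3`
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

If this works in practice, Ollama support may reduce to adding one more provider configuration rather than a new client class.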

Implement Dynamic Retry Mechanism to Handle OpenAI API Rate Limits

I am a first-time user of the OpenAI API for GPT-4o and have encountered a limitation due to the low rate limits on my account. During my usage, I've found that my attempts to utilize the "Code Ability" feature frequently stall out because I quickly hit the per-minute rate limit.

Feature Request:

I am requesting the implementation of a retry mechanism or a waiting period that allows the server to automatically handle and wait out the per-minute rate limitations imposed by the OpenAI API. This feature should include a parser to read the suggested wait time from the OpenAI rate limiting error and wait at least that amount of time before retrying.

Proposed Solution:

  • Retry Mechanism: Implement a system where, upon encountering a rate limit error, the server will automatically wait for the required amount of time before retrying the request.

  • Error Parsing: Integrate a parser that reads the suggested wait time from the OpenAI rate limiting error response and ensures the server waits for at least that amount of time.

  • Configurable Wait Time: Allow users to configure additional wait time if desired, beyond the suggested minimum.

  • Notification: Provide logs to inform the user that the system is waiting due to rate limit restrictions and the duration of the wait.
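The retry-and-parse steps above could be sketched like this. The regex assumes the wait hint phrasing OpenAI rate-limit messages commonly use ("Please try again in 20s" or "in 6m0s"); that wording is not guaranteed, so the parser falls back to a configurable default:

```python
import re
import time

def parse_suggested_wait(message: str):
    """Extract the suggested wait (in seconds) from a rate-limit message.

    Handles both '...try again in 20s.' and '...try again in 6m0s.' shapes;
    returns None when no hint is found.
    """
    m = re.search(r"try again in (?:(\d+)m)?([\d.]+)s", message)
    if not m:
        return None
    minutes = int(m.group(1) or 0)
    return minutes * 60 + float(m.group(2))

def call_with_rate_limit_retry(fn, retries=5, extra_wait=1.0, fallback=30.0):
    # Generic wrapper: `fn` raises an exception whose str() carries the hint.
    # In real code this would catch openai.RateLimitError specifically.
    for attempt in range(retries):
        try:
            return fn()
        except Exception as err:
            wait = parse_suggested_wait(str(err))
            delay = (wait if wait is not None else fallback * (attempt + 1)) + extra_wait
            # Notification requirement: log the wait and its duration.
            print(f"Rate limited; waiting {delay:.1f}s before retry {attempt + 1}")
            time.sleep(delay)
    return fn()  # final attempt; exceptions propagate to the caller
```

`extra_wait` covers the "configurable additional wait time" point, and the growing fallback gives rough exponential behavior when the error carries no hint.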

Benefits:

  • Improved user experience by reducing manual intervention.

  • Allowing users to fully utilize the "Coding Ability" feature without being disrupted by rate limit issues.

  • Efficient handling of rate limits by adhering to the suggested wait times provided by OpenAI, optimizing the retry mechanism.

Additional Context:

  • Users with higher rate limits may not face this issue as frequently, but for new users or those with lower limits, this feature is essential to ensure consistent functionality.

  • This feature would particularly benefit users who are in the early stages of exploring and integrating the OpenAI API into their projects.

No module named 'click' docker install

I installed it via Docker, but the app keeps exiting:

2024-06-22 01:45:48 Traceback (most recent call last):
2024-06-22 01:45:48 File "", line 198, in _run_module_as_main
2024-06-22 01:45:48 File "", line 88, in _run_code
2024-06-22 01:45:48 File "/app/codex/main.py", line 5, in
2024-06-22 01:45:48 import click
2024-06-22 01:45:48 ModuleNotFoundError: No module named 'click'

Research Agent

Currently, this code ability is dependent on models such as GPT-4, GPT-4o, and Llama 3, which are not trained on the most recent code libraries and documentation. As a result, this service will often attempt to use outdated syntax, for example with the OpenAI libraries, which are updated frequently.
To solve this:

Add a new researcher agent that is able to scour the internet for up to date information about code and libraries
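One small building block such an agent would need, sketched here, is checking a library's current release so generated code can be validated against it. The URL shape is PyPI's public JSON metadata API; everything else is illustrative:

```python
import json
import urllib.request

def pypi_metadata_url(package: str) -> str:
    # PyPI's JSON API: metadata for the latest release of a package.
    return f"https://pypi.org/pypi/{package}/json"

def latest_version(package: str) -> str:
    # Network call: lets the agent compare the model's assumed version
    # against what is actually current before emitting code.
    with urllib.request.urlopen(pypi_metadata_url(package)) as resp:
        return json.load(resp)["info"]["version"]
```

Fetching changelogs and documentation for the detected version would layer on top of this, feeding the results back into the planning prompts.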

Implement Front-End Capability to Resume Unfinished Projects

I am a first-time user of the OpenAI API for GPT-4o and have encountered issues with my projects stalling due to rate limits. To address this, I propose a feature that allows users to pull an unfinished project from the database and resume it from where it left off.

Feature Request:

I am requesting the implementation of a front-end capability that enables users to retrieve an unfinished project from the database and continue working on it seamlessly. This feature would provide a robust solution to rate limit interruptions by allowing users to pick up their work without starting over and consuming OpenAI credits needlessly.

Proposed Solution:

  • Project Retrieval: Add functionality to the front-end that allows users to select and load an unfinished project from the database.

  • State Preservation: Ensure that the state of the project, including all intermediate steps and progress, is preserved and can be accurately resumed.

  • User Interface Update: Update the user interface to include options for viewing and selecting unfinished projects, with clear indications of their status and last activity.

Benefits:

  • Enhanced user experience by minimizing disruption due to rate limits and other interruptions.

  • Increased productivity by allowing users to continue their work from the point of interruption without losing progress.

  • Improved project management with the ability to track and manage multiple projects and their statuses.
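The retrieval side of this proposal could start with something like the helper below. The row shape is hypothetical, since the real schema lives in the codex database; the sketch only shows the filter-and-order logic the UI would need:

```python
from datetime import datetime

def unfinished_projects(rows: list) -> list:
    # Keep anything not marked completed, regardless of how it stalled.
    incomplete = [r for r in rows if r.get("status") != "completed"]
    # Most recently touched first, so the UI surfaces the likely resume target.
    return sorted(incomplete, key=lambda r: r["last_activity"], reverse=True)

rows = [
    {"id": 1, "status": "completed", "last_activity": datetime(2024, 6, 20)},
    {"id": 2, "status": "stalled", "last_activity": datetime(2024, 6, 22)},
    {"id": 3, "status": "in_progress", "last_activity": datetime(2024, 6, 21)},
]
print([r["id"] for r in unfinished_projects(rows)])  # [2, 3]
```

State preservation is the harder half: each intermediate step would need to be persisted as it completes so that resuming replays nothing already paid for in API credits.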

Make `CodeValidator` usable as standalone validator

Making database_schema and compiled_route_id optional in CodeValidator, and adding route_errors_as_todo and raise_validation_error parameters to CodeValidator.validate_code, would allow using it as a stand-alone code validation module. Like this:
https://github.com/Significant-Gravitas/AutoGPT/blob/7f6b7d6d7e5fd92c8816e7c39912c5a72b8e93c2/autogpt/autogpt/agents/prompt_strategies/code_flow.py#L297-L300

A stripped-down version is currently included in forge/utils/function, but it would be much better for maintainability if we could just import from codex.develop.
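A stand-in illustrating the proposed signature shape (this is not the real codex class, only a stub showing which parameters the issue asks to make optional, with parameter names taken from the issue text):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodeValidator:
    database_schema: Optional[str] = None    # optional per this issue
    compiled_route_id: Optional[str] = None  # optional per this issue

    def validate_code(
        self,
        code: str,
        route_errors_as_todo: bool = False,
        raise_validation_error: bool = True,
    ) -> str:
        # The real implementation lives in codex.develop; this stub only
        # shows that standalone use needs no schema or route context.
        return code

validator = CodeValidator()  # standalone: no DB schema, no route id
validator.validate_code("def f():\n    return 1")
```

With defaults like these, callers outside codex could validate arbitrary snippets without constructing any route or database context first.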
