
Comments (16)

shashank42 commented on May 15, 2024

Got the same error. It seems the error comes from GitHub Actions and not the OpenAI API, but I could be wrong.


Traceback (most recent call last):
  File "/app/pr_agent/servers/github_action_runner.py", line 57, in <module>
    asyncio.run(run_action())
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/app/pr_agent/servers/github_action_runner.py", line 53, in run_action
    await PRAgent().handle_request(pr_url, body)
  File "/app/pr_agent/agent/pr_agent.py", line 25, in handle_request
    await PRDescription(pr_url).describe()
  File "/app/pr_agent/tools/pr_description.py", line 40, in describe
    await retry_with_fallback_models(self._prepare_prediction)
  File "/app/pr_agent/algo/pr_processing.py", line [20](https://github.com/glip-gg/btx-game/actions/runs/5642014830/job/15281052282#step:3:21)8, in retry_with_fallback_models
    return await f(model)
  File "/app/pr_agent/tools/pr_description.py", line 55, in _prepare_prediction
    self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
  File "/app/pr_agent/algo/pr_processing.py", line 43, in get_pr_diff
    diff_files = list(git_provider.get_diff_files())
  File "/app/pr_agent/git_providers/github_provider.py", line 84, in get_diff_files
    for file in files:
  File "/usr/local/lib/python3.10/site-packages/github/PaginatedList.py", line 69, in __iter__
    newElements = self._grow()
  File "/usr/local/lib/python3.10/site-packages/github/PaginatedList.py", line 80, in _grow
    newElements = self._fetchNextPage()
  File "/usr/local/lib/python3.10/site-packages/github/PaginatedList.py", line [21](https://github.com/glip-gg/btx-game/actions/runs/5642014830/job/15281052282#step:3:22)3, in _fetchNextPage
    headers, data = self.__requester.requestJsonAndCheck(
  File "/usr/local/lib/python3.10/site-packages/github/Requester.py", line 442, in requestJsonAndCheck
    return self.__check(
  File "/usr/local/lib/python3.10/site-packages/github/Requester.py", line 487, in __check
    raise self.__createException(status, responseHeaders, data)
github.GithubException.RateLimitExceededException: 403 {"message": "API rate limit exceeded for installation ID 28441098.", "documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"}


ilyadav commented on May 15, 2024

Hello,
it looks like this one is no longer OpenAI related, but a GitHub token rate limitation.
When using GITHUB_TOKEN, the rate limit is 1,000 requests per hour per repository.
If you exceed the rate limit, the response will have a 403 status and the x-ratelimit-remaining header will be 0.

Please see https://docs.github.com/en/rest/overview/resources-in-the-rest-api?apiVersion=2022-11-28#rate-limits-for-requests-from-github-actions
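
For reference, a minimal sketch (assuming PyGithub; the token placeholder is hypothetical) of checking the remaining quota behind that x-ratelimit-remaining header:

from github import Github

# Hypothetical check: how much of the core REST quota is left for this token
g = Github("<GITHUB_TOKEN>")
core = g.get_rate_limit().core
print(f"remaining {core.remaining}/{core.limit}, resets at {core.reset}")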

As a quick fix, I would suggest catching the exception and logging a clear message rather than failing with an unhandled traceback.

In pr_processing.py:

import logging

from github import RateLimitExceededException

    try:
        diff_files = list(git_provider.get_diff_files())
    except RateLimitExceededException:
        # Log a clear message and fall back to an empty diff instead of crashing
        logging.error('Rate limit exceeded for GitHub API.')
        diff_files = []

If you want, I can take this on as a small patch, and I would also love to work on a more robust solution to overcome this problem (see the sketch below).
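
For example, one such more robust option, as a rough sketch (hypothetical helper, not existing pr-agent code):

import logging
import time

from github import RateLimitExceededException

def get_diff_files_with_backoff(git_provider, max_retries=3, base_delay=5):
    # Hypothetical helper: retry the paginated diff fetch with exponential backoff
    for attempt in range(max_retries):
        try:
            return list(git_provider.get_diff_files())
        except RateLimitExceededException:
            delay = base_delay * 2 ** attempt
            logging.warning('GitHub rate limit hit, retrying in %s seconds', delay)
            time.sleep(delay)
    raise RuntimeError('GitHub API rate limit still exceeded after retries')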

Best regards
Ilya


ezzcodeezzlife commented on May 15, 2024

Just tested with another smaller repo, still no luck 😢


okotek commented on May 15, 2024

Hi

Can you check your rate limits under https://platform.openai.com/account/rate-limits and check the values for gpt-3.5-turbo and gpt-4?

We'll add handling for OpenAI rate limits soon, and will probably fall back to gpt-3.5-turbo in case of a small rate limit or unavailability of gpt-4.
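
As a rough sketch of that kind of fallback loop (hypothetical names, not the actual implementation):

async def run_with_fallback_models(run_fn, models=("gpt-4", "gpt-3.5-turbo-16k")):
    # Hypothetical: try each model in order, moving on when one is rate limited or unavailable
    last_error = None
    for model in models:
        try:
            return await run_fn(model)
        except Exception as err:  # e.g. a rate limit or model-availability error
            last_error = err
    raise last_error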


okotek commented on May 15, 2024

#105


ezzcodeezzlife commented on May 15, 2024

[screenshot of rate limits]
@okotek


okotek commented on May 15, 2024

It could be that 40K TPM is too small when the diff is moderate. I merged the PR; can you try again and see if the retry policy helps?


ezzcodeezzlife commented on May 15, 2024

Unfortunately, still the same issue. What TPM do you have, and how did you get it? Any more ideas? @okotek


KalleV commented on May 15, 2024

Did you also check your token usage at https://platform.openai.com/account/usage? I'm curious if an unexpectedly high number of tokens or a burst of requests was used.


ezzcodeezzlife commented on May 15, 2024

No real spike visible in the usage dashboard, but thanks for the info! @KalleV


KalleV commented on May 15, 2024

Nice, that's good to know. What if you click through the per-model usage metrics? On my page, a few entries showed up with quite a few tokens, like this one:

gpt-4-0613, 1 request
7,052 prompt + 218 completion = 7,270 tokens
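
Incidentally, a quick local way to estimate the prompt side of such counts (assuming the tiktoken package; the diff path is a placeholder):

import tiktoken

# Hypothetical check: how many tokens a PR diff would contribute to the prompt
enc = tiktoken.encoding_for_model("gpt-4")
diff_text = open("example.diff").read()  # placeholder path
print(len(enc.encode(diff_text)), "tokens in the diff")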


okotek commented on May 15, 2024

#117 fallback models implementation


ezzcodeezzlife commented on May 15, 2024

[screenshot]

Some commands started working for me, but commenting commands still lead to a 403.


ezzcodeezzlife commented on May 15, 2024

@okotek @KalleV any more ideas around this? 🤔 thank you


KalleV commented on May 15, 2024

I think I managed to quickly replicate the error from the OP in the https://platform.openai.com/playground while testing PR responses for a PR equivalent to ~4,800 tokens (GPT-4):

Rate limit reached for 10KTPM-200RPM in organization org-<id> on tokens per min. Limit: 10,000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.
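
For what it's worth, a minimal sketch of retrying on that error (assuming the pre-1.0 openai Python SDK, where the exception is openai.error.RateLimitError; the helper name is hypothetical):

import time

import openai
from openai.error import RateLimitError

def chat_with_retry(messages, model="gpt-4", max_retries=5):
    # Hypothetical helper: back off briefly whenever the TPM/RPM limit is hit
    # (assumes openai.api_key is already configured)
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except RateLimitError:
            time.sleep(2 ** attempt)
    raise RuntimeError("OpenAI rate limit still exceeded after retries")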


ezzcodeezzlife commented on May 15, 2024

It is fixed for me. Thank you for all your contributions ❤️

