Comments (16)
Got the same error. Seems the error is from GitHub Actions and not the OpenAI API. I could be wrong.
Traceback (most recent call last):
File "/app/pr_agent/servers/github_action_runner.py", line 57, in <module>
asyncio.run(run_action())
File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/app/pr_agent/servers/github_action_runner.py", line 53, in run_action
await PRAgent().handle_request(pr_url, body)
File "/app/pr_agent/agent/pr_agent.py", line 25, in handle_request
await PRDescription(pr_url).describe()
File "/app/pr_agent/tools/pr_description.py", line 40, in describe
await retry_with_fallback_models(self._prepare_prediction)
File "/app/pr_agent/algo/pr_processing.py", line 208, in retry_with_fallback_models
return await f(model)
File "/app/pr_agent/tools/pr_description.py", line 55, in _prepare_prediction
self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
File "/app/pr_agent/algo/pr_processing.py", line 43, in get_pr_diff
diff_files = list(git_provider.get_diff_files())
File "/app/pr_agent/git_providers/github_provider.py", line 84, in get_diff_files
for file in files:
File "/usr/local/lib/python3.10/site-packages/github/PaginatedList.py", line 69, in __iter__
newElements = self._grow()
File "/usr/local/lib/python3.10/site-packages/github/PaginatedList.py", line 80, in _grow
newElements = self._fetchNextPage()
File "/usr/local/lib/python3.10/site-packages/github/PaginatedList.py", line 213, in _fetchNextPage
headers, data = self.__requester.requestJsonAndCheck(
File "/usr/local/lib/python3.10/site-packages/github/Requester.py", line 442, in requestJsonAndCheck
return self.__check(
File "/usr/local/lib/python3.10/site-packages/github/Requester.py", line 487, in __check
raise self.__createException(status, responseHeaders, data)
github.GithubException.RateLimitExceededException: 403 {"message": "API rate limit exceeded for installation ID 28441098.", "documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"}
from pr-agent.
Hello,
it looks like this one is not OpenAI related, but a GitHub token rate limitation.
When using GITHUB_TOKEN, the rate limit is 1,000 requests per hour per repository.
If you exceed the rate limit, the response will have a 403 status and the x-ratelimit-remaining header will be 0.
please see https://docs.github.com/en/rest/overview/resources-in-the-rest-api?apiVersion=2022-11-28#rate-limits-for-requests-from-github-actions
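To illustrate the 403 case described above: when a request is rejected for rate limiting, the response carries the x-ratelimit-* headers. A minimal sketch of checking them (the `remaining_requests` helper and the sample headers dict are my own illustration, not pr-agent code):

```python
def remaining_requests(response_headers):
    """Read the remaining GitHub API quota from response headers (keys assumed lowercase)."""
    return int(response_headers.get("x-ratelimit-remaining", "0"))

# Simulated headers from a rate-limited 403 response:
headers = {"x-ratelimit-limit": "1000", "x-ratelimit-remaining": "0"}
print(remaining_requests(headers))  # → 0
```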
As a quick fix, I would suggest catching the exception and logging a message rather than failing, in pr_processing.py:
import logging

from github import RateLimitExceededException

try:
    diff_files = list(git_provider.get_diff_files())
except RateLimitExceededException as e:
    logging.error('Rate limit exceeded for GitHub API.')
If you want, I can take this small patch, and I would love to work on a more robust solution to overcome this problem.
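Beyond just logging the error, a more robust follow-up could retry with exponential backoff before giving up. A generic sketch under my own naming (this helper is not part of pr-agent; it takes the exception type as a parameter so it can wrap PyGithub's RateLimitExceededException):

```python
import time

def retry_on_exception(fn, exc_type, max_attempts=3, base_delay=0.01):
    """Call fn, retrying with exponential backoff when exc_type is raised.
    Re-raises the exception if all attempts fail."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except exc_type:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Usage with a call that fails twice before succeeding:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(retry_on_exception(flaky, RuntimeError))  # → ok
```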
Best regards
Ilya
from pr-agent.
Just tested with another smaller repo, still no luck 😢
from pr-agent.
Hi
Can you check your rate limits under https://platform.openai.com/account/rate-limits and check the values for gpt-3.5-turbo and gpt-4?
We'll add handling for OpenAI rate limits soon, and probably fallback to gpt-3.5-turbo in case of small rate limit or unavailability of gpt-4
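A fallback chain like the one described above might look roughly like this. This is only a sketch of the idea, not pr-agent's actual `retry_with_fallback_models` implementation; `fake_call` stands in for a real API call:

```python
import asyncio

async def retry_with_fallback(models, call):
    """Try each model in order; return the first successful result.
    Re-raises the last exception if every model fails."""
    last_exc = None
    for model in models:
        try:
            return await call(model)
        except Exception as exc:
            last_exc = exc
    raise last_exc

async def fake_call(model):
    if model == "gpt-4":
        raise RuntimeError("rate limit")  # simulate a 429 from the primary model
    return f"answered by {model}"

print(asyncio.run(retry_with_fallback(["gpt-4", "gpt-3.5-turbo"], fake_call)))
# → answered by gpt-3.5-turbo
```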
from pr-agent.
Could be that 40K is too small when the diff is moderate. I merged the PR; can you try again and see if the retry policy helps?
from pr-agent.
Unfortunately, still the same issue. What TPM limit do you have, and how did you get it? Any more ideas? @okotek
from pr-agent.
Did you also check your token usage at https://platform.openai.com/account/usage? I'm curious if an unexpectedly high number of tokens or a burst of requests was used.
from pr-agent.
No real spike visible in the usage dash, but thanks for the info! @KalleV
from pr-agent.
Nice, that's good to know. What if you click through the language model usage metrics? On my page, I saw a few show up with quite a few tokens like this one:
gpt-4-0613, 1 request
7,052 prompt + 218 completion = 7,270 tokens
from pr-agent.
#117 fallback models implementation
from pr-agent.
Some commands started working for me, but commenting commands still lead to a 403.
from pr-agent.
@okotek @KalleV any more ideas around this? 🤔 thank you
from pr-agent.
I think I managed to quickly replicate the error from the OP right in the https://platform.openai.com/playground while testing PR responses for a PR equivalent to ~4800 tokens (GPT4):
Rate limit reached for 10KTPM-200RPM in organization org-<id> on tokens per min. Limit: 10,000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.
from pr-agent.
It is fixed for me. Thank you for all your contributions ❤️
from pr-agent.