codium-ai / pr-agent
CodiumAI PR-Agent: An AI-Powered Tool for Automated Pull Request Analysis, Feedback, Suggestions and More!
License: Apache License 2.0
Thanks for publishing this tool. It's very fun to try! The PR descriptions and summaries are impressive.
I was curious about the possibility of including the original project-tracking ticket in the context of the reviews. That could open up an even more accurate AI-generated review that could evaluate whether the changes meet the ticket's requirements.
For JIRA, I was picturing a system that looks for configured project ID(s) in the commits (e.g. MYPROJ) and then performs a GET request to retrieve relevant fields to augment the context.
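To make the idea concrete, here is a rough sketch of the two pieces involved: extracting ticket IDs from commit messages and building the JIRA REST URL to fetch their fields. The helper names are mine, not pr-agent's:

```python
import re

# Hypothetical sketch of the proposed flow: scan commit messages for configured
# JIRA project key(s) such as "MYPROJ", then build the REST URL that would be
# used to GET the ticket fields to augment the review context.
def extract_ticket_ids(commit_messages, project_keys):
    pattern = re.compile(r"\b(?:" + "|".join(map(re.escape, project_keys)) + r")-\d+\b")
    tickets = []
    for message in commit_messages:
        for ticket in pattern.findall(message):
            if ticket not in tickets:
                tickets.append(ticket)
    return tickets

def ticket_fields_url(base_url, ticket_id):
    # Standard JIRA REST endpoint for a single issue, limited to a few fields.
    return f"{base_url}/rest/api/2/issue/{ticket_id}?fields=summary,description"
```

The actual GET request would then carry the instance's auth token, and the returned summary/description could be appended to the review prompt.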
Roadmap item:
Add additional context to the prompt. For example, repo (or relevant files) summarization, with tools such as ctags.
I installed via Docker and get ValueError: The provided URL is not a valid GitHub URL
Are there any plans to support GitHub Enterprise?
This test is failing after a453437 with the following error:
tests/unittest/test_convert_to_markdown.py:98: AssertionError
=========================== short test summary info ============================
FAILED tests/unittest/test_convert_to_markdown.py::TestConvertToMarkdown::test_simple_dictionary_input - assert "- 🎯 **Main t...de after 2'}}" == '- 🎯 **Main t...\n ```'
- 🎯 **Main theme:** Test
- 📌 **Type of PR:** Test type
- 🧪 **Relevant tests added:** no
- ✨ **Focused PR:** Yes
- 💡 **General PR suggestions:** general suggestion...
+ - **Code suggestions:**
+ - {'Code example': {'Before': 'Code before', 'After': 'Code after'}}
+ - {'Code example': {'Before': 'Code before 2', 'After': 'Code after 2'}}
- - 🤖 **Code suggestions:**
-
- - **Code example:**
- - **Before:**
- ```
- Code before
- ```
- - **After:**
- ```
- Code after
- ```
-
- - **Code example:**
- - **Before:**
- ```
- Code before 2
- ```
- - **After:**
- ```
- Code after 2
- ```
========================= 1 failed, 39 passed in 0.79s =========================
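The diff suggests the suggestion dicts are being rendered with str() instead of being expanded into nested bullets with fenced code. A minimal sketch of the expansion the test expects (the helper name is mine, and the exact indentation is an assumption):

```python
def code_example_to_markdown(suggestion):
    # Expand one {'Code example': {'Before': ..., 'After': ...}} entry into
    # nested markdown bullets with fenced code blocks, instead of str(dict).
    lines = ["- **Code example:**"]
    example = suggestion.get("Code example", {})
    for label in ("Before", "After"):
        if label in example:
            lines.append(f"  - **{label}:**")
            lines.append("    ```")
            lines.append(f"    {example[label]}")
            lines.append("    ```")
    return "\n".join(lines)
```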
Following https://github.com/Codium-ai/pr-agent/blob/main/INSTALL.md#method-5-run-as-a-github-app steps 5 and 6, the guide says to update the pr_agent/settings/.secrets.toml
file, which is later loaded as part of the configuration files (https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/config_loader.py#L10).
However, when building the container image, the entry at https://github.com/Codium-ai/pr-agent/blob/main/.dockerignore#L2 causes Docker
to exclude this file from the build context, so the image always fails with the error:
GitHub app ID and private key are required when using GitHub app deployment
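Until the .dockerignore entry is addressed, one workaround is to supply the credentials as environment variables at runtime instead of baking .secrets.toml into the image. A rough sketch of a fail-fast check (the variable names are illustrative, not pr-agent's actual setting keys):

```python
import os

def github_app_credentials():
    # Read the GitHub App credentials from the environment as a fallback when
    # .secrets.toml was excluded from the image by .dockerignore.
    app_id = os.getenv("GITHUB_APP_ID")
    private_key = os.getenv("GITHUB_PRIVATE_KEY")
    if not app_id or not private_key:
        raise ValueError(
            "GitHub app ID and private key are required when using GitHub app deployment"
        )
    return app_id, private_key
```

The variables would then be passed with `docker run -e GITHUB_APP_ID=... -e GITHUB_PRIVATE_KEY=...` rather than copied into the image.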
I installed PR-Agent and ran a command on a PR, but the GitHub Action fails. The GitHub Action log is attached:
2023-07-28T04:26:31.3676488Z ##[command]/usr/bin/docker run --name bedb40892f5da3aed45acbef1944b5ef909ba_714136 --label 5bedb4 --workdir /github/workspace --rm -e "OPENAI_KEY" -e "GITHUB_TOKEN" -e "HOME" -e "GITHUB_JOB" -e "GITHUB_REF" -e "GITHUB_SHA" -e "GITHUB_REPOSITORY" -e "GITHUB_REPOSITORY_OWNER" -e "GITHUB_REPOSITORY_OWNER_ID" -e "GITHUB_RUN_ID" -e "GITHUB_RUN_NUMBER" -e "GITHUB_RETENTION_DAYS" -e "GITHUB_RUN_ATTEMPT" -e "GITHUB_REPOSITORY_ID" -e "GITHUB_ACTOR_ID" -e "GITHUB_ACTOR" -e "GITHUB_TRIGGERING_ACTOR" -e "GITHUB_WORKFLOW" -e "GITHUB_HEAD_REF" -e "GITHUB_BASE_REF" -e "GITHUB_EVENT_NAME" -e "GITHUB_SERVER_URL" -e "GITHUB_API_URL" -e "GITHUB_GRAPHQL_URL" -e "GITHUB_REF_NAME" -e "GITHUB_REF_PROTECTED" -e "GITHUB_REF_TYPE" -e "GITHUB_WORKFLOW_REF" -e "GITHUB_WORKFLOW_SHA" -e "GITHUB_WORKSPACE" -e "GITHUB_ACTION" -e "GITHUB_EVENT_PATH" -e "GITHUB_ACTION_REPOSITORY" -e "GITHUB_ACTION_REF" -e "GITHUB_PATH" -e "GITHUB_ENV" -e "GITHUB_STEP_SUMMARY" -e "GITHUB_STATE" -e "GITHUB_OUTPUT" -e "RUNNER_OS" -e "RUNNER_ARCH" -e "RUNNER_NAME" -e "RUNNER_ENVIRONMENT" -e "RUNNER_TOOL_CACHE" -e "RUNNER_TEMP" -e "RUNNER_WORKSPACE" -e "ACTIONS_RUNTIME_URL" -e "ACTIONS_RUNTIME_TOKEN" -e "ACTIONS_CACHE_URL" -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/lzg.cmd.booking/lzg.cmd.booking":"/github/workspace" 5bedb4:0892f5da3aed45acbef1944b5ef909ba
2023-07-28T04:26:32.8597954Z Traceback (most recent call last):
2023-07-28T04:26:32.8598810Z File "/usr/local/lib/python3.10/site-packages/starlette_context/ctx.py", line 31, in data
2023-07-28T04:26:32.8599516Z return _request_scope_context_storage.get()
2023-07-28T04:26:32.8599982Z LookupError: <ContextVar name='starlette_context' at 0x7f8f4d717ab0>
2023-07-28T04:26:32.8600197Z
2023-07-28T04:26:32.8600370Z During handling of the above exception, another exception occurred:
2023-07-28T04:26:32.8600584Z
2023-07-28T04:26:32.8600684Z Traceback (most recent call last):
2023-07-28T04:26:32.8601019Z File "/app/pr_agent/servers/github_action_runner.py", line 68, in <module>
2023-07-28T04:26:32.8601335Z asyncio.run(run_action())
2023-07-28T04:26:32.8601644Z File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
2023-07-28T04:26:32.8601946Z return loop.run_until_complete(main)
2023-07-28T04:26:32.8602297Z File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
2023-07-28T04:26:32.8602614Z return future.result()
2023-07-28T04:26:32.8602925Z File "/app/pr_agent/servers/github_action_runner.py", line 64, in run_action
2023-07-28T04:26:32.8603275Z await PRAgent().handle_request(pr_url, body)
2023-07-28T04:26:32.8603627Z File "/app/pr_agent/agent/pr_agent.py", line 24, in handle_request
2023-07-28T04:26:32.8603942Z await PRReviewer(pr_url, args=args).review()
2023-07-28T04:26:32.8604267Z File "/app/pr_agent/tools/pr_reviewer.py", line 35, in __init__
2023-07-28T04:26:32.8604648Z self.git_provider = get_git_provider()(pr_url, incremental=self.incremental)
2023-07-28T04:26:32.8605048Z File "/app/pr_agent/git_providers/github_provider.py", line 21, in __init__
2023-07-28T04:26:32.8605405Z self.installation_id = context.get("installation_id", None)
2023-07-28T04:26:32.8605764Z File "/usr/local/lib/python3.10/_collections_abc.py", line 824, in get
2023-07-28T04:26:32.8606046Z return self[key]
2023-07-28T04:26:32.8606340Z File "/usr/local/lib/python3.10/collections/__init__.py", line 1102, in __getitem__
2023-07-28T04:26:32.8606637Z if key in self.data:
2023-07-28T04:26:32.8615691Z File "/usr/local/lib/python3.10/site-packages/starlette_context/ctx.py", line 33, in data
2023-07-28T04:26:32.8616428Z raise ContextDoesNotExistError
2023-07-28T04:26:32.8617285Z starlette_context.errors.ContextDoesNotExistError: You didn't use the required middleware or you're trying to access `context` object outside of the request-response cycle.
Context:
I am trying to follow the installation guide for GitLab, but I am running into some odd errors. With Docker I get GitHub token is required when using user deployment,
and when I run my own installation on my MacBook (M1) I get a pr-agent not found error.
I followed the installation guide with no success. More thorough documentation with the required information would be highly appreciated.
It would be great to have the ability to customize the prompts, or at least add lines to the initial description for extra guidance. For example, for the code suggestion prompt, users might want to add an extra line for telling the LLM to explicitly suggest type hints, possible unit tests, etc.
On errors, handle gracefully by:
Hi,
I'm trying to run the PR Agent from source with method 2 and I'm running into the following error:
$ PYTHONPATH=. python pr_agent/cli.py --pr_url https://github.com/<MY_ORG>/<MY_REPO>/pull/134
Reviewing PR: https://github.com/<MY_ORG>/<MY_REPO>/pull/134
Traceback (most recent call last):
File "/Users/zmeir/git/pr-agent/pr_agent/cli.py", line 27, in <module>
run()
File "/Users/zmeir/git/pr-agent/pr_agent/cli.py", line 22, in run
reviewer = PRReviewer(args.pr_url, installation_id=None, cli_mode=True)
File "/Users/zmeir/git/pr-agent/pr_agent/tools/pr_reviewer.py", line 19, in __init__
self.git_provider = get_git_provider()(pr_url, installation_id)
File "/Users/zmeir/git/pr-agent/pr_agent/git_providers/github_provider.py", line 29, in __init__
self.set_pr(pr_url)
File "/Users/zmeir/git/pr-agent/pr_agent/git_providers/github_provider.py", line 33, in set_pr
self.pr = self._get_pr()
File "/Users/zmeir/git/pr-agent/pr_agent/git_providers/github_provider.py", line 189, in _get_pr
return self._get_repo().get_pull(self.pr_num)
File "/Users/zmeir/git/pr-agent/pr_agent/git_providers/github_provider.py", line 186, in _get_repo
return self.github_client.get_repo(self.repo)
File "/Users/zmeir/git/pr-agent/venv/lib/python3.9/site-packages/github/MainClass.py", line 363, in get_repo
headers, data = self.__requester.requestJsonAndCheck("GET", url)
File "/Users/zmeir/git/pr-agent/venv/lib/python3.9/site-packages/github/Requester.py", line 442, in requestJsonAndCheck
return self.__check(
File "/Users/zmeir/git/pr-agent/venv/lib/python3.9/site-packages/github/Requester.py", line 487, in __check
raise self.__createException(status, responseHeaders, data)
github.GithubException.BadCredentialsException: 401 {"message": "Bad credentials", "documentation_url": "https://docs.github.com/rest"}
I set my user_token in pr_agent/settings/.secrets.toml and made sure the token is authorized for MY_ORG and has the proper repo scope.
I tested the same token with curl and it seems to work - for example:
$ curl -X GET -H "Authorization:token <MY_TOKEN>" -H "Content-Type:application/json" 'https://api.github.com/repos/<MY_ORG>/<MY_REPO>/pulls?head=<MY_ORG>:<MY_BRANCH>&base=master&state=open'
[
{
"url": "https://api.github.com/repos/<MY_ORG>/<MY_REPO>/pulls/134",
"id": <MY_ID>,
"node_id": "<MY_NODE_ID>",
"html_url": "https://github.com/<MY_ORG>/<MY_REPO>/pull/134",
"diff_url": "https://github.com/<MY_ORG>/<MY_REPO>/pull/134.diff",
"patch_url": "https://github.com/<MY_ORG>/<MY_REPO>/pull/134.patch",
"issue_url": "https://api.github.com/repos/<MY_ORG>/<MY_REPO>/issues/134",
"number": 134,
"state": "open",
...
}
]
$ curl -X GET -H "Authorization:token <MY_TOKEN>" -H "Content-Type:application/json" 'https://api.github.com/repos/<MY_ORG>/<MY_REPO>'
{
"id": <MY_ID>,
"node_id": "<MY_NODE_ID>",
"name": "<MY_REPO>",
"full_name": "<MY_ORG>/<MY_REPO>",
"private": true,
"owner": { ... }
}
Some technical info:
I suspect the issue is related to a recent change in pr_agent.algo.pr_processing.find_line_number_of_relevant_line_in_file,
which now returns a tuple, while its caller pr_agent.git_providers.github_provider.GithubProvider.create_inline_comment
still expects just an int.
Exception:
github.GithubException.GithubException: 422 {"message": "Invalid request.\n\nFor 'properties/position', [17, 52] is not a number.\nFor 'properties/position', [23, 228] is not a number.\nFor 'properties/position', [25, 230] is not a number.", "documentation_url": "https://docs.github.com/rest/pulls/reviews#create-a-review-for-a-pull-request"}
Full stacktrace:
Traceback (most recent call last):
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 428, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/applications.py", line 290, in __call__
await super().__call__(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette_context/middleware/raw_middleware.py", line 92, in __call__
await self.app(scope, receive, send_wrapper)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/routing.py", line 241, in app
raw_response = await run_endpoint_function(
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/routing.py", line 167, in run_endpoint_function
return await dependant.call(**values)
File "/home/vcap/app/pr_agent/servers/github_app.py", line 37, in handle_github_webhooks
response = await handle_request(body, event=request.headers.get("X-GitHub-Event", None))
File "/home/vcap/app/pr_agent/servers/github_app.py", line 132, in handle_request
await agent.handle_request(api_url, "/review")
File "/home/vcap/app/pr_agent/agent/pr_agent.py", line 72, in handle_request
await command2class[action](pr_url, args=args).run()
File "/home/vcap/app/pr_agent/tools/pr_reviewer.py", line 111, in run
self._publish_inline_code_comments()
File "/home/vcap/app/pr_agent/tools/pr_reviewer.py", line 256, in _publish_inline_code_comments
self.git_provider.publish_inline_comments(comments)
File "/home/vcap/app/pr_agent/git_providers/github_provider.py", line 167, in publish_inline_comments
self.pr.create_review(commit=self.last_commit_id, comments=comments)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/github/PullRequest.py", line 559, in create_review
headers, data = self._requester.requestJsonAndCheck(
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/github/Requester.py", line 445, in requestJsonAndCheck
return self.__check(
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/github/Requester.py", line 490, in __check
raise self.__createException(status, responseHeaders, data)
github.GithubException.GithubException: 422 {"message": "Invalid request.\n\nFor 'properties/position', [17, 52] is not a number.\nFor 'properties/position', [23, 228] is not a number.\nFor 'properties/position', [25, 230] is not a number.", "documentation_url": "https://docs.github.com/rest/pulls/reviews#create-a-review-for-a-pull-request"}
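If the tuple return value is indeed the cause, the fix would be to unpack it before building the API payload. A minimal sketch, assuming the tuple order is (position, absolute_position), consistent with the [17, 52] pairs in the 422 error:

```python
def build_inline_comment(body, path, line_info):
    # line_info used to be a plain int; it is now a tuple, and passing it
    # through unchanged puts a list where the GitHub API expects a number.
    position, absolute_position = line_info
    return {"body": body, "path": path, "position": position}
```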
As the topic says, would you ever consider supporting .NET C# 4.8 / 6.0?
When I try to run any review/description/etc. on a merge request for a repo that is in a subgroup, I get GitlabGetError - 404.
This works:
https://gitlab.com/org_name/repo_name/-/merge_requests/1
This does not work (gitlab.exceptions.GitlabGetError: 404: 404 Project Not Found):
https://gitlab.com/org_name/repos_group/repo_name/-/merge_requests/1
Expected result:
https://gitlab.com/org_name/repos_group/repo_name/-/merge_requests/1 should work.
Full error trace below:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gitlab/exceptions.py", line 336, in wrapped_f
return f(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gitlab/mixins.py", line 122, in get
server_data = self.gitlab.http_get(path, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gitlab/client.py", line 828, in http_get
result = self.http_request(
File "/usr/local/lib/python3.10/site-packages/gitlab/client.py", line 794, in http_request
raise gitlab.exceptions.GitlabHttpError(
gitlab.exceptions.GitlabHttpError: 404: 404 Project Not Found
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/pr_agent/cli.py", line 101, in <module>
run()
File "/app/pr_agent/cli.py", line 55, in run
commands[command](args.pr_url, args.rest)
File "/app/pr_agent/cli.py", line 84, in _handle_review_command
reviewer = PRReviewer(pr_url, cli_mode=True, args=rest)
File "/app/pr_agent/tools/pr_reviewer.py", line 22, in __init__
self.git_provider = get_git_provider()(pr_url, incremental=self.incremental)
File "/app/pr_agent/git_providers/gitlab_provider.py", line 34, in __init__
self._set_merge_request(merge_request_url)
File "/app/pr_agent/git_providers/gitlab_provider.py", line 51, in _set_merge_request
self.mr = self._get_merge_request()
File "/app/pr_agent/git_providers/gitlab_provider.py", line 257, in _get_merge_request
mr = self.gl.projects.get(self.id_project).mergerequests.get(self.id_mr)
File "/usr/local/lib/python3.10/site-packages/gitlab/v4/objects/projects.py", line 860, in get
return cast(Project, super().get(id=id, lazy=lazy, **kwargs))
File "/usr/local/lib/python3.10/site-packages/gitlab/exceptions.py", line 338, in wrapped_f
raise error(e.error_message, e.response_code, e.response_body) from e
gitlab.exceptions.GitlabGetError: 404: 404 Project Not Found
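A subgroup-safe way to derive the project path is to treat everything between the host and the /-/merge_requests/<iid> separator as the project, rather than assuming a fixed org/repo depth. A sketch (the helper name is mine, not pr-agent's):

```python
from urllib.parse import urlparse

def parse_gitlab_mr_url(url):
    # Everything between the host and "/-/merge_requests/<iid>" is the full
    # project path, which may include any number of subgroups.
    path = urlparse(url).path
    project_path, sep, mr_part = path.partition("/-/merge_requests/")
    if not sep:
        raise ValueError(f"Not a GitLab merge request URL: {url}")
    return project_path.strip("/"), int(mr_part.strip("/"))
```

python-gitlab accepts such a namespaced path string directly in `gl.projects.get(...)`, so no numeric project ID lookup is needed.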
Hey, I get the following error when running PR-Agent as a GitHub Action. I followed the installation steps.
During the "PR Agent action step" I get the error below. Important to note that there is only one open PR at the time. I also checked by running API calls with the same OpenAI key, and it works with no problems.
Sorry for the big stack trace, but maybe it helps:
Run Codium-ai/pr-agent@main
env:
OPENAI_KEY: ***
GITHUB_TOKEN: ***
/usr/bin/docker run --name c9a4a5c25221dd1fdc40a4b361asde6834f697_1d7b19 --label c9a4a5 --workdir /github/workspace --rm -e "OPENAI_KEY" -e "GITHUB_TOKEN" -e "HOME" -e "GITHUB_JOB" -e "GITHUB_REF" -e "GITHUB_SHA" -e "GITHUB_REPOSITORY" -e "GITHUB_REPOSITORY_OWNER" -e "GITHUB_REPOSITORY_OWNER_ID" -e "GITHUB_RUN_ID" -e "GITHUB_RUN_NUMBER" -e "GITHUB_RETENTION_DAYS" -e "GITHUB_RUN_ATTEMPT" -e "GITHUB_REPOSITORY_ID" -e "GITHUB_ACTOR_ID" -e "GITHUB_ACTOR" -e "GITHUB_TRIGGERING_ACTOR" -e "GITHUB_WORKFLOW" -e "GITHUB_HEAD_REF" -e "GITHUB_BASE_REF" -e "GITHUB_EVENT_NAME" -e "GITHUB_SERVER_URL" -e "GITHUB_API_URL" -e "GITHUB_GRAPHQL_URL" -e "GITHUB_REF_NAME" -e "GITHUB_REF_PROTECTED" -e "GITHUB_REF_TYPE" -e "GITHUB_WORKFLOW_REF" -e "GITHUB_WORKFLOW_SHA" -e "GITHUB_WORKSPACE" -e "GITHUB_ACTION" -e "GITHUB_EVENT_PATH" -e "GITHUB_ACTION_REPOSITORY" -e "GITHUB_ACTION_REF" -e "GITHUB_PATH" -e "GITHUB_ENV" -e "GITHUB_STEP_SUMMARY" -e "GITHUB_STATE" -e "GITHUB_OUTPUT" -e "RUNNER_OS" -e "RUNNER_ARCH" -e "RUNNER_NAME" -e "RUNNER_ENVIRONMENT" -e "RUNNER_TOOL_CACHE" -e "RUNNER_TEMP" -e "RUNNER_WORKSPACE" -e "ACTIONS_RUNTIME_URL" -e "ACTIONS_RUNTIME_TOKEN" -e "ACTIONS_CACHE_URL" -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/DLL/DLL":"/github/workspace" c9a4a5:c25221dd1asd61b3de6834f697
Traceback (most recent call last):
File "/app/pr_agent/servers/github_action_runner.py", line 57, in <module>
asyncio.run(run_action())
File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/app/pr_agent/servers/github_action_runner.py", line 43, in run_action
await PRReviewer(pr_url).review()
File "/app/pr_agent/tools/pr_reviewer.py", line 70, in review
self.prediction = await self._get_prediction()
File "/app/pr_agent/tools/pr_reviewer.py", line 92, in _get_prediction
response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
File "/app/pr_agent/algo/ai_handler.py", line 60, in chat_completion
response = await openai.ChatCompletion.acreate(
File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
return await super().acreate(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
response, _, api_key = await requestor.arequest(
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 382, in arequest
resp, got_stream = await self._interpret_async_response(result, stream)
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 726, in _interpret_async_response
self._interpret_response_line(
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.
Please let me know what is the issue here, thanks. Love the project!
Running into the following issue:
Run Codium-ai/pr-agent@main
/usr/bin/docker run --name bedb4853746a2bef44f7888dad2cd8a27189f_7b7377 --label 5bedb4 --workdir /github/workspace --rm -e "OPENAI_KEY" -e "GITHUB_TOKEN" -e "HOME" -e "GITHUB_JOB" -e "GITHUB_REF" -e "GITHUB_SHA" -e "GITHUB_REPOSITORY" -e "GITHUB_REPOSITORY_OWNER" -e "GITHUB_REPOSITORY_OWNER_ID" -e "GITHUB_RUN_ID" -e "GITHUB_RUN_NUMBER" -e "GITHUB_RETENTION_DAYS" -e "GITHUB_RUN_ATTEMPT" -e "GITHUB_REPOSITORY_ID" -e "GITHUB_ACTOR_ID" -e "GITHUB_ACTOR" -e "GITHUB_TRIGGERING_ACTOR" -e "GITHUB_WORKFLOW" -e "GITHUB_HEAD_REF" -e "GITHUB_BASE_REF" -e "GITHUB_EVENT_NAME" -e "GITHUB_SERVER_URL" -e "GITHUB_API_URL" -e "GITHUB_GRAPHQL_URL" -e "GITHUB_REF_NAME" -e "GITHUB_REF_PROTECTED" -e "GITHUB_REF_TYPE" -e "GITHUB_WORKFLOW_REF" -e "GITHUB_WORKFLOW_SHA" -e "GITHUB_WORKSPACE" -e "GITHUB_ACTION" -e "GITHUB_EVENT_PATH" -e "GITHUB_ACTION_REPOSITORY" -e "GITHUB_ACTION_REF" -e "GITHUB_PATH" -e "GITHUB_ENV" -e "GITHUB_STEP_SUMMARY" -e "GITHUB_STATE" -e "GITHUB_OUTPUT" -e "RUNNER_DEBUG" -e "RUNNER_OS" -e "RUNNER_ARCH" -e "RUNNER_NAME" -e "RUNNER_ENVIRONMENT" -e "RUNNER_TOOL_CACHE" -e "RUNNER_TEMP" -e "RUNNER_WORKSPACE" -e "ACTIONS_RUNTIME_URL" -e "ACTIONS_RUNTIME_TOKEN" -e "ACTIONS_CACHE_URL" -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/A1_core/A1_core":"/github/workspace" 5bedb4:853746a2bef44f7888dad2cd8a27189f
Traceback (most recent call last):
File "/app/pr_agent/servers/github_action_runner.py", line 5, in <module>
from pr_agent.agent.pr_agent import PRAgent
File "/app/pr_agent/agent/pr_agent.py", line 6, in <module>
from pr_agent.algo.utils import update_settings_from_args
File "/app/pr_agent/algo/utils.py", line 11, in <module>
import yaml
ModuleNotFoundError: No module named 'yaml'
Using the following workflow:
on:
pull_request:
issue_comment:
jobs:
pr_agent_job:
runs-on: ubuntu-latest
name: Run pr agent on every pull request, respond to user comments
steps:
- name: PR Agent action step
id: pragent
uses: Codium-ai/pr-agent@main
env:
OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
Currently, the development process incorporates version control; however, we are facing a critical challenge due to the absence of structured release versioning. While the codebase is versioned internally, we lack the practice of creating distinct releases. This is leading to potential instability in the development environment, as the absence of well-defined release points can introduce bugs and errors with each subsequent development iteration.
Impact:
The absence of clear release versioning means that we are only able to access the latest version of the codebase. Without established release points, we lack a stable reference to which we can easily revert in the event of critical issues. This holds true irrespective of whether we are utilizing GitHub Actions, Docker images, or other development tools. Consequently, our ability to ensure a stable and reliable development environment is compromised.
Hello, excellent project; my congratulations to all contributors. Just one question: at the moment, are you planning to add a parameter or some configuration to skip the analysis or summary of a specific file type? Example command:
--skip-files *.snap
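A sketch of what such an option might do internally, using glob patterns as in the example above (the function name is illustrative, not an existing pr-agent API):

```python
import fnmatch

def should_skip(filename, skip_patterns):
    # True when the filename matches any configured glob pattern,
    # e.g. patterns passed via a hypothetical --skip-files *.snap option.
    return any(fnmatch.fnmatch(filename, pattern) for pattern in skip_patterns)
```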
Hi,
From what I can understand, the project is currently a wrapper around calls to an LLM. An agent would connect with other tools directly; is there a roadmap for those features?
I noticed the PR Agent is removing existing labels assigned to the PR. For example, if a label is assigned by https://github.com/actions/labeler, it will get removed as soon as the task runs (i.e. /describe).
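A non-destructive approach would be to union the generated labels with whatever is already on the PR before publishing. A minimal sketch (the helper is hypothetical, not pr-agent's code):

```python
def merge_labels(existing, generated):
    # Keep labels assigned by other tools (e.g. actions/labeler) and append
    # any newly generated labels that are not already present.
    merged = list(existing)
    for label in generated:
        if label not in merged:
            merged.append(label)
    return merged
```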
The IDE plugin indicates the number of behaviors in a method/class. It then suggests tests to cover these behaviors. I'm requesting a feature for pr-agent that recognizes the total behaviors changed/added, looks at the tests changed/added, and indicates the number covered. A status check is included; if all are covered it passes. If not, it fails. This is to ensure all behaviors are covered.
If behaviors are missed it could suggest tests that cover them, similar to how the IDE plugin works.
This could optionally be taken a step farther, listing the behaviors and the test(s) that cover them.
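The pass/fail status check at the heart of this request could reduce to a simple set comparison; a toy sketch (all names are illustrative):

```python
def behavior_coverage_status(behaviors, covered_by_tests):
    # Pass only when every changed/added behavior has at least one covering test;
    # the "missing" list would feed the follow-up test suggestions.
    missing = [b for b in behaviors if b not in covered_by_tests]
    return {
        "total": len(behaviors),
        "covered": len(behaviors) - len(missing),
        "missing": missing,
        "passed": not missing,
    }
```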
I created a dummy repo and opened a PR just for testing this, but this is what I get:
Traceback (most recent call last):
File "/app/pr_agent/servers/github_action_runner.py", line 68, in <module>
asyncio.run(run_action())
File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/app/pr_agent/servers/github_action_runner.py", line 53, in run_action
await PRReviewer(pr_url).run()
File "/app/pr_agent/tools/pr_reviewer.py", line 95, in run
self.git_provider.publish_comment("Preparing review...", is_temporary=True)
File "/app/pr_agent/git_providers/github_provider.py", line 123, in publish_comment
response = self.pr.create_issue_comment(pr_comment)
File "/usr/local/lib/python3.10/site-packages/github/PullRequest.py", line 517, in create_issue_comment
headers, data = self._requester.requestJsonAndCheck(
File "/usr/local/lib/python3.10/site-packages/github/Requester.py", line 442, in requestJsonAndCheck
return self.__check(
File "/usr/local/lib/python3.10/site-packages/github/Requester.py", line 487, in __check
raise self.__createException(status, responseHeaders, data)
github.GithubException.GithubException: 403 {"message": "Resource not accessible by integration", "documentation_url": "https://docs.github.com/rest/issues/comments#create-an-issue-comment"}
Any idea how to make it work?
I've seen mentions to tweak configuration for various things, but that is not currently possible when using the GitHub App.
I've been running the "/describe" on a lot of PRs, and the output is excellent. With reviews automatically being created by the PR Agent, this one has been my primary shortcut. One usability issue I'm running into, however, is that the "/describe" replaces the entire PR description section and title.
The "/describe -c" option for posting the description as a comment works as expected, but what I'd really like to do is automate the process of manually filling out Github pull request templates.
For a possible implementation, it seems like trying to automatically map the LLM response to a dynamically configured pull request template would have too many edge cases. Instead, what if there was support for short-code or variable expansion mechanisms such as pr_agent:describe, and then the AI could substitute the variable with the description?
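The substitution itself could stay deliberately simple: replace only the known markers and leave the rest of the template untouched. A sketch (the marker string is the one suggested above; the function name is mine):

```python
def expand_placeholders(template, values):
    # Replace each known marker (e.g. "pr_agent:describe") with AI-generated
    # text, leaving the rest of the pull request template intact.
    for marker, text in values.items():
        template = template.replace(marker, text)
    return template
```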
When huge files (e.g. autogenerated code) are part of a GitLab repo, GitLab by default shows the following message:
When trying to review such a PR using the review bot, it produces the following warning:
WARNING:root:File was modified, but no patch was found. Manually creating patch: funnelui/graphql.schema.json.
and then takes around 8 minutes (running in a Docker container on an M1 Mac) to build the diff.
Since files with thousands of lines are likely to be auto-generated code anyway, it would be nice to either
I'm not sure how GitHub handles files with a huge diff, but a similar feature might be useful for GitHub as well.
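One way to implement the suggestion, sketched under the assumption that patch size alone is a good enough heuristic for auto-generated content (the threshold and helper name are hypothetical):

```python
def filter_large_diffs(files, max_patch_lines=1000):
    # Split (name, patch) pairs into files worth reviewing and files whose
    # patch is so large it is probably auto-generated (e.g. a schema dump).
    kept, skipped = [], []
    for name, patch in files:
        if patch.count("\n") + 1 > max_patch_lines:
            skipped.append(name)
        else:
            kept.append((name, patch))
    return kept, skipped
```

The skipped names could still be listed in the review summary so reviewers know they were excluded.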
I got the below error while running:
python cli.py --pr_url="https://github.com/XXX/pull/17" describe
changes in file: No changes were made to the file. stop
INFO:root:Preparing answer...
ERROR:root:Failed to parse AI prediction: while scanning a simple key
in "<unicode string>", line 5, column 1:
The PR changes the filename of a ...
^
could not find expected ':'
in "<unicode string>", line 6, column 1:
PR Main Files Walkthrough:
^
INFO:root:Successfully parsed AI prediction after removing 6 lines
INFO:root:Pushing answer...
The current configuration allows only a limited number of suggestions. Can we get all of them?
Hello,
I really like your project, have been using it for my PRs as well. One question though. Do you have any plans on releasing the agent for a Gitlab repo?
Regards
Do you have any plans to add configuration options that would allow the use of custom LLMs in future versions?
I recently published a package, llm-client, that can be very helpful in enabling support for other LLM providers, including OpenAI, Google, AI21, HuggingfaceHub, Aleph Alpha, and Anthropic, as well as local models via transformers.
The GitHub App automatically runs /review when a PR is opened, but it does not do anything to subsequent pushes to the PR. We'd like to run /review -i for each subsequent push.
Is there possibly a way to use this with a GitHub Action, to avoid having to run a separate server? Perhaps it could be triggered upon creation of a PR and then use something like https://github.com/marketplace/actions/add-pr-comment to automatically add an evaluation? Or perhaps there is an even better way; that's just a thought.
How do I set up Bitbucket with Docker? When running the command line, it says GITHUB.USER_TOKEN is required.
The "Preparing answer..." comment can be quite noisy and spammy. Is there a way to turn it off?
IMO, the best experience is to put the 👀 emoji reaction on the comment.
Does it provide support for Azure Repos?
openai.error.InvalidRequestError: The model: gpt-4-0613 does not exist
Do you plan on adding support for customizing OpenAI API settings?
More specifically, will it be possible to control the openai api_type and api_base properties to run against Azure OpenAI, similarly to how this is done when using the openai package directly:
import os
import openai

# Set the API type to `azure`
openai.api_type = "azure"
# The API key for your Azure OpenAI resource.
openai.api_key = os.getenv("OPENAI_API_KEY")
# The base URL for your Azure OpenAI resource, e.g. "https://<your resource name>.openai.azure.com"
openai.api_base = os.getenv("OPENAI_API_BASE")
# The OpenAI API version.
openai.api_version = os.getenv("OPENAI_API_VERSION")
First, I love this project! It checked a few boxes I was looking for to automate the code reviews.
I'm more than willing to partner with you to enhance this project.
I've been testing this and noticed that the /improve comment does not work on PRs with many changes. Here is an example.
Are you planning to add support for Azure DevOps pull requests?
When creating inline code comments in PRReviewer, each comment is pushed to GitHub as a separate API call with pr.create_review_comment. By using pr.create_review it is possible to send all inline comments in a single API call, which could help avoid hitting the API rate limit when there are a lot of inline comments.
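A minimal sketch of the batching idea, assuming PyGithub's PullRequest.create_review accepts a comments list of {path, position, body} dicts; the suggestions list and helper below are hypothetical illustrations, not PR-Agent code:

```python
# Convert per-suggestion data into the comment dicts that a single
# pr.create_review(comments=...) call can post in one API request,
# instead of one pr.create_review_comment call per suggestion.

def build_review_comments(suggestions):
    """suggestions: hypothetical list of (path, position, body) tuples."""
    return [
        {"path": path, "position": position, "body": body}
        for path, position, body in suggestions
    ]

suggestions = [
    ("src/app.py", 5, "Consider extracting this into a helper."),
    ("src/app.py", 12, "This branch looks unreachable."),
]
comments = build_review_comments(suggestions)
print(len(comments))  # 2

# With a live PyGithub PullRequest `pr` and its head commit `commit`,
# one call would then post every inline comment at once:
# pr.create_review(commit=commit, body="PR-Agent review",
#                  event="COMMENT", comments=comments)
```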
I'm getting a timeout error when calling /improve. This seems to happen when the PR diff is fairly large, for example: 11 files changed, ~1k additions, ~300 deletions. In my configuration.toml I set pr_code_suggestions.num_code_suggestions=5
Traceback (most recent call last):
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/litellm/timeout.py", line 44, in wrapper
result = future.result(timeout=local_timeout_duration)
File "/home/vcap/deps/0/python/lib/python3.10/concurrent/futures/_base.py", line 460, in result
raise TimeoutError()
concurrent.futures._base.TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/vcap/app/pr_agent/algo/ai_handler.py", line 71, in chat_completion
response = await acompletion(
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/litellm/main.py", line 43, in acompletion
return await loop.run_in_executor(None, func)
File "/home/vcap/deps/0/python/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/litellm/utils.py", line 118, in wrapper
raise e
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/litellm/utils.py", line 107, in wrapper
result = original_function(*args, **kwargs)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/litellm/timeout.py", line 47, in wrapper
raise exception_to_raise(f"A timeout error occurred. The function call took longer than {local_timeout_duration} second(s).")
openai.error.Timeout: A timeout error occurred. The function call took longer than 60 second(s).
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/vcap/deps/0/python/lib/python3.10/logging/__init__.py", line 1100, in emit
msg = self.format(record)
File "/home/vcap/deps/0/python/lib/python3.10/logging/__init__.py", line 943, in format
return fmt.format(record)
File "/home/vcap/deps/0/python/lib/python3.10/logging/__init__.py", line 678, in format
record.message = record.getMessage()
File "/home/vcap/deps/0/python/lib/python3.10/logging/__init__.py", line 368, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "/home/vcap/app/pr_agent/servers/github_app.py", line 154, in <module>
start()
File "/home/vcap/app/pr_agent/servers/github_app.py", line 150, in start
uvicorn.run(app, host="0.0.0.0", port=int(os.environ.get("PORT", "3000")))
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/uvicorn/main.py", line 578, in run
server.run()
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
return asyncio.run(self.serve(sockets=sockets))
File "/home/vcap/deps/0/python/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/home/vcap/deps/0/python/lib/python3.10/asyncio/base_events.py", line 636, in run_until_complete
self.run_forever()
File "/home/vcap/deps/0/python/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
self._run_once()
File "/home/vcap/deps/0/python/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
handle._run()
File "/home/vcap/deps/0/python/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 428, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/applications.py", line 290, in __call__
await super().__call__(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette_context/middleware/raw_middleware.py", line 92, in __call__
await self.app(scope, receive, send_wrapper)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/routing.py", line 241, in app
raw_response = await run_endpoint_function(
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/routing.py", line 167, in run_endpoint_function
return await dependant.call(**values)
File "/home/vcap/app/pr_agent/servers/github_app.py", line 37, in handle_github_webhooks
response = await handle_request(body, event=request.headers.get("X-GitHub-Event", None))
File "/home/vcap/app/pr_agent/servers/github_app.py", line 110, in handle_request
await agent.handle_request(api_url, comment_body)
File "/home/vcap/app/pr_agent/agent/pr_agent.py", line 72, in handle_request
await command2class[action](pr_url, args=args).run()
File "/home/vcap/app/pr_agent/tools/pr_code_suggestions.py", line 50, in run
await retry_with_fallback_models(self._prepare_prediction)
File "/home/vcap/app/pr_agent/algo/pr_processing.py", line 218, in retry_with_fallback_models
return await f(model)
File "/home/vcap/app/pr_agent/tools/pr_code_suggestions.py", line 68, in _prepare_prediction
self.prediction = await self._get_prediction(model)
File "/home/vcap/app/pr_agent/tools/pr_code_suggestions.py", line 79, in _get_prediction
response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
File "/home/vcap/app/pr_agent/algo/ai_handler.py", line 82, in chat_completion
logging.error("Error during OpenAI inference: ", e)
Message: 'Error during OpenAI inference: '
Arguments: (Timeout(message='A timeout error occurred. The function call took longer than 60 second(s).', http_status=None, request_id=None),)
WARNING:root:Failed to generate prediction with gpt-4: Traceback (most recent call last):
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/litellm/timeout.py", line 44, in wrapper
result = future.result(timeout=local_timeout_duration)
Would be great to have an option for some of us.
Great tool!
We wanted to use PR Agent only on the private repos in our org, so we made a change to ignore private=false repos.
If others would be interested in this, I can open a PR to contribute it with a configuration option to control it.
Hello,
I am using a GitLab instance, and the merge request being reviewed happens to have many items (34 commits, 24 pipelines and 49 changes). Running the CLI causes a warning about python-gitlab only using a subset of the available data:
Reviewing PR: https://git.my-domain.com/project/repo/-/merge_requests/24/
/app/pr_agent/git_providers/gitlab_provider.py:51: UserWarning: Calling a `list()` method without specifying `get_all=True` or `iterator=True` will return a maximum of 20 items. Your query returned 20 of 24 items. See https://python-gitlab.readthedocs.io/en/v3.15.0/api-usage.html#pagination for more details. If this was done intentionally, then this warning can be supressed by adding the argument `get_all=False` to the `list()` call. (python-gitlab: /usr/local/lib/python3.10/site-packages/gitlab/client.py:976)
self.last_diff = self.mr.diffs.list()[-1]
INFO:root:Reviewing PR...
INFO:root:Getting PR diff...
INFO:root:Getting AI prediction...
INFO:openai:message='OpenAI API response' path=https://api.openai.com/v1/chat/completions processing_ms=xxx request_id=xxx response_code=200
INFO:root:Preparing PR review...
INFO:root:Pushing PR review...
I am using the latest docker cli build and this is my command line:
docker run --rm -it -e OPENAI.KEY="key" -e CONFIG.GIT_PROVIDER="gitlab" -e GITLAB.URL="https://git.my-domain.com/" -e GITLAB.PERSONAL_ACCESS_TOKEN="token" -e PR_REVIEWER.REQUIRE_TESTS_REVIEW="false" -e PR_REVIEWER.NUM_CODE_SUGGESTIONS="4" codiumai/pr-agent --pr_url "https://git.my-domain.com/project/repo/-/merge_requests/24/" review
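A minimal sketch of the fix the warning suggests, assuming python-gitlab 3.x: passing get_all=True to list() fetches every page instead of only the first 20 items. The FakeDiffsManager below is a stand-in to show the call shape without a live GitLab instance:

```python
# Pagination fix for gitlab_provider.py: fetch all diffs, not just page one.

def last_diff(diffs_manager):
    """diffs_manager is an mr.diffs manager (or anything with a
    compatible list() method); get_all=True retrieves every page."""
    all_diffs = diffs_manager.list(get_all=True)
    return all_diffs[-1]

# Hypothetical stand-in mimicking a 24-item result paginated at 20:
class FakeDiffsManager:
    def list(self, get_all=False):
        data = list(range(24))
        return data if get_all else data[:20]

print(last_diff(FakeDiffsManager()))  # 23; without get_all=True it would be 19
```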
I noticed a convenient way to keep track of PR Agent commands on Github, especially with additional customization via custom options, is through the saved replies feature.
Hey, is there any way to deal with the token limitation?
The GitHub App automatically runs /review when a PR is opened, but /describe -c and /improve are not automated. We'd like these to run as well, or perhaps have a single command that runs all three?
Keeping images in a git repository can slow down the git clone command because it makes the repo bigger and heavier.
I've noticed that when calling the /describe command, the prompt uses the commit messages in the PR to generate the PR description. In cases where a commit is no longer relevant (for example, it was later reverted in the same PR), this produces the wrong output in the PR description.
See this small PR example:
In this example I've made 2 commits:
After these 2 commits I called the /describe command, and as you can see, the PR title and description say that this PR includes changes to the docstring, but if you look at the diff with the main branch you can clearly see this is not true.