link-agi / autoagents

[IJCAI 2024] Generate different roles for GPTs to form a collaborative entity for complex tasks.

Home Page: https://huggingface.co/spaces/LinkSoul/AutoAgents

License: MIT License

Python 54.21% Dockerfile 0.31% Shell 1.10% HTML 4.46% JavaScript 35.52% CSS 4.40%

autoagents's Introduction

AutoAgents: A Framework for Automatic Agent Generation

autoagents logo: A Framework for Automatic Agent Generation.

Generate different roles for GPTs to form a collaborative entity for complex tasks.

Paper CN doc EN doc JA doc License: MIT

AutoAgents is an experimental open-source application for automatic agent generation based on LLMs. Driven by an LLM, the program autonomously generates multiple agents to achieve whatever goal you set.

The execution process of AutoAgents.

  • 2023.08.30: 🚀 Added a custom agent collection, AgentBank, which allows you to add custom agents.

🚀 Features

  • Planner: Determines the expert roles to be added and the specific execution plan according to the problem.
  • Tools: The set of tools that agents can use; currently only search tools are supported.
  • Observers: Responsible for checking whether the planner's decisions and the intermediate results of the execution process are reasonable; currently includes reflection checks on Agents, Plan, and Actions.
  • Agents: Expert role agents generated by the planner, each defined by a name, an area of expertise, the tools it uses, and LLM enhancement.
  • Plan: The execution plan composed of the generated expert roles; each step of the plan is carried out by at least one expert role agent.
  • Actions: The specific actions the expert roles take in the execution plan, such as calling tools or outputting results.
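
To make the relationship between these components concrete, here is a small, purely illustrative Python sketch. The class and function names are hypothetical and are not the project's actual code; they only mirror the structure described above (a planner producing expert agents and a stepwise plan).

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Agent:
    """An expert role: name, expertise, and the tools it may call."""
    name: str
    expertise: str
    tools: List[str] = field(default_factory=list)


@dataclass
class PlanStep:
    """One step of the execution plan, handled by at least one agent."""
    description: str
    agents: List[Agent]


def plan_for(problem: str) -> List[PlanStep]:
    """Stand-in for the LLM-driven Planner: returns agents and a plan."""
    researcher = Agent("Researcher", "web research", tools=["search"])
    writer = Agent("Writer", "drafting and refining answers")
    return [
        PlanStep(f"Gather evidence about: {problem}", [researcher]),
        PlanStep("Draft and refine the final answer", [writer]),
    ]


for step in plan_for("Is LK-99 really a room temperature superconductor?"):
    print(step.description, "->", [a.name for a in step.agents])
```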

Demo

Online demo: https://huggingface.co/spaces/LinkSoul/AutoAgents

Video demo:

  • Rumor Verification
    rumor-verification.mp4
  • Gluttonous Snake
    snake-game-demo-en.mp4

Installation and Usage

Installation

git clone https://github.com/LinkSoul-AI/AutoAgents
cd AutoAgents
python setup.py install

Configuration

  • Configure your OPENAI_API_KEY in any of config/key.yaml / config/config.yaml / env
  • Priority order: config/key.yaml > config/config.yaml > env
# Copy the configuration file and make the necessary modifications.
cp config/config.yaml config/key.yaml
Variable Name | config/key.yaml | env
OPENAI_API_KEY (replace with your own key) | OPENAI_API_KEY: "sk-..." | export OPENAI_API_KEY="sk-..."
OPENAI_API_BASE (optional) | OPENAI_API_BASE: "https://<YOUR_SITE>/v1" | export OPENAI_API_BASE="https://<YOUR_SITE>/v1"
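
As an illustration of this priority order, a minimal lookup could be sketched as below. This is not the project's actual configuration loader; it only mirrors the documented behaviour, using PyYAML (already a project dependency):

```python
import os
from pathlib import Path
from typing import Optional

import yaml  # PyYAML


def load_openai_api_key() -> Optional[str]:
    """Resolve OPENAI_API_KEY using the documented priority:
    config/key.yaml > config/config.yaml > environment variable."""
    for path in (Path("config/key.yaml"), Path("config/config.yaml")):
        if path.exists():
            data = yaml.safe_load(path.read_text()) or {}
            if data.get("OPENAI_API_KEY"):
                return data["OPENAI_API_KEY"]
    return os.environ.get("OPENAI_API_KEY")
```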

Usage

  • Commandline mode:
python main.py --mode commandline --llm_api_key YOUR_OPENAI_API_KEY --serpapi_key YOUR_SERPAPI_KEY --idea "Is LK-99 really a room temperature superconducting material?"
  • Websocket service mode:
python main.py --mode service --host "127.0.0.1" --port 9000
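
For the websocket service mode, a client along the lines of the following sketch could be used. The endpoint path and the message payload here are assumptions made purely for illustration (the bundled web demo talks to an /api websocket route); the actual message schema is not documented in this README:

```python
import asyncio
import json

import websockets  # listed among the project's installed dependencies


async def main():
    # Assumed URL: host/port from the service command above; the "/api"
    # path and the JSON payload below are illustrative guesses only.
    async with websockets.connect("ws://127.0.0.1:9000/api") as ws:
        await ws.send(json.dumps({"idea": "Write a snake game"}))
        async for message in ws:
            print(message)


asyncio.run(main())
```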

Docker

  • Build docker image:
IMAGE="linksoul.ai/autoagents"
VERSION=1.0

docker build -f docker/Dockerfile -t "${IMAGE}:${VERSION}" .
  • Start docker container:
docker run -it --rm -p 7860:7860 "${IMAGE}:${VERSION}"

Contributing

AutoAgents is dedicated to creating a cutting-edge automated multi-agent environment for large language models. We are actively seeking enthusiastic collaborators to embark with us on this thrilling and innovative journey.

This project exists thanks to all the people who contribute.

How Can You Contribute?

  • Issue Reporting and Pull Requests: Encountering difficulties with AutoAgents? Feel free to raise the issue in English. Additionally, you're welcome to take initiative by resolving these issues yourself. Simply request to be assigned the issue, and upon resolution, submit a pull request (PR) with your solution.

  • Software Development Contributions: As an engineer, your skills can significantly enhance AutoAgents. We are in constant pursuit of skilled developers to refine, optimize, and expand our framework, enriching our feature set and devising new modules.

  • Content Creation for Documentation and Tutorials: If writing is your forte, join us in improving our documentation and developing tutorials or blog posts. Your contribution will make AutoAgents more user-friendly and accessible to a diverse audience.

  • Innovative Application Exploration: Intrigued by the prospects of multi-agent systems? If you're keen to experiment with AutoAgents, we're excited to support your endeavors and curious to see your innovative creations.

  • User Feedback and Strategic Suggestions: We highly value user input. Engage with AutoAgents and share your feedback. Your insights are crucial for ongoing enhancements, ensuring our framework's excellence and relevance.

Contact Information

If you have any questions or feedback about this project, please feel free to contact us. We highly appreciate your suggestions!

We will respond to all questions within 2-3 business days.

License

MIT license

Citation

If you find our work and this repository useful, please consider giving a star ⭐ and citation 🍺:

@article{chen2023auto,
  title={AutoAgents: The Automatic Agents Generation Framework},
  author={Chen, Guangyao and Dong, Siwei and Shu, Yu and Zhang, Ge and Jaward, Sesay and Börje, Karlsson and Fu, Jie and Shi, Yemin},
  journal={arXiv preprint},
  year={2023}
}

Wechat Group


Acknowledgements

The system, action_bank, and role_bank of this code base are built using MetaGPT.

Icons in the framework made by Darius Dan, Freepik, kmg design, Flat Icons, Vectorslab from FlatIcon


Star History Chart

autoagents's People

Contributors

eltociear, harshhere905, hrushik98, icgy96, ishaan-jaff, jaykef, pentesterpriyanshu, s1w3, shiyemin, tabbbsy


autoagents's Issues

Errors interrupting the process

It was running well, the AI was assigning the agents and everything, but suddenly I got this error and it stopped:

RoleFeedback

The role suggestions provided were very helpful and have been incorporated into the role list. The suggestions for the UI/UX Designer, Adversarial
Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.
2023-10-19 19:07:12.974 | INFO | autoagents.system.provider.openai_api:update_cost:95 - Total running cost: $0.729 | Max budget: $10.000 | Current cost: $0.202, prompt_tokens=3727, completion_tokens=1500
Traceback (most recent call last):
File "D:\AI\AutoAgents\myenv\lib\site-packages\tenacity-8.2.2-py3.9.egg\tenacity_asyncio.py", line 50, in call
result = await fn(*args, **kwargs)
File "D:\AI\AutoAgents\autoagents\actions\action\action.py", line 65, in _aask_v1
instruct_content = output_class(**parsed_data)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.init
pydantic.error_wrappers.ValidationError: 1 validation error for task
PlanFeedback
field required (type=value_error.missing)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "D:\AI\AutoAgents\main.py", line 56, in
asyncio.run(commanline(proxy=proxy, llm_api_key=args.llm_api_key, serpapi_key=args.serpapi_key, idea=args.idea))
File "C:\Users\pedro\AppData\Local\Programs\Python\Python39\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "C:\Users\pedro\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
return future.result()
File "D:\AI\AutoAgents\main.py", line 30, in commanline
await startup.startup(idea, investment, n_round, llm_api_key=llm_api_key, serpapi_key=serpapi_key, proxy=proxy)
File "D:\AI\AutoAgents\startup.py", line 14, in startup
await explorer.run(n_round=n_round)
File "D:\AI\AutoAgents\autoagents\explorer.py", line 57, in run
await self.environment.run()
File "D:\AI\AutoAgents\autoagents\environment.py", line 192, in run
await asyncio.gather(*futures)
File "D:\AI\AutoAgents\autoagents\roles\role.py", line 239, in run
rsp = await self._react()
File "D:\AI\AutoAgents\autoagents\roles\role.py", line 209, in _react
return await self._act()
File "D:\AI\AutoAgents\autoagents\roles\manager.py", line 33, in _act
response = await self._rc.todo.run(self._rc.important_memory, history=roles_plan, suggestions=suggestions)
File "D:\AI\AutoAgents\autoagents\actions\create_roles.py", line 141, in run
rsp = await self.aask_v1(prompt, "task", OUTPUT_MAPPING)
File "D:\AI\AutoAgents\myenv\lib\site-packages\tenacity-8.2.2-py3.9.egg\tenacity_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
File "D:\AI\AutoAgents\myenv\lib\site-packages\tenacity-8.2.2-py3.9.egg\tenacity_asyncio.py", line 47, in call
do = self.iter(retry_state=retry_state)
File "D:\AI\AutoAgents\myenv\lib\site-packages\tenacity-8.2.2-py3.9.egg\tenacity_init
.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x285fc81c760 state=finished raised ValidationError>]
(myenv) PS D:\AI\AutoAgents>
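
For context on the `field required (type=value_error.missing)` message above: with pydantic v1, instantiating an output model from parsed LLM text raises exactly this error when an expected key is missing. A minimal, illustrative reproduction (this is not the project's actual output schema):

```python
from pydantic import BaseModel, ValidationError


class Task(BaseModel):
    RoleFeedback: str
    PlanFeedback: str


try:
    # The parsed LLM output lacks the "PlanFeedback" key.
    Task(**{"RoleFeedback": "The role suggestions were helpful."})
except ValidationError as err:
    print(err)  # 1 validation error for Task: PlanFeedback field required
```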

TypeError: 'async for' requires an object with __aiter__ method, got generator

I'm getting this error on Python 3.10 and 3.11. Any suggestions on what might fix it?

conda create -n AutoAgents python=3.10 -y
conda activate AutoAgents
pip3 install -r requirements.txt
python setup.py install

python main.py --mode commandline --idea "Write a poem about the last five Australian prime ministers" --llm_api_key=$OPENAI_API_KEY --serpapi_key=$SERPAPI_API_KEY

The output is an error:

2023-10-15 18:16:16.657 | INFO     | autoagents.system.config:__init__:43 - Config loading done.
2023-10-15 18:16:17.477 | INFO     | autoagents.explorer:invest:33 - Investment: $10.0.
Traceback (most recent call last):
  File "/Users/drnic/workspace/aiagents/AutoAgents/main.py", line 56, in <module>
    asyncio.run(commanline(proxy=proxy, llm_api_key=args.llm_api_key, serpapi_key=args.serpapi_key, idea=args.idea))
  File "/opt/homebrew/Caskroom/miniconda/base/envs/AutoAgents/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/AutoAgents/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/Users/drnic/workspace/aiagents/AutoAgents/main.py", line 30, in commanline
    await startup.startup(idea, investment, n_round, llm_api_key=llm_api_key, serpapi_key=serpapi_key, proxy=proxy)
  File "/Users/drnic/workspace/aiagents/AutoAgents/startup.py", line 14, in startup
    await explorer.run(n_round=n_round)
  File "/Users/drnic/workspace/aiagents/AutoAgents/autoagents/explorer.py", line 57, in run
    await self.environment.run()
  File "/Users/drnic/workspace/aiagents/AutoAgents/autoagents/environment.py", line 192, in run
    await asyncio.gather(*futures)
  File "/Users/drnic/workspace/aiagents/AutoAgents/autoagents/roles/role.py", line 239, in run
    rsp = await self._react()
  File "/Users/drnic/workspace/aiagents/AutoAgents/autoagents/roles/role.py", line 207, in _react
    await self._think()
  File "/Users/drnic/workspace/aiagents/AutoAgents/autoagents/roles/role.py", line 155, in _think
    next_state = await self._llm.aask(prompt)
  File "/Users/drnic/workspace/aiagents/AutoAgents/autoagents/system/provider/base_gpt_api.py", line 42, in aask
    rsp = await self.acompletion_text(message, stream=True)
  File "/Users/drnic/workspace/aiagents/AutoAgents/autoagents/system/provider/openai_api.py", line 33, in wrapper
    return await f(*args, **kwargs)
  File "/Users/drnic/workspace/aiagents/AutoAgents/autoagents/system/provider/openai_api.py", line 230, in acompletion_text
    return await self._achat_completion_stream(messages)
  File "/Users/drnic/workspace/aiagents/AutoAgents/autoagents/system/provider/openai_api.py", line 173, in _achat_completion_stream
    async for chunk in response:
TypeError: 'async for' requires an object with __aiter__ method, got generator

Installed packages:

$ pip3 list
Package                 Version
----------------------- ------------
aiohttp                 3.8.5
aiosignal               1.3.1
anthropic               0.3.6
anyio                   3.7.1
appdirs                 1.4.4
asgiref                 3.7.2
async-timeout           4.0.3
attrs                   23.1.0
autoagents              0.1
backoff                 2.2.1
camel-converter         3.0.3
certifi                 2023.7.22
channels                4.0.0
charset-normalizer      3.3.0
click                   8.1.7
cohere                  4.27
dataclasses-json        0.5.14
diskcache               5.6.3
distro                  1.8.0
Django                  4.2.6
duckduckgo-search       2.9.4
et-xmlfile              1.1.0
exceptiongroup          1.1.3
faiss-cpu               1.7.4
fastavro                1.8.2
filelock                3.12.4
fire                    0.4.0
frozenlist              1.4.0
fsspec                  2023.9.2
h11                     0.14.0
httpcore                0.18.0
httpx                   0.25.0
huggingface-hub         0.17.3
idna                    3.4
importlib-metadata      6.8.0
iniconfig               2.0.0
langchain               0.0.236
langsmith               0.0.10
litellm                 0.1.236
loguru                  0.6.0
lxml                    4.9.3
marshmallow             3.20.1
meilisearch             0.21.0
multidict               6.0.4
mypy-extensions         1.0.0
numexpr                 2.8.7
numpy                   1.24.3
openai                  0.27.8
openapi-schema-pydantic 1.2.4
openpyxl                3.1.2
packaging               23.2
pandas                  1.4.1
pandas-stubs            2.0.2.230605
pip                     23.2.1
pluggy                  1.3.0
pydantic                1.10.7
pytest                  7.2.2
python-dateutil         2.8.2
python-docx             0.8.11
python-dotenv           1.0.0
pytz                    2023.3.post1
PyYAML                  6.0
regex                   2023.10.3
replicate               0.15.4
requests                2.31.0
setuptools              65.6.3
six                     1.16.0
sniffio                 1.3.0
SQLAlchemy              2.0.22
sqlparse                0.4.4
tenacity                8.2.2
termcolor               2.3.0
tiktoken                0.3.3
tokenizers              0.14.1
tomli                   2.0.1
tqdm                    4.64.0
types-pytz              2023.3.1.1
typing_extensions       4.5.0
typing-inspect          0.8.0
urllib3                 2.0.6
websockets              11.0.3
wheel                   0.41.2
yarl                    1.9.2
zipp                    3.17.0
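
The traceback ends in `_achat_completion_stream`, where `async for chunk in response:` assumes the streamed response is an async iterator. As a hedged, untested workaround sketch (not an official fix), the response could be wrapped so that either a plain generator or an async generator can be consumed; whether that resolves the underlying openai-package version mismatch is a separate question:

```python
async def iter_stream(response):
    """Yield chunks from a streamed completion response, whether the
    client returned an async iterator or a plain generator."""
    if hasattr(response, "__aiter__"):
        async for chunk in response:
            yield chunk
    else:
        for chunk in response:
            yield chunk
```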

Typo

python main.py --mode commandline --llm_api_key YOUR_OPENAI_API_KEY --serapi_key YOUR_SERPAPI_KEY --idea "Is LK-99 really a room temperature superconducting material?"

You have serapi instead of serpapi

No API key provided.

Using the docker container, I get:

"openai.error.AuthenticationError: No API key provided."

even though both API keys are provided via the web interface.

Azure Open AI API

How can I run this using the Azure OpenAI API?
I even commented out the OpenAI API settings and used the Azure API, but it still asks me to input the OpenAI API key.
Can you help?

# DO NOT MODIFY THIS FILE, create a new key.yaml, define OPENAI_API_KEY.
# The configuration of key.yaml has a higher priority and will not enter git

# if OpenAI
OPENAI_API_KEY: "YOUR_API_KEY"
OPENAI_API_BASE: "YOUR_API_BASE"
OPENAI_PROXY: "http://127.0.0.1:8118"
#OPENAI_API_MODEL: "gpt-4"
#MAX_TOKENS: 1500
#RPM: 10

# if Anthropic
#Anthropic_API_KEY: "YOUR_API_KEY"

# if AZURE, check https://github.com/openai/openai-cookbook/blob/main/examples/azure/chat.ipynb
OPENAI_API_TYPE: "azure"
OPENAI_API_BASE: "xxxx"
OPENAI_API_KEY: "xxxxx"
OPENAI_API_VERSION: "2023-08-01-preview"
DEPLOYMENT_ID: "xxxxx"

Feature: Adding contributors section to the README.md file.

There is no Contributors section in the README file.
As we know, contributions are what make the open-source community such an amazing place to learn, inspire, and create.
The Contributors section in a README.md file is important because it acknowledges and gives credit to those who have contributed to a project, fosters community and collaboration, adds transparency and accountability, and helps document the project's history for current and future maintainers. It also serves as a form of recognition, motivating contributors to continue their efforts.

Unable to run the service

I built the image in an x86 environment and successfully ran the container, with these environment variables configured:

OPENAI_API_KEY
OPENAI_API_BASE
GOOGLE_API_KEY
GOOGLE_CSE_ID

The environment variables are configured via docker-compose.yml:

version: "3"

services:

  autogents:
    container_name: autoagents
    image: linksoul.ai/autoagents:1.0
    ports:
      - "7860:7860"
    environment:
      - OPENAI_API_KEY=sk-dxxxxxxxxxxxxxxxxxxxxx
      - OPENAI_API_BASE=https://xxxxxxxxxxxxxxxx/v1
      - GOOGLE_API_KEY=xxxxxxxxxxxx
      - GOOGLE_CSE_ID=xxxxxxxxxxxxxxxxxxxxx

The frontend is configured with OPENAI_API_KEY and serpapi_key.

Running the demo: "write a novel"

The following error appears:

*** Starting uWSGI 2.0.21 (64bit) on [Sun Sep  3 07:56:42 2023] ***
compiled with version: 10.2.1 20210110 on 28 August 2023 00:19:00
os: Linux-6.2.16-4-bpo11-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-4~bpo11+1 (2023-07-07T15:05Z)
nodename: c0b5303aa6bf
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 16
current working directory: /app/autoagents
detected binary path: /usr/local/bin/uwsgi
your processes number limit is 255937
your memory page size is 4096 bytes
detected max file descriptor number: 524288
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
Python version: 3.10.13 (main, Aug 26 2023, 00:59:23) [GCC 10.2.1 20210110]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x564ecd919d00
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 1239640 bytes (1210 KB) for 16 cores
*** Operational MODE: preforking ***
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x564ecd919d00 pid: 21 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 21)
spawned uWSGI worker 1 (pid: 23, cores: 1)
spawned uWSGI worker 2 (pid: 24, cores: 1)
running "unix_signal:15 gracefully_kill_them_all" (master-start)...
2023-09-03 07:56:43,071 INFO success: quit_on_failure entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /style.css HTTP/1.1" 304 0 "http://192.168.125.104:7860/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /libs/bootsrap.min.css HTTP/1.1" 304 0 "http://192.168.125.104:7860/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /static/css/bulma-slider.min.css HTTP/1.1" 304 0 "http://192.168.125.104:7860/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /static/css/fontawesome.all.min.css HTTP/1.1" 304 0 "http://192.168.125.104:7860/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /static/css/bulma.min.css HTTP/1.1" 304 0 "http://192.168.125.104:7860/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /static/css/bulma-carousel.min.css HTTP/1.1" 304 0 "http://192.168.125.104:7860/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /static/css/index.css HTTP/1.1" 304 0 "http://192.168.125.104:7860/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /static/css/bootsrap.min.css HTTP/1.1" 304 0 "http://192.168.125.104:7860/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /static/css/styles.css HTTP/1.1" 304 0 "http://192.168.125.104:7860/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /static/js/bulma-carousel.min.js HTTP/1.1" 304 0 "http://192.168.125.104:7860/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /static/js/bulma-slider.min.js HTTP/1.1" 304 0 "http://192.168.125.104:7860/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /demo.html HTTP/1.1" 304 0 "http://192.168.125.104:7860/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /static/webfonts/fa-solid-900.woff2 HTTP/1.1" 304 0 "http://192.168.125.104:7860/static/css/fontawesome.all.min.css" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /static/webfonts/fa-brands-400.woff2 HTTP/1.1" 304 0 "http://192.168.125.104:7860/static/css/fontawesome.all.min.css" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /libs/bootstrap.bundle.min.js HTTP/1.1" 304 0 "http://192.168.125.104:7860/demo.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:51 +0000] "GET /js/app_websocket.js HTTP/1.1" 304 0 "http://192.168.125.104:7860/demo.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
2023-09-03 07:56:51,465 | WARNING  | ws_service:echo:129 - New user registered, uid: 20230903075651.465486_ea38d31a-5d73-4e21-b50f-7199323210ce
192.168.125.107 - - [03/Sep/2023:07:56:53 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
192.168.125.107 - - [03/Sep/2023:07:56:53 +0000] "GET /api HTTP/1.1" 101 4 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
2023-09-03 07:56:53,282 | WARNING  | ws_service:echo:143 - Websocket closed: remote endpoint going away
2023-09-03 07:56:53,282 | WARNING  | ws_service:echo:147 - Auto unregister, uid: 20230903075651.465486_ea38d31a-5d73-4e21-b50f-7199323210ce
2023-09-03 07:56:53,425 | WARNING  | ws_service:echo:129 - New user registered, uid: 20230903075653.425462_8c831a89-4c5f-45f1-943a-e407ca78dcdb
192.168.125.107 - - [03/Sep/2023:07:57:18 +0000] "GET /images/default.jpg HTTP/1.1" 304 0 "http://192.168.125.104:7860/demo.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
2023-09-03 07:57:18,544 | WARNING  | ws_service:handle_message_wrapper:65 - New task:ea39abd4-96c2-4ea6-9e0c-c5687c67c311
2023-09-03 07:57:18.545 | INFO     | autoagents.explorer:invest:33 - Investment: $3.0.
2023-09-03 07:57:18.547 | INFO     | autoagents.roles.role:_act:167 - Ethan(Manager): ready to CreateRoles
192.168.125.107 - - [03/Sep/2023:07:57:18 +0000] "GET /images/1.jpg HTTP/1.1" 304 0 "http://192.168.125.104:7860/demo.html" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
2023-09-03 07:57:55,030 | ERROR    | ws_service:handle_message:62 - Traceback (most recent call last):
  File "/app/autoagents/ws_service.py", line 55, in handle_message
    await startup.startup(idea=idea, task_id=task_id, llm_api_key=llm_api_key, serpapi_key=serpapi_key, proxy=proxy, alg_msg_queue=alg_msg_queue)
  File "/app/autoagents/startup.py", line 17, in startup
    await explorer.run(n_round=n_round)
  File "/app/autoagents/autoagents/explorer.py", line 59, in run
    await self.environment.run()
  File "/app/autoagents/autoagents/environment.py", line 186, in run
    await asyncio.gather(*futures)
  File "/app/autoagents/autoagents/roles/role.py", line 239, in run
    rsp = await self._react()
  File "/app/autoagents/autoagents/roles/role.py", line 209, in _react
    return await self._act()
  File "/app/autoagents/autoagents/roles/role.py", line 168, in _act
    response = await self._rc.todo.run(self._rc.important_memory)
  File "/app/autoagents/autoagents/actions/create_roles.py", line 123, in run
    rsp = await sas.run(context=context, system_text=SEARCH_AND_SUMMARIZE_SYSTEM_EN_US)
  File "/app/autoagents/autoagents/actions/search_and_summarize.py", line 151, in run
    result = await self._aask(prompt, system_prompt)
  File "/app/autoagents/autoagents/actions/action.py", line 48, in _aask
    return await self.llm.aask(prompt, system_msgs)
  File "/app/autoagents/autoagents/provider/base_gpt_api.py", line 41, in aask
    rsp = await self.acompletion_text(message, stream=True)
  File "/app/autoagents/autoagents/provider/openai_api.py", line 32, in wrapper
    return await f(*args, **kwargs)
  File "/app/autoagents/autoagents/provider/openai_api.py", line 228, in acompletion_text
    return await self._achat_completion_stream(messages)
  File "/app/autoagents/autoagents/provider/openai_api.py", line 162, in _achat_completion_stream
    response = await openai.ChatCompletion.acreate(
  File "/home/user/.local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(
  File "/home/user/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 382, in arequest
    resp, got_stream = await self._interpret_async_response(result, stream)
  File "/home/user/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 726, in _interpret_async_response
    self._interpret_response_line(
  File "/home/user/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 743, in _interpret_response_line
    raise error.ServiceUnavailableError(
openai.error.ServiceUnavailableError: The server is overloaded or not ready yet.
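
The failure above is a transient openai.error.ServiceUnavailableError returned by the API rather than a configuration problem. A hedged sketch of how such a call could be retried with tenacity (which the project already uses elsewhere); this is illustrative only and not code from the repository:

```python
from tenacity import retry, stop_after_attempt, wait_exponential


@retry(stop=stop_after_attempt(5), wait=wait_exponential(min=1, max=30))
async def chat_completion_with_retry(acreate, **kwargs):
    """Retry a coroutine-based chat-completion call with exponential backoff.
    In practice the retry condition would be narrowed to transient errors
    such as ServiceUnavailableError."""
    return await acreate(**kwargs)
```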

Produced Code Not Found

AutoAgents produced a lot of OK-level information and plans and then said it had completed my task, but it hadn't. The final comment said that the application code had been reviewed, tested, and was ready to launch, but the code is nowhere to be found.

Integrate with LM Studio

I tried this on 'https://huggingface.co/spaces/LinkSoul/AutoAgents'. Four chats blew through my $0.50 of OpenAI credit, and it was just a test to see how this works.

I just wonder if we could have a text box on this Hugging Face site that points to LM Studio, because I use LM Studio and it is a very easy way to get and use LLMs. It can also act as a local server. If I could just connect that server's IP to the Hugging Face space you built, that would be awesome.

Incorrect API key provided

After deploying the project locally, I kept getting error messages saying that my API key was incorrect, even after trying several different API keys. I also tested the Online Demo linked from your GitHub project and encountered errors there as well. Can you help me analyze the reason, or share any insights? Could the issue be that the API keys I bought online are intermediary/resold keys? I don't have a good way forward, please help me analyze it.
(screenshot attached)

autogen

What is the difference between AutoGen from Microsoft and this project?

TypeError: 'async for' requires an object with __aiter__ method, got generator

Not able to run AutoAgents on a fresh install.

python main.py --mode commandline --llm_api_key sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxx --idea "Is LK-99 really a room temperature superconducting material?"
2023-10-15 13:37:44.757 | INFO | autoagents.system.config:__init__:43 - Config loading done.
SerpAPI key:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
2023-10-15 13:37:51.314 | INFO | autoagents.explorer:invest:33 - Investment: $10.0.
Traceback (most recent call last):
File "/home/aitoofaan/llms/autoagents/AutoAgents/main.py", line 56, in
asyncio.run(commanline(proxy=proxy, llm_api_key=args.llm_api_key, serpapi_key=args.serpapi_key, idea=args.idea))
File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/home/aitoofaan/llms/autoagents/AutoAgents/main.py", line 30, in commanline
await startup.startup(idea, investment, n_round, llm_api_key=llm_api_key, serpapi_key=serpapi_key, proxy=proxy)
File "/home/aitoofaan/llms/autoagents/AutoAgents/startup.py", line 14, in startup
await explorer.run(n_round=n_round)
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/explorer.py", line 57, in run
await self.environment.run()
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/environment.py", line 192, in run
await asyncio.gather(*futures)
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/roles/role.py", line 239, in run
rsp = await self._react()
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/roles/role.py", line 207, in _react
await self._think()
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/roles/role.py", line 155, in _think
next_state = await self._llm.aask(prompt)
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/system/provider/base_gpt_api.py", line 42, in aask
rsp = await self.acompletion_text(message, stream=True)
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/system/provider/openai_api.py", line 33, in wrapper
return await f(*args, **kwargs)
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/system/provider/openai_api.py", line 230, in acompletion_text
return await self._achat_completion_stream(messages)
File "/home/aitoofaan/llms/autoagents/AutoAgents/autoagents/system/provider/openai_api.py", line 173, in _achat_completion_stream
async for chunk in response:
TypeError: 'async for' requires an object with __aiter__ method, got generator

[bug] - Run Error with a SnakeGame example

Execution

  • gpt-4-0613
  • python main.py --mode commandline --llm_api_key sk-xxxxxxx --serpapi_key adcxxxxxxx --idea "Write a snake game"
  • version:main, commit id 3a341d054a6535f892b1541ea98558f7b082e3f2

What configurations or steps did I do wrong?

Or is this project simply not capable of generating an application at the level of Snake?

Error Log

## CurrentStep
Test the code to ensure it works as expected.

## Action
Print

## ActionInput
2024-01-24 09:05:26.450 | INFO     | autoagents.system.provider.openai_api:update_cost:95 - Total running cost: $0.709 | Max budget: $10.000 | Current cost: $0.142, prompt_tokens=928, completion_tokens=720
2024-01-24 09:05:56.482 | INFO     | autoagents.roles.group:_act:87 - Alex(Group): ready to Game_Logic_Designer_Action
To test the code, run the Python file 'snake_game.py' in a terminal. If the game starts and you can control the snake using the arrow keys, the code works as expected. If the game ends when the snake hits the border or itself, and the snake grows when it eats the food, then the game logic is correct.

## Thought
The previous steps have defined the rules and mechanics of the snake game and tested the code. The current step is to write the code for the game logic.

## Task
[Game Logic Designer]: Design the rules and mechanics of the snake game. Write the code for the game logic. Test the code to ensure it works as expected. The output of this step should be the code for the game logic.

## CurrentStep
Write the code for the snake game logic.

## Action
Write File

## ActionInput
>>>snake_game.py
```python
import random
import curses

s = curses.initscr()
curses.curs_set(0)
sh, sw = s.getmaxyx()
w = curses.newwin(sh, sw, 0, 0)
w.keypad(1)
w.timeout(100)

snk_x = sw//4
snk_y = sh//2
snake = [
    [snk_y, snk_x],
    [snk_y, snk_x-1],
    [snk_y, snk_x-2]
]

food = [sh//2, sw//2]
w.addch(int(food[0]), int(food[1]), curses.ACS_PI)

key = curses.KEY_RIGHT

while True:
    next_key = w.getch()
    key = key if next_key == -1 else next_key

    if snake[0][0] in [0, sh] or \
        snake[0][1]  in [0, sw] or \
        snake[0] in snake[1:]:
        curses.endwin()
        quit()

    new_head = [snake[0][0], snake[0][1]]

    if key == curses.KEY_DOWN:
        new_head[0] += 1
    if key == curses.KEY_UP:
        new_head[0] -= 1
    if key == curses.KEY_LEFT:
        new_head[1] -= 1
    if key == curses.KEY_RIGHT:
        new_head[1] += 1

    snake.insert(0, new_head)

    if snake[0] == food:
        food = None
        while food is None:
            nf = [
                random.randint(1, sh-1),
                random.randint(1, sw-1)
            ]
            food = nf if nf not in snake else None
        w.addch(food[0], food[1], curses.ACS_PI)
    else:
        tail = snake.pop()
        w.addch(int(tail[0]), int(tail[1]), ' ')

    w.addch(int(snake[0][0]), int(snake[0][1]), curses.ACS_CKBOARD)

```
>>>END
2024-01-24 09:06:36.705 | INFO     | autoagents.system.provider.openai_api:update_cost:95 - Total running cost: $0.838 | Max budget: $10.000 | Current cost: $0.129, prompt_tokens=1022, completion_tokens=568
Traceback (most recent call last):
  File "/workspaces/AutoAgents/main.py", line 56, in <module>
    asyncio.run(commanline(proxy=proxy, llm_api_key=args.llm_api_key, serpapi_key=args.serpapi_key, idea=args.idea))
  File "/usr/local/python/3.10.13/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/python/3.10.13/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/workspaces/AutoAgents/main.py", line 30, in commanline
    await startup.startup(idea, investment, n_round, llm_api_key=llm_api_key, serpapi_key=serpapi_key, proxy=proxy)
  File "/workspaces/AutoAgents/startup.py", line 14, in startup
    await explorer.run(n_round=n_round)
  File "/workspaces/AutoAgents/autoagents/explorer.py", line 57, in run
    await self.environment.run()
  File "/workspaces/AutoAgents/autoagents/environment.py", line 203, in run
    await asyncio.gather(*futures)
  File "/workspaces/AutoAgents/autoagents/roles/role.py", line 239, in run
    rsp = await self._react()
  File "/workspaces/AutoAgents/autoagents/roles/role.py", line 209, in _react
    return await self._act()
  File "/workspaces/AutoAgents/autoagents/roles/group.py", line 91, in _act
    response = await self._rc.todo.run(context)
  File "/workspaces/AutoAgents/autoagents/actions/custom_action.py", line 142, in run
    filename = re.findall('>>>(.*?)\n', str(rsp.instruct_content.ActionInput))[0]
IndexError: list index out of range
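
The IndexError comes from custom_action.py indexing [0] into the result of re.findall('>>>(.*?)\n', ...), which is empty whenever the model's ActionInput contains no `>>>filename` marker (as in the Print step above). A defensive variant, offered only as an illustrative sketch rather than a patch from the maintainers:

```python
import re


def extract_target_filename(action_input: str):
    """Return the '>>>filename' marker from an ActionInput block,
    or None when the model omitted it, instead of raising IndexError."""
    matches = re.findall(r'>>>(.*?)\n', action_input)
    return matches[0].strip() if matches else None
```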

HTTP code 404 from API error

When running, I get the error "raise error.APIError(openai.error.APIError: HTTP code 404 from API (

404 Not Found

)". I suspect this is because the api_key and api_base I am using, together with the OPENAI_API_MODEL, do not match the GPT versions currently supported by the project (I am using gpt-3.5-turbo). What should the URL of the API I use be, including the version number?

ImportError: cannot import name 'ModelField' from 'pydantic.fields'

File "/data_local/02-hw/AutoAgents/autoagents/actions/action/init.py", line 1, in
from .action import Action
File "/data_local/02-hw/AutoAgents/autoagents/actions/action/action.py", line 14, in
from autoagents.system.llm import LLM
File "/data_local/02-hw/AutoAgents/autoagents/system/llm.py", line 7, in
from .provider.anthropic_api import Claude2 as Claude
File "/data_local/02-hw/AutoAgents/autoagents/system/provider/anthropic_api.py", line 10, in
import anthropic
File "/root/anaconda3/envs/auto-agents/lib/python3.11/site-packages/anthropic/init.py", line 3, in
from . import types
File "/root/anaconda3/envs/auto-agents/lib/python3.11/site-packages/anthropic/types/init.py", line 5, in
from .completion import Completion as Completion
File "/root/anaconda3/envs/auto-agents/lib/python3.11/site-packages/anthropic/types/completion.py", line 3, in
from .._models import BaseModel
File "/root/anaconda3/envs/auto-agents/lib/python3.11/site-packages/anthropic/_models.py", line 11, in
from pydantic.fields import ModelField
ImportError: cannot import name 'ModelField' from 'pydantic.fields' (/root/anaconda3/envs/auto-agents/lib/python3.11/site-packages/pydantic/fields.py)
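
ModelField exists only in pydantic v1 (the dependency list earlier on this page shows pydantic 1.10.7), so the environment raising this error most likely has pydantic v2 installed, which anthropic 0.3.6 does not support. A quick, illustrative check:

```python
import pydantic

# anthropic 0.3.6 imports pydantic.fields.ModelField, which is v1-only.
print(pydantic.VERSION)  # a 1.x version (e.g. 1.10.7) is expected here
```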
