kuafuai / devopsgpt

Multi agent system for AI-driven software development. Combine LLM with DevOps tools to convert natural language requirements into working software. Supports any development language and extends the existing code.

Home Page: https://www.kuafuai.net

License: Other

Python 17.76% Smarty 0.28% HTML 43.96% CSS 6.60% JavaScript 22.63% Batchfile 0.20% Shell 0.25% Dockerfile 0.04% Mako 0.04% Less 8.25%

devopsgpt's Introduction

DevOpsGPT: AI-Driven Software Development Automation Solution

Docs: CN | EN | JA · Roadmap

💡 Get Help - Q&A

💡 Submit Requests - Issue

💡 Technical exchange - [email protected]


Introduction

Welcome to the AI Driven Software Development Automation Solution, abbreviated as DevOpsGPT. We combine LLM (Large Language Model) with DevOps tools to convert natural language requirements into working software. This innovative feature greatly improves development efficiency, shortens development cycles, and reduces communication costs, resulting in higher-quality software delivery.

Features and Benefits

  • Improved development efficiency: No need for tedious requirement document writing and explanations. Users can interact directly with DevOpsGPT to quickly convert requirements into functional software.
  • Shortened development cycles: The automated software development process significantly reduces delivery time, accelerating software deployment and iterations.
  • Reduced communication costs: By accurately understanding user requirements, DevOpsGPT minimizes the risk of communication errors and misunderstandings, enhancing collaboration efficiency between development and business teams.
  • High-quality deliverables: DevOpsGPT generates code and performs validation, ensuring the quality and reliability of the delivered software.
  • [Enterprise Edition] Existing-project analysis: AI automatically analyzes existing project information and accurately decomposes and develops the required tasks on top of the existing codebase.
  • [Enterprise Edition] Professional model selection: supports domain-specific language models that outperform GPT in professional fields, and supports private deployment.
  • [Enterprise Edition] More DevOps platforms: integrates with additional DevOps platforms to cover the full development and deployment process.

Demo (click to play video)

  1. DevOpsGPT Vision video
  2. Demo - Software development and deployment to Cloud
  3. Demo - Develop an API for adding users in Java SpringBoot

Workflow

Given the introduction and demos above, you may be wondering how DevOpsGPT automates the entire requirement-development process in an existing project. Here is a brief overview:

(Workflow diagram)

  • Clarify requirement documents: Interact with DevOpsGPT to clarify and confirm details in requirement documents.
  • Generate interface documentation: DevOpsGPT can generate interface documentation based on the requirements, facilitating interface design and implementation for developers.
  • Write pseudocode based on existing projects: Analyze existing projects to generate corresponding pseudocode, providing developers with references and starting points.
  • Refine and optimize code functionality: Developers improve and optimize functionality based on the generated code.
  • Continuous integration: Utilize DevOps tools for continuous integration to automate code integration and testing.
  • Software version release: Deploy software versions to the target environment using DevOpsGPT and DevOps tools.
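The staged workflow above can be sketched as a simple orchestration pipeline. This is an illustrative outline only, not DevOpsGPT's actual implementation: the function names and string formats are hypothetical, standing in for the LLM-backed stages.

```python
# Illustrative sketch of a DevOpsGPT-style workflow (hypothetical names,
# not the project's real API): each stage transforms the artifact produced
# by the previous one, mirroring the bullet list above.

def clarify_requirements(raw_requirement: str) -> str:
    # In DevOpsGPT this is an interactive LLM dialogue; here we just tag it.
    return f"clarified: {raw_requirement}"

def generate_interface_doc(requirement: str) -> str:
    # Stage 2: derive interface documentation from the clarified requirement.
    return f"interface doc for [{requirement}]"

def write_pseudocode(interface_doc: str) -> str:
    # Stage 3: produce pseudocode as a starting point for developers.
    return f"pseudocode from [{interface_doc}]"

def run_pipeline(raw_requirement: str) -> list[str]:
    """Run the stages in order and return every intermediate artifact."""
    requirement = clarify_requirements(raw_requirement)
    interface_doc = generate_interface_doc(requirement)
    pseudocode = write_pseudocode(interface_doc)
    return [requirement, interface_doc, pseudocode]
```

The remaining steps (refinement, CI, release) are driven by DevOps tooling rather than the LLM, so they are omitted from the sketch.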

Use Cloud Services

Visit kuafuai.net

Quick Start

  1. Run with source code

    1. Download the released version, or clone the latest code (which may be unstable). Ensure SQLite and Python 3.7 or later are installed.
    2. Generate the configuration file: Copy env.yaml.tpl and rename it to env.yaml.
    3. Modify the configuration file: Edit env.yaml and add the necessary information such as GPT Token (refer to documentation link for detailed instructions).
    4. Run the service: Execute sh run.sh on Linux or Mac, or double-click run.bat on Windows.
    5. Access the service: Access the service through a browser (check the startup log for the access address, default is http://127.0.0.1:8080).
    6. Complete requirement development: Follow the instructions on the page to complete requirement development, and view the generated code in the ./workspace directory.
  2. Run with Docker

    1. Create a directory: mkdir -p workspace
    2. Copy env.yaml.tpl from the repository to the current directory and rename it to env.yaml
    3. Modify the configuration file: edit env.yaml and add necessary information such as GPT Token.
    4. Run the container:

       docker run -it \
         -v $PWD/workspace:/app/workspace \
         -v $PWD/env.yaml:/app/env.yaml \
         -p 8080:8080 -p 8081:8081 kuafuai/devopsgpt:latest
      
    5. Access the service: Access the service through a browser (access address provided in the startup log, the default is http://127.0.0.1:8080).
    6. Complete the requirement development: follow the guidance on the page, then view the generated code in the ./workspace directory.

For detailed documentation and configuration parameters, please refer to the documentation link.
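For reference, an env.yaml might look like the sketch below. Only the keys that appear elsewhere on this page (BACKEND_HOST and the GIT_* keys from an issue report) are grounded; everything else, including the token key name, is a placeholder. Always start from env.yaml.tpl in the repository, which is authoritative.

```yaml
# Hypothetical env.yaml sketch — copy env.yaml.tpl and fill in real values.
BACKEND_HOST: "0.0.0.0"          # host the backend binds to
DEVOPS_TOOLS: "github"
GIT_ENABLED: true
GIT_URL: "https://github.com"
GIT_API: "https://api.github.com"
GIT_TOKEN: "<your token>"        # personal access token (placeholder)
GIT_USERNAME: "<your username>"
GIT_EMAIL: "<your email>"
```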

Limitations

Although we strive to enhance enterprise-level software development efficiency and reduce barriers with the help of large-scale language models, there are still some limitations in the current version:

  • The generation of requirement and interface documentation may not be precise enough and might not meet developer intent in complex scenarios.
  • In the current version, automating the understanding of existing project code is not possible. We are exploring a new solution that has shown promising results during validation and will be introduced in a future version.

Product Roadmap

  • Accurate requirement decomposition and development task breakdown based on existing projects.
  • New product experiences for rapid import of development requirements and parallel automation of software development and deployment.
  • Introduce more software engineering tools and professional tools to quickly complete various software development tasks under AI planning and execution.

We invite you to participate in the DevOpsGPT project and contribute to the automation and innovation of software development, creating smarter and more efficient software systems!

Disclaimer

This project, DevOpsGPT, is an experimental application and is provided "as-is" without any warranty, express or implied. By using this software, you agree to assume all risks associated with its use, including but not limited to data loss, system failure, or any other issues that may arise.

The developers and contributors of this project do not accept any responsibility or liability for any losses, damages, or other consequences that may occur as a result of using this software. You are solely responsible for any decisions and actions taken based on the information provided by DevOpsGPT.

Please note that the use of the GPT language model can be expensive due to its token usage. By utilizing this project, you acknowledge that you are responsible for monitoring and managing your own token usage and the associated costs. It is highly recommended to check your OpenAI API usage regularly and set up any necessary limits or alerts to prevent unexpected charges.

As an autonomous experiment, DevOpsGPT may generate content or take actions that are not in line with real-world business practices or legal requirements. It is your responsibility to ensure that any actions or decisions made based on the output of this software comply with all applicable laws, regulations, and ethical standards. The developers and contributors of this project shall not be held responsible for any consequences arising from the use of this software.

By using DevOpsGPT, you agree to indemnify, defend, and hold harmless the developers, contributors, and any affiliated parties from and against any and all claims, damages, losses, liabilities, costs, and expenses (including reasonable attorneys' fees) arising from your use of this software or your violation of these terms.

Reference project


devopsgpt's Issues

Expecting value: line 1 column 1 (char 0)

Traceback (most recent call last):
File "/root/DevOpsGPT/backend/app/pkgs/tools/llm.py", line 15, in chatCompletion
message, success = obj.chatCompletion(context)
File "/root/DevOpsGPT/backend/app/pkgs/tools/llm_basic.py", line 73, in chatCompletion
raise e
File "/root/DevOpsGPT/backend/app/pkgs/tools/llm_basic.py", line 59, in chatCompletion
response = openai.ChatCompletion.create(
File "/usr/local/lib/python3.10/dist-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/DevOpsGPT/backend/app/pkgs/tools/llm.py", line 19, in chatCompletion
message, success = obj.chatCompletion(context)
File "/root/DevOpsGPT/backend/app/pkgs/tools/llm_basic.py", line 73, in chatCompletion
raise e
File "/root/DevOpsGPT/backend/app/pkgs/tools/llm_basic.py", line 59, in chatCompletion
response = openai.ChatCompletion.create(
File "/usr/local/lib/python3.10/dist-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.
Traceback (most recent call last):
File "/root/DevOpsGPT/backend/app/controllers/common.py", line 9, in decorated_function
result = func(*args, **kwargs)
File "/root/DevOpsGPT/backend/app/controllers/step_requirement.py", line 23, in clarify
msg, success = clarifyRequirement(userPrompt, globalContext, appArchitecture)
File "/root/DevOpsGPT/backend/app/pkgs/prompt/prompt.py", line 19, in clarifyRequirement
return obj.clarifyRequirement(userPrompt, globalContext, appArchitecture)
File "/root/DevOpsGPT/backend/app/pkgs/prompt/requirement_basic.py", line 80, in clarifyRequirement
return json.loads(message), success
File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
172.30.192.1 - - [14/Sep/2023 18:35:53] "POST /step_requirement/clarify HTTP/1.1" 200 -
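The JSONDecodeError at the bottom of this trace happens because a non-JSON reply (here, the RateLimitError text) reaches json.loads. A common defensive pattern, shown as a generic sketch rather than a patch to requirement_basic.py, is to validate the model output before decoding:

```python
import json

def parse_llm_json(message):
    """Try to decode an LLM reply as JSON; return (data, ok) instead of raising.

    If the upstream call failed (rate limit, proxy error, plain-text reply),
    the caller gets (None, False) and can surface a readable error message
    rather than an unhandled JSONDecodeError.
    """
    try:
        return json.loads(message), True
    except (json.JSONDecodeError, TypeError):
        return None, False

# A valid JSON reply decodes normally; an error string is reported cleanly.
data, ok = parse_llm_json('{"question": "ok"}')
bad, bad_ok = parse_llm_json("You exceeded your current quota...")
```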

Failed to access GPT

I get this on both Windows and Docker.

Error: Failed to access GPT, please check whether your network can connect to GPT and terminal proxy is running properly.

Some files fail to write or check.

Back end listens but curl gives 404

Back end listens but curl gives 404

ubuntu@devopsllm:~/DevOpsGPT-master$ netstat -nat | grep LISTEN
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8081 0.0.0.0:* LISTEN <<

but back end gives 404,

ubuntu@devopsllm:~/DevOpsGPT-master$ curl http://127.0.0.1:8081
<!doctype html>

<title>404 Not Found</title>

Not Found

The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.

Local Response JSON Serialize

I replaced the OpenAI code in "llm_basic.py" with a request to the RWKV-Runner API and tested with the model "RWKV-4-World-3B-v1-20230619-ctx4096". The API is configured correctly: I tested it with a prompt from "subtask_basic.py", and the returned ["choices"][0]["message"]["content"] looks correct (see the "rwkv_response.json" window in the screenshot). But I still get errors after running a simple task.

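When swapping in a local OpenAI-compatible endpoint (RWKV-Runner here), a frequent failure mode is a response body that deviates slightly from OpenAI's schema. A hedged sketch of defensively extracting the reply text — the field path is the standard OpenAI chat-completion layout, but a local server may differ:

```python
def extract_reply(response):
    """Pull choices[0].message.content out of an OpenAI-style response dict.

    Returns None instead of raising if any level of the structure is missing,
    which makes schema mismatches from local servers easy to detect and log.
    """
    try:
        return response["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        return None

# A conforming response yields the text; anything else yields None.
ok_resp = {"choices": [{"message": {"content": "hello"}}]}
reply = extract_reply(ok_resp)
missing = extract_reply({"error": "bad request"})
```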

Unable to push the code to github

Hi,

What changes do I have to make in order to push the DevOpsGPT generated code to my github?

I have made changes in env.yaml file as per following:
DEVOPS_TOOLS: "github"
GIT_ENABLED: true
GIT_URL: "https://github.com"
GIT_API: "https://api.github.com"
GIT_TOKEN:
GIT_USERNAME:
GIT_EMAIL:

When I click on "submit code", it tries to push the code to kuafuai's repo and gives following error:
"Failed to push code. In the ./workspace/19/kuafuai/template_freestyleApp directory. git push failed."
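One likely cause (an assumption, not verified against DevOpsGPT's code) is that the git remote still points at the template repository rather than your own, or lacks credentials. A generic sketch of building an authenticated HTTPS remote URL that you could set with `git remote set-url origin <url>`:

```python
from urllib.parse import quote

def authenticated_remote(username, token, repo):
    """Build an HTTPS git remote URL with an embedded personal access token.

    `repo` is "owner/name"; username and token are URL-quoted in case they
    contain special characters. The values below are placeholders.
    """
    return f"https://{quote(username, safe='')}:{quote(token, safe='')}@github.com/{repo}.git"

url = authenticated_remote("alice", "ghp_example", "alice/myrepo")
```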

Hoping to get in touch, and to add InternLM support

Dear DevOpsGPT developers: I am a developer and volunteer in the InternLM community. Your open-source work has been very inspiring to me, and I would like to discuss the feasibility and implementation path of building DevOpsGPT on InternLM. My WeChat is mzm312; I hope we can connect for a deeper exchange.

Cannot log in from the front end

The login page does not work (screenshots omitted). The front end's login request returns 200, the back end responds with 304, and the UI also fails to render properly.

How can I fix this? Any pointers would be much appreciated!

When I click "adjust code" it wipes the existing code

When I click "adjust code" it wipes the existing code; it should submit the existing code instead.

Also, there should be an option to auto-complete unimplemented TODO annotations. A lot of code is left incomplete, and I currently have to copy-paste each TODO annotation into the adjust-code box and submit it to get the code.

Stuck on "Thinking"?

I realized I had to keep the initial description really short, and got past that problem, but now partway through it just freezes and says "Thinking".

What am I doing wrong? Please tell me what I can do to make this easier to debug — is there a log file somewhere I should send?

Comparative advantage and unique features of DevOpsGPT over MetaGPT

Hello,

I came across the DevOpsGPT project recently, and I must say, I am intrigued by the scope and potential of your solution. I have observed its capability to automate software development by leveraging Language Models in concert with DevOps tools to convert language-based requirements into functional software. This unique approach can significantly enhance development efficiency and reduce communication overheads.

However, I also came across a similar project named MetaGPT, which emphasizes multiple GPT roles to tackle complex tasks in software development. It has the ability to take single line requirements as inputs and output user stories, competitive analysis, requirements, data structures, APIs, documents etc., essentially providing all processes of a software company along with carefully orchestrated SOPs.

I am thus interested in understanding what competitive advantage does DevOpsGPT have over MetaGPT? Specifically, what are the core differences and unique selling propositions that make DevOpsGPT stand out? Also, how does DevOpsGPT cope with complex tasks compared to the multi-role approach of MetaGPT?

Any insights into this comparison would help us to make a more informed decision about which AI-infused DevOps solution would better suit our requirements.

Thank you for your time!

Custom LLMs & APIs Endpoints

Will it support local LLMs such as LLaMA and RWKV through local APIs like the oobabooga Text Generation WebUI?
This would solve the problems of insufficient OpenAI credits, limited tokens, and countries where ChatGPT access is still unavailable.

Starting the project fails with ImportError: cannot import name 'EVENT_TYPE_OPENED' from 'watchdog.events'

Start command: python3.11 backend/run.py
Starting the project produces the following error:

WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.

 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8081
 * Running on http://192.168.101.173:8081
Press CTRL+C to quit
Traceback (most recent call last):
  File "/Users/xxl/Documents/app/DevOpsGPT/backend/run.py", line 51, in <module>
    app.run(host=BACKEND_HOST, port=BACKEND_PORT, debug=BACKEND_DEBUG)
  File "/Users/xxl/anaconda3/lib/python3.11/site-packages/flask/app.py", line 889, in run
    run_simple(t.cast(str, host), port, self, **options)
  File "/Users/xxl/anaconda3/lib/python3.11/site-packages/werkzeug/serving.py", line 1097, in run_simple
    run_with_reloader(
  File "/Users/xxl/anaconda3/lib/python3.11/site-packages/werkzeug/_reloader.py", line 440, in run_with_reloader
    reloader = reloader_loops[reloader_type](
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xxl/anaconda3/lib/python3.11/site-packages/werkzeug/_reloader.py", line 315, in __init__
    from watchdog.events import EVENT_TYPE_OPENED
ImportError: cannot import name 'EVENT_TYPE_OPENED' from 'watchdog.events' (/Users/xxl/anaconda3/lib/python3.11/site-packages/watchdog/events.py)

Failed to run the service

Running the startup script fails, and running the Python program directly also fails (screenshots omitted).
Note: the same API key works in other programs.

Docker deploy

I noticed the image appears to use a China-mainland PyPI mirror.
Could it be switched to an overseas index, or made configurable for either region?

Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f091fb04400>, 'Connection to pypi.tuna.tsinghua.edu.cn timed out. (connect timeout=15)')': /simple/flask-sqlalchemy/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.tuna.tsinghua.edu.cn', port=443): Read timed out. (read timeout=15)")': /simple/flask-sqlalchemy/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.tuna.tsinghua.edu.cn', port=443): Read timed out. (read timeout=15)")': /simple/flask-sqlalchemy/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.tuna.tsinghua.edu.cn', port=443): Read timed out. (read timeout=15)")': /simple/flask-sqlalchemy/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.tuna.tsinghua.edu.cn', port=443): Read timed out. (read timeout=15)")': /simple/flask-sqlalchemy/
ERROR: Could not find a version that satisfies the requirement flask-sqlalchemy (from versions: none)
ERROR: No matching distribution found for flask-sqlalchemy
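One workaround (an assumption about the image's setup, not a verified fix) is to override the pip index at install time, pointing it at the public PyPI endpoint instead of the unreachable mirror:

```shell
# Override the mirror baked into the image; the index URL shown is the
# standard public PyPI endpoint. Run inside the container or a derived image.
pip install --index-url https://pypi.org/simple flask-sqlalchemy
```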

Use with a local model

What would it take to modify for use with gpt4all or use with another local model method?

What technical modification suggestions would you have?

Expecting value: line 1 column 2 (char 1) — the backend service returned an exception; please contact the administrator to check the terminal service logs and the browser console errors.

"question": "结果是否需要可视化展示?",
"reasoning": "可视化需要前端配合开发",
"answer_sample": "不需要可视化"

}
]

Please review the questions above; if anything needs to be added or changed, please point it out. Thank you very much!
Traceback (most recent call last):
File "F:\oyworld\GPT\DevOpsGPT\backend\app\controllers\common.py", line 10, in decorated_function
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "F:\oyworld\GPT\DevOpsGPT\backend\app\controllers\step_requirement.py", line 23, in clarify
msg, success = clarifyRequirement(userPrompt, globalContext, appArchitecture)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\oyworld\GPT\DevOpsGPT\backend\app\pkgs\prompt\prompt.py", line 19, in clarifyRequirement
return obj.clarifyRequirement(userPrompt, globalContext, appArchitecture)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\oyworld\GPT\DevOpsGPT\backend\app\pkgs\prompt\requirement_basic.py", line 80, in clarifyRequirement
return json.loads(message), success
^^^^^^^^^^^^^^^^^^^
File "C:\Users\W\anaconda3\envs\eng\Lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\W\anaconda3\envs\eng\Lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\W\anaconda3\envs\eng\Lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 2 (char 1)

Bug when editing the application list

newService = ApplicationService.create_service(appID, service["service_name"], service["service_git_path"], service["service_workflow"], service["service_role"], service["service_language"], service["service_framework"], service["service_database"], service["service_api_type"], service["service_api_location"], service["service_container_name"], service["service_container_group"], service["service_region"], service["service_public_ip"], service["service_security_group"], service["service_cd_subnet"], service["service_struct_cache"])
KeyError: 'service_name'

Editing ends up creating a new service, but I only wanted to modify the existing one.
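The KeyError suggests the edit form posts a payload that lacks some fields the create path expects. A generic defensive pattern (illustrative only; the real fix belongs in the project's controller) is to read optional fields with dict.get so missing keys fall back to existing or default values instead of crashing the request:

```python
def service_params(service, defaults):
    """Merge a possibly-partial payload over known defaults.

    Keys missing from `service` (like 'service_name' in the KeyError above)
    keep their previous/default values rather than raising KeyError.
    """
    return {key: service.get(key, fallback) for key, fallback in defaults.items()}

# A partial edit payload no longer crashes: absent keys keep their defaults.
defaults = {"service_name": "unnamed", "service_language": "python"}
params = service_params({"service_language": "go"}, defaults)
```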

Help with Backend host config

Is any one able to help me with my env.yaml file? I keep getting an error saying "failed to read backend config......" What am I missing?

C:\Users\Todd\Desktop\DevOpsGPT\backend>python run.py
Error: Failed to read the BACKEND_HOST configuration, please 【copy a new env.yaml from env.yaml.tpl】 and reconfigure it according to the documentation.
env - Copy.txt
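This error usually means env.yaml is missing or lacks required keys. A hedged sketch of validating an already-parsed config dict and reporting exactly which keys are absent — only BACKEND_HOST is grounded in the error message above; any other required keys must come from env.yaml.tpl:

```python
# BACKEND_HOST appears in the error message; extend this tuple with
# whatever keys env.yaml.tpl actually requires (assumption: there are more).
REQUIRED_KEYS = ("BACKEND_HOST",)

def missing_keys(config):
    """Return the required keys that are absent or empty in the parsed env.yaml."""
    return [k for k in REQUIRED_KEYS if config.get(k) in (None, "")]

# A complete config reports nothing missing; an empty one reports the gap.
ok = missing_keys({"BACKEND_HOST": "0.0.0.0"})
bad = missing_keys({})
```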

Run.bat issue on Windows 11

When I try to run the code for docker install in conda env I get this error...

(devopsgpt) PS J:\GPT.MODELS\DevOpsGPT> docker run -it
"docker run" requires at least 1 argument.
See 'docker run --help'.

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Create and run a new container from an image


When I run run.bat from folder I get ...

    1 file(s) copied.

Installing missing packages...
'C:\Program' is not recognized as an internal or external command,
operable program or batch file.
The system cannot find the file C:\Program.
Environment variable HTTP_PROXY not defined
Environment variable HTTPS_PROXY not defined
Environment variable ALL_PROXY not defined
'http_proxy' is not recognized as an internal or external command,
operable program or batch file.
'https_proxy' is not recognized as an internal or external command,
operable program or batch file.
'all_proxy' is not recognized as an internal or external command,
operable program or batch file.
000
Environment variable HTTP_PROXY not defined
Environment variable HTTPS_PROXY not defined
Environment variable ALL_PROXY not defined
'http_proxy' is not recognized as an internal or external command,
operable program or batch file.
'https_proxy' is not recognized as an internal or external command,
operable program or batch file.
'all_proxy' is not recognized as an internal or external command,
operable program or batch file.
000


…and so on, repeating.

I did configure the env file with the API keys for OpenAI and GitHub.

Thx in advance

Comparative Analysis of YiVal and DevOpsGPT: Unique Selling Points and Competitive Edges

Hello Contributors and Community,

I recently found that there are two very interesting projects, DevOpsGPT and YiVal, both of which are based on AI and specifically large language models, but seemingly aiming at two different aspects in the AI-ML deployment process. I wanted to open a discussion to understand the unique competitive advantages and features that YiVal holds in comparison to DevOpsGPT.

At the outset, let me provide a brief understanding of the two projects:

  • DevOpsGPT: An AI-Driven Software Development Automation Solution that combines Large Language Models with DevOps tools to convert natural language requirements into working software, thereby enhancing development efficiency and reducing communication costs. It generates code, performs validation, and can analyze existing project information and tasks.

  • YiVal: A GenAI-Ops framework aimed at iterative tuning of generative AI model metadata, params, prompts, and retrieval configurations, using test-dataset generation, evaluation algorithms, and improvement strategies. It streamlines prompt development, supports multimedia and multi-model input, and offers automated prompt generation and configuration of prompt-related artifacts.

Looking at both of these, it seems they provide unique features to cater to different needs in the AI development and deployment pipeline. However, I'm curious to further understand the unique selling points and specific competitive advantages of YiVal.

Here are a few questions that might be worth discussing:

  1. DevOpsGPT seems to convert natural language requirements into working software while YiVal seems focused on fine-tuning Generative AI with test dataset generation and improvement strategies. In what ways does YiVal outperform DevOpsGPT in facilitating a more robust and efficient machine learning model iteration and training process?

  2. One of the highlighted features of YiVal is its focus on Human(RLHF) and algorithm-based improvers along with the inclusion of a detailed web view. Can you provide a bit more insight into how these features are leveraged in YiVal and how they compare to DevOpsGPT's project analysis and code generation features?

  3. DevOpsGPT offers a feature to analyze existing projects and tasks, whereas YiVal emphasizes streamlining prompt development and multimedia/multimodel input. How does YiVal handle integration with existing models and datasets? Is there any scope for reverse-engineering or retraining established models with YiVal?

  4. In terms of infrastructure, how does YiVal compare to DevOpsGPT? Do they need similar resources for deployment and operation, or does one offer more efficiency?

  5. Lastly, how is the user experience on YiVal compared to DevOpsGPT? I see YiVal boasts a "non-code" experience for building Gen-AI applications, but how does this hold up against DevOpsGPT's efficient and understandable automated development process?

I'd appreciate any insights or thoughts on these points. Looking forward to stimulating discussions!

I'm an architect with 15 years of experience, and I believe this project can succeed.

Over the past few months I have moved more than 80% of my development to AI-assisted workflows, but none of the existing automation products deliver what I envision — yours comes very close. Software development inevitably involves continually revising module requirements midway.

I think programmers can largely be replaced, but architects are much harder to replace, because requirements are complex and only a human can own them.

I would also like to join your open-source project.
