
limaoyi1 / auto-ppt

466 stars · 4 watchers · 85 forks · 50.96 MB

Auto-generate PPTX files with GPT-3.5; free to use online.

License: MIT License

Python 97.76% HTML 1.35% JavaScript 0.89%
ai aigc generate gpt gpt-3 gpt3 gpt3-turbo ppt pptx gpt-ppt


auto-ppt's Issues

Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 400 Bad Request'))).

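The ProxyError above means the HTTPS CONNECT handshake to a local proxy failed ("Tunnel connection failed: 400 Bad Request") before any request reached api.openai.com. A minimal sketch of setting the proxy environment variables that the requests stack used by openai 0.x honors; the address and port below are illustrative placeholders, not values from the repo:

```python
import os

# Point both variables at a proxy that is actually listening; the
# address/port here are illustrative placeholders, not project defaults.
os.environ["HTTP_PROXY"] = "http://127.0.0.1:7890"
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:7890"

# If no proxy is needed at all, clearing the variables avoids the
# "Tunnel connection failed" path entirely:
# for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"):
#     os.environ.pop(var, None)
```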

Error when running application.py, although the API key prints correctly

H:\anaconda\envs\slidev\lib\site-packages\langchain\llms\openai.py:173: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: from langchain.chat_models import ChatOpenAI
warnings.warn(
H:\anaconda\envs\slidev\lib\site-packages\langchain\llms\openai.py:753: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: from langchain.chat_models import ChatOpenAI
warnings.warn(
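The two warnings mean the app initializes langchain's completion-style `OpenAI` LLM with a chat model name; the warning itself prescribes `ChatOpenAI` instead. The helper below is a hypothetical illustration (not from the repo) of routing by model name, assuming a langchain ~0.0.3xx install:

```python
def needs_chat_class(model_name: str) -> bool:
    """Chat-only families (gpt-3.5-turbo, gpt-4) must use ChatOpenAI,
    not the completion-style OpenAI class the warning complains about."""
    return model_name.startswith(("gpt-3.5-turbo", "gpt-4"))

# Per the warning text, the chat-model initialization looks like:
# from langchain.chat_models import ChatOpenAI
# llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)
```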
127.0.0.1 - - [13/Dec/2023 10:04:59] "POST /generate_title HTTP/1.1" 500 -
Traceback (most recent call last):
File "H:\anaconda\envs\slidev\lib\site-packages\redis\connection.py", line 707, in connect
sock = self.retry.call_with_retry(
File "H:\anaconda\envs\slidev\lib\site-packages\redis\retry.py", line 46, in call_with_retry
return do()
File "H:\anaconda\envs\slidev\lib\site-packages\redis\connection.py", line 708, in <lambda>
lambda: self._connect(), lambda error: self.disconnect(error)
File "H:\anaconda\envs\slidev\lib\site-packages\redis\connection.py", line 1006, in _connect
raise err
File "H:\anaconda\envs\slidev\lib\site-packages\redis\connection.py", line 994, in _connect
sock.connect(socket_address)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "H:\anaconda\envs\slidev\lib\site-packages\flask\app.py", line 2213, in __call__
return self.wsgi_app(environ, start_response)
File "H:\anaconda\envs\slidev\lib\site-packages\flask\app.py", line 2193, in wsgi_app
response = self.handle_exception(e)
File "H:\anaconda\envs\slidev\lib\site-packages\flask_cors\extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "H:\anaconda\envs\slidev\lib\site-packages\flask\app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "H:\anaconda\envs\slidev\lib\site-packages\flask\app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "H:\anaconda\envs\slidev\lib\site-packages\flask_cors\extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "H:\anaconda\envs\slidev\lib\site-packages\flask\app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "H:\anaconda\envs\slidev\lib\site-packages\flask\app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "H:\aifeng\data\PPT\Auto-PPT\application.py", line 70, in stream1
return Response(gen_title_v2.predict_title_v2(form, role, title, topic_num),
File "H:\aifeng\data\PPT\Auto-PPT\generation\gen_ppt_outline.py", line 37, in predict_title_v2
return self.GptChain.predict(text)
File "H:\aifeng\data\PPT\Auto-PPT\chain\gpt_memory.py", line 49, in predict
return self.llm_chain.predict(human_input=question)
File "H:\anaconda\envs\slidev\lib\site-packages\langchain\chains\llm.py", line 252, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "H:\anaconda\envs\slidev\lib\site-packages\langchain\chains\base.py", line 220, in __call__
inputs = self.prep_inputs(inputs)
File "H:\anaconda\envs\slidev\lib\site-packages\langchain\chains\base.py", line 372, in prep_inputs
external_context = self.memory.load_memory_variables(inputs)
File "H:\anaconda\envs\slidev\lib\site-packages\langchain\memory\buffer.py", line 39, in load_memory_variables
return {self.memory_key: self.buffer}
File "H:\anaconda\envs\slidev\lib\site-packages\langchain\memory\buffer.py", line 24, in buffer
self.chat_memory.messages,
File "H:\anaconda\envs\slidev\lib\site-packages\langchain\memory\chat_message_histories\redis.py", line 48, in messages
_items = self.redis_client.lrange(self.key, 0, -1)
File "H:\anaconda\envs\slidev\lib\site-packages\redis\commands\core.py", line 2741, in lrange
return self.execute_command("LRANGE", name, start, end)
File "H:\anaconda\envs\slidev\lib\site-packages\redis\client.py", line 1266, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "H:\anaconda\envs\slidev\lib\site-packages\redis\connection.py", line 1461, in get_connection
connection.connect()
File "H:\anaconda\envs\slidev\lib\site-packages\redis\connection.py", line 713, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 10061 connecting to 127.0.0.1:6379. No connection could be made because the target machine actively refused it.
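Error 10061 means nothing is listening on 127.0.0.1:6379: the Redis server that the project's `RedisChatMessageHistory` memory expects is simply not running. A stdlib-only reachability check (a sketch, not project code) makes the failure obvious before Flask even starts:

```python
import socket

def redis_reachable(host: str = "127.0.0.1", port: int = 6379,
                    timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeouts, unreachable host
        return False

if not redis_reachable():
    print("Redis is not running on 127.0.0.1:6379 - start it first "
          "(e.g. redis-server, or a container publishing port 6379).")
```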

After switching to a proxy endpoint, the API call fails with an error


Error message:
`> Entering new LLMChain chain...
Prompt after formatting:
You are a chatbot having a conversation with a human.

Human:
            I want you to act as a ```teacher``` and generate 3 ```test``` PPT titles on the theme ```work report```; they should grab the audience's attention.
            Here are some requirements for the response:
            1. [The response should be a list of 3 items separated by "

" (e.g.: banana
weather
explanation)]

Chatbot:

127.0.0.1 - - [23/May/2024 19:34:11] "POST /generate_title HTTP/1.1" 500 -
Traceback (most recent call last):
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\flask\app.py", line 2213, in __call__
return self.wsgi_app(environ, start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\flask\app.py", line 2193, in wsgi_app
response = self.handle_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\flask_cors\extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\flask\app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\flask\app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\flask_cors\extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\flask\app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\flask\app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\application.py", line 70, in stream1
return Response(gen_title_v2.predict_title_v2(form, role, title, topic_num),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\generation\gen_ppt_outline.py", line 37, in predict_title_v2
return self.GptChain.predict(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\chain\gpt_memory.py", line 49, in predict
return self.llm_chain.predict(human_input=question)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\langchain\chains\llm.py", line 252, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\langchain\chains\base.py", line 243, in __call__
raise e
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\langchain\chains\base.py", line 237, in __call__
self._call(inputs, run_manager=run_manager)
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\langchain\chains\llm.py", line 92, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\langchain\chains\llm.py", line 102, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\langchain\llms\base.py", line 186, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\langchain\llms\base.py", line 279, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\langchain\llms\base.py", line 223, in _generate_helper
raise e
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\langchain\llms\base.py", line 210, in _generate_helper
self.generate(
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\langchain\llms\openai.py", line 795, in generate
for stream_resp in completion_with_retry(self, messages=messages, **params):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\langchain\llms\openai.py", line 90, in completion_with_retry
return _completion_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\langchain\llms\openai.py", line 88, in _completion_with_retry
return llm.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\openai\api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "D:\study_code\Auto-PPT-latest_branch\venv\Lib\site-packages\openai\api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.AuthenticationError: Incorrect API key provided: sk-GtWPG
*********************nP98. You can find your API key at https://platform.openai.com/account/api-keys.`
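An AuthenticationError with a key that looks right usually means the process loaded a different key than expected (stale environment variable, wrong config file). A small hypothetical helper (not from the repo) to log which key is actually in use without leaking it:

```python
import os

def mask_key(key: str) -> str:
    """Return a safe-to-log form of an API key, or flag a bad one."""
    if not key.startswith("sk-") or len(key) < 12:
        return "missing or malformed"
    return f"{key[:6]}...{key[-4:]}"

print("OPENAI_API_KEY in this process:",
      mask_key(os.environ.get("OPENAI_API_KEY", "")))
```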

raise APIRemovedInV1(symbol=self._symbol) openai.lib._old_api.APIRemovedInV1:

Traceback (most recent call last):
File "E:\workplace\Auto-PPT\venv\lib\site-packages\flask\app.py", line 2213, in __call__
return self.wsgi_app(environ, start_response)
File "E:\workplace\Auto-PPT\venv\lib\site-packages\flask\app.py", line 2193, in wsgi_app
response = self.handle_exception(e)
File "E:\workplace\Auto-PPT\venv\lib\site-packages\flask_cors\extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "E:\workplace\Auto-PPT\venv\lib\site-packages\flask\app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "E:\workplace\Auto-PPT\venv\lib\site-packages\flask\app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "E:\workplace\Auto-PPT\venv\lib\site-packages\flask_cors\extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "E:\workplace\Auto-PPT\venv\lib\site-packages\flask\app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "E:\workplace\Auto-PPT\venv\lib\site-packages\flask\app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "E:\workplace\Auto-PPT\application.py", line 73, in stream1
return Response(gen_title_v2.predict_title_v2(form, role, title, topic_num),
File "E:\workplace\Auto-PPT\generation\gen_ppt_outline.py", line 37, in predict_title_v2
return self.GptChain.predict(text)
File "E:\workplace\Auto-PPT\chain\gpt_memory.py", line 51, in predict
return self.llm_chain.predict(human_input=question)
File "E:\workplace\Auto-PPT\venv\lib\site-packages\langchain\chains\llm.py", line 298, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "E:\workplace\Auto-PPT\venv\lib\site-packages\langchain\chains\base.py", line 310, in __call__
raise e
File "E:\workplace\Auto-PPT\venv\lib\site-packages\langchain\chains\base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "E:\workplace\Auto-PPT\venv\lib\site-packages\langchain\chains\llm.py", line 108, in _call
response = self.generate([inputs], run_manager=run_manager)
File "E:\workplace\Auto-PPT\venv\lib\site-packages\langchain\chains\llm.py", line 120, in generate
return self.llm.generate_prompt(
File "E:\workplace\Auto-PPT\venv\lib\site-packages\langchain\llms\base.py", line 507, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "E:\workplace\Auto-PPT\venv\lib\site-packages\langchain\llms\base.py", line 656, in generate
output = self._generate_helper(
File "E:\workplace\Auto-PPT\venv\lib\site-packages\langchain\llms\base.py", line 544, in _generate_helper
raise e
File "E:\workplace\Auto-PPT\venv\lib\site-packages\langchain\llms\base.py", line 531, in _generate_helper
self._generate(
File "E:\workplace\Auto-PPT\venv\lib\site-packages\langchain\llms\openai.py", line 1117, in _generate
for chunk in self._stream(prompts[0], stop, run_manager, **kwargs):
File "E:\workplace\Auto-PPT\venv\lib\site-packages\langchain\llms\openai.py", line 1077, in _stream
for stream_resp in completion_with_retry(
File "E:\workplace\Auto-PPT\venv\lib\site-packages\langchain\llms\openai.py", line 114, in completion_with_retry
return llm.client.create(**kwargs)
File "E:\workplace\Auto-PPT\venv\lib\site-packages\openai\_utils\_proxy.py", line 22, in __getattr__
return getattr(self.__get_proxied__(), attr)
File "E:\workplace\Auto-PPT\venv\lib\site-packages\openai\_utils\_proxy.py", line 43, in __get_proxied__
return self.__load__()
File "E:\workplace\Auto-PPT\venv\lib\site-packages\openai\lib\_old_api.py", line 33, in __load__
raise APIRemovedInV1(symbol=self._symbol)
openai.lib._old_api.APIRemovedInV1:

You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`.

A detailed migration guide is available here: openai/openai-python#742

openai : 1.3.0
langchain : 0.0.336
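The APIRemovedInV1 message lists the two exits: pin `pip install openai==0.28` to keep the removed `openai.ChatCompletion` interface, or migrate to the 1.x client (`openai migrate` automates the rewrite). A sketch under the 1.x interface; the payload-builder helper is hypothetical, while the commented client call mirrors what the migration produces:

```python
def chat_payload(role: str, topic: str, n: int = 3) -> dict:
    """Build a chat-completions request body (illustrative helper)."""
    prompt = (f"As a {role}, generate {n} attention-grabbing "
              f"PPT titles about {topic}.")
    return {"model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}]}

# openai>=1.0 style (what `openai migrate` rewrites old calls into):
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# resp = client.chat.completions.create(**chat_payload("teacher", "work reports"))
# print(resp.choices[0].message.content)
```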

Server error, please contact the administrator! (服务器异常,请联系管理员!)

In step 1, after adding a description and clicking Next, the message "服务器异常,请联系管理员!" (Server error, please contact the administrator!) appears.

root@Auto-PPT:/Python-3.9.9/Auto-PPT# python application.py

  • Serving Flask app 'application'
  • Debug mode: on
    WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
  • Running on all addresses (0.0.0.0)
  • Running on http://127.0.0.1:5000
  • Running on http://192.168.1.224:5000
    Press CTRL+C to quit
  • Restarting with stat
  • Debugger is active!
  • Debugger PIN: 441-904-527
    192.168.2.100 - - [11/Dec/2023 10:19:34] "GET / HTTP/1.1" 200 -
    192.168.2.100 - - [11/Dec/2023 10:19:34] "GET /static/css/main.05987304.css HTTP/1.1" 304 -
    192.168.2.100 - - [11/Dec/2023 10:19:34] "GET /static/js/runtime.1b8d8764.js HTTP/1.1" 304 -
    192.168.2.100 - - [11/Dec/2023 10:19:34] "GET /static/js/base.0ec2920c.js HTTP/1.1" 304 -
    192.168.2.100 - - [11/Dec/2023 10:19:34] "GET /static/js/antd.56ea7e9d.js HTTP/1.1" 304 -
    192.168.2.100 - - [11/Dec/2023 10:19:34] "GET /static/js/lodash.4814a24f.js HTTP/1.1" 304 -
    192.168.2.100 - - [11/Dec/2023 10:19:34] "GET /static/js/vendors.3862e664.js HTTP/1.1" 304 -
    192.168.2.100 - - [11/Dec/2023 10:19:34] "GET /static/js/main.aee5e3a4.js HTTP/1.1" 304 -
