tiangolo / meinheld-gunicorn-flask-docker
Docker image with Meinheld and Gunicorn for Flask applications in Python.
License: MIT License
While running a Plotly Dash Flask app, the app only loads in the browser if I don't specify a worker type. The entrypoint.sh script sets it to egg:meinheld#gunicorn_worker. With this option the app stalls on 'Loading ...' in the browser. When running gunicorn inside the container by hand, without the worker type option, it loads just fine.
What is the default worker type, and how do the two differ?
I have this in my main.py:

from flask_script import Server, Manager

then:

app.manager = Manager(app)
app.manager.add_command(
    'runserver',
    Server(
        host=app.config['FLASK_BIND'],
        port=app.config['FLASK_PORT']
    )
)
# import csv files
app.manager.add_command('import', scripts.Import())
Locally, the command is python main.py runserver. How can I pass the runserver part of the command to my app in the container to run it?
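One way this is often handled (a sketch based on standard Docker semantics, not on anything this image documents specifically; the image name is a placeholder) is to override the container's command, since arguments given after the image name replace the default start script while the entrypoint still runs first:

```
# Run the Flask-Script manager command instead of the image's default
# gunicorn start-up (image name and path are assumptions):
docker run -it --rm -p 80:80 myimage python /app/main.py runserver
```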
I have a Flask app with this structure:
app
├── checkpoints
├── data
│ └── raw
├── font
├── logs
├── __pycache__
├── static
│ └── img
│ └── play_button
├── styles
└── templates
When running locally, the image files in /static/img/ show up fine (e.g. http://localhost/static/img/image.jpg).
When I put an nginx proxy in front of it (i.e. https://somedomain/), the static files are 404'ing (e.g. https://somedomain/static/img/image.jpg).
Ideas?
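The proxy config isn't shown, but a common cause (an assumption on my part) is a location /static/ block in nginx that tries to serve the files from the proxy host instead of forwarding them to the container. A minimal sketch that proxies everything, statics included, to the Flask container:

```nginx
location / {
    # "flask_app" is a hypothetical upstream/container name
    proxy_pass http://flask_app:80;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```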
In the simple setup below I am expecting gunicorn to process at least 4 requests in parallel, but I get just 2.
In the configuration below we can see 2 workers with 2 threads.
In addition, we tried configuring 1 worker with 2 threads, and then we got just 1 request being processed at any given moment.
So it looks like gunicorn + meinheld is able to process just 1 request per worker, which is very inefficient.
We are sending 53 requests in parallel in each setup.
simple_app.py:

import sys
import json
import time

from flask import Flask, request

app = Flask(__name__)


@app.route("/", methods=['POST'])
def hello():
    headers = request.headers
    data = json.loads(request.data)
    version = "{}.{}".format(sys.version_info.major, sys.version_info.minor)
    message = "Hello World from Flask in a Docker container running Python {} with Meinheld and Gunicorn (default): {}:{}".format(
        version, headers, json.dumps(data)
    )
    print(message)
    time.sleep(15)
    return message
gunicorn_dev_conf.py:

import json
import multiprocessing
import os

workers_per_core_str = os.getenv("WORKERS_PER_CORE", "2")
web_concurrency_str = os.getenv("WEB_CONCURRENCY", None)
host = os.getenv("HOST", "0.0.0.0")
port = os.getenv("PORT", "80")
bind_env = os.getenv("BIND", None)
use_loglevel = os.getenv("LOG_LEVEL", "debug")
if bind_env:
    use_bind = bind_env
else:
    use_bind = "{host}:{port}".format(host=host, port=port)

cores = multiprocessing.cpu_count()
workers_per_core = float(workers_per_core_str)
default_web_concurrency = workers_per_core * cores
if web_concurrency_str:
    web_concurrency = int(web_concurrency_str)
    assert web_concurrency > 0
else:
    web_concurrency = int(default_web_concurrency)

# Gunicorn config variables
loglevel = use_loglevel
workers = web_concurrency
workers = 2  # override for this test
reuse_port = True
bind = use_bind
keepalive = 120
errorlog = "-"
threads = 2
accesslog = "-"
preload_app = False
print_config = True
# max_requests = 1900
# max_requests_jitter = 50
sendfile = True

# For debugging and testing
log_data = {
    "loglevel": loglevel,
    "workers": workers,
    "bind": bind,
    # Additional, non-gunicorn variables
    "workers_per_core": workers_per_core,
    "host": host,
    "port": port,
}
print(json.dumps(log_data), flush=True)
docker run command:
docker run --rm --name demo_dev --privileged -it -p 8080:80 -v ./gunicorn_dev_conf.py:/app/gunicorn_conf.py:rw -v ./simple_app.py:/app/main.py tiangolo/meinheld-gunicorn-flask:python3.8-alpine3.11
Parallel curl command:
for i in {1..53}
do
echo "Welcome num$i; $(date)"
curl "http://localhost:8080/" -d @simpleJson.json -H 'content-type: application/json' -o dir_$i && echo "num$i; $(date)" &
done
simpleJson.json:
{
    "field1": [
        {
            "field2": "09-090"
        }
    ]
}
output:
10.0.2.100 - - [05/Aug/2021:12:16:20 +0000] "POST / HTTP/1.1" 200 251 "-" "curl/7.61.1"
Hello World from Flask in a Docker container running Python 3.8 with Meinheld and Gunicorn (default): Host: localhost:8080
User-Agent: curl/7.61.1
Accept: */*
Content-Type: application/json
Content-Length: 70
:{"field1": [{"field2": "09-090"}]}
10.0.2.100 - - [05/Aug/2021:12:16:20 +0000] "POST / HTTP/1.1" 200 251 "-" "curl/7.61.1"
Hello World from Flask in a Docker container running Python 3.8 with Meinheld and Gunicorn (default): Host: localhost:8080
User-Agent: curl/7.61.1
Accept: */*
Content-Type: application/json
Content-Length: 70
:{"field1": [{"field2": "09-090"}]}
>> WAIT 15 seconds...
10.0.2.100 - - [05/Aug/2021:12:16:35 +0000] "POST / HTTP/1.1" 200 251 "-" "curl/7.61.1"
Hello World from Flask in a Docker container running Python 3.8 with Meinheld and Gunicorn (default): Host: localhost:8080
User-Agent: curl/7.61.1
Accept: */*
Content-Type: application/json
Content-Length: 70
:{"field1": [{"field2": "09-090"}]}
10.0.2.100 - - [05/Aug/2021:12:16:35 +0000] "POST / HTTP/1.1" 200 251 "-" "curl/7.61.1"
Hello World from Flask in a Docker container running Python 3.8 with Meinheld and Gunicorn (default): Host: localhost:8080
User-Agent: curl/7.61.1
Accept: */*
Content-Type: application/json
Content-Length: 70
:{"field1": [{"field2": "09-090"}]}
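One reading of the output above (my interpretation, not a confirmed diagnosis) is that the blocking time.sleep(15) occupies whatever is executing it, so each worker completes only one request per sleep interval no matter how many connections the async worker has accepted. The effect is easy to reproduce outside Docker; the sketch below (with a scaled-down sleep) also shows that offloading the blocking call to threads restores overlap:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def blocking_handler(_=None):
    # Stand-in for the request handler's time.sleep(15), scaled down
    time.sleep(0.2)
    return "done"


# Serial execution: 4 "requests" take roughly 4 * 0.2 s
start = time.monotonic()
for _ in range(4):
    blocking_handler()
serial = time.monotonic() - start

# Threaded execution: the same 4 "requests" overlap
start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(blocking_handler, range(4)))
threaded = time.monotonic() - start

print(f"serial={serial:.2f}s threaded={threaded:.2f}s")
```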
Hello! First off, let me thank you for this great project; it's a wonderful starting point for those of us using the Flask framework.
I'm somewhat new to building web applications with Flask, and I am close to finishing one. However, I've been struggling to figure out the cause of what seems like a thread lock, GIL contention, or worker starvation of sorts. These are my symptoms:
The first time I load my webpage, it immediately starts loading the relevant content, which is some static HTML plus dynamic HTML rendered from AJAX calls; so when I log in, I can visibly see the loading bars telling me that AJAX has made the calls to my Python routes to load data.
However, I notice that if I have more than one session active (say, 2 tabs open to the same page, or more than one user loading the page), the subsequent loads stall. When I log in, instead of the page immediately loading and starting the AJAX calls, it seems frozen (the DOM does not load), and I either have to wait or hard-refresh the page several times for it to load correctly and start the AJAX calls.
I've spent quite a bit of time trying to figure it out, and these are my observations:
When running the Docker image on my localhost, opening multiple sessions does not seem to stall the application from loading.
With the same Flask application, I attempted a deployment using your synchronous uwsgi image, and this issue did not occur; each time I loaded the page it immediately rendered and my AJAX calls proceeded. (I have limitations on why I cannot use that setup; I really built it as a way to test this issue.)
I'm not sure if this is relevant, but in the Firefox network analysis tab the guilty culprit that stalled the page from rendering was always the same (picture below). Notice it took 2 minutes to load, whereas with uwsgi everything loaded immediately.
[10] [ERROR] ConnectionResetError: [Errno 104] Connection reset by peer
I attempted to debug it from a browser perspective but came up short, and was hoping you could provide some insight into the issue. If you need any additional information from me, please let me know. Thank you!
I have a long-running process that loads some machine-learning models and serves them using Flask; however, this Docker image is killing my worker process before the models are fully loaded, and it keeps restarting the application.
Is there any way to configure the type of worker, or how Gunicorn decides that non-responding workers are no longer active?
I see similar issues raised around the web; people talk about increasing or changing the default --timeout, but I don't see where it's possible to change that.
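For what it's worth, the worker heartbeat timeout is a plain Gunicorn setting, so if a custom gunicorn_conf.py is mounted over the image's default (the mechanism other issues on this page use), a fragment like the following should raise it; the values here are illustrative, not recommendations:

```python
# Fragment for a custom gunicorn_conf.py: give slow-starting workers
# more headroom before the master kills and restarts them.
timeout = 300           # seconds of worker silence before a kill (default is 30)
graceful_timeout = 300  # grace period given to workers on restart
```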
Thanks for your efforts, which made it easy for us to deploy Flask code. However, when I tried to configure SSL with Gunicorn's certfile and keyfile variables, I could not get it to work. Please provide some help, thanks.
Is it possible to add Let's Encrypt certificates on startup, with periodic renewal? Let's say I want to run an image inheriting from this one in production; it would be mandatory to use TLS under the hood.
I have switched from using "tiangolo / uwsgi-nginx-flask-docker" to now using "tiangolo / meinheld-gunicorn-flask-docker"
I have noticed that previously my "print" logs would display immediately in the log file. I use these logs for troubleshooting and validation within the app to make sure the functions have performed as expected. e.g. "Updating score" in the example from my old docker.
uwsgi-nginx-flask-docker Example:
[pid: 22|app: 0|req: 8/14] 192.168.0.229 () {48 vars in 797 bytes} [Wed Sep 1 19:56:16 2021] GET / => generated 14303 bytes in 1213 msecs (HTTP/1.1 200) 2 headers in 82 bytes (1 switches on core 0) 192.168.0.229 - - [01/Sep/2021:19:56:17 +0100] "GET / HTTP/1.1" 200 14303 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 14_7 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/92.0.4515.90 Mobile/15E148 Safari/604.1" "192.168.0.1" [pid: 21|app: 0|req: 7/15] 192.168.0.229 () {50 vars in 860 bytes} [Wed Sep 1 19:56:19 2021] GET /score => generated 7178 bytes in 578 msecs (HTTP/1.1 200) 2 headers in 81 bytes (1 switches on core 0) 192.168.0.229 - - [01/Sep/2021:19:56:20 +0100] "GET /score HTTP/1.1" 200 7178 "" "Mozilla/5.0 (iPhone; CPU iPhone OS 14_7 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/92.0.4515.90 Mobile/15E148 Safari/604.1" "192.168.0.1" Updating score Results!B22
Now, using meinheld-gunicorn-flask-docker, I don't get any of the HTTP logging or my print messages in the live log. Only when I reboot do my print messages all come in at once, but still no HTTP logs. See in the example below that the message "Wiping tally!" only appeared AFTER I rebooted the container.
Is it something to do with log levels? If so, how do I set the log level? Or could it be that I'm running async and it's blocking the logs?
meinheld-gunicorn-flask-docker Example:
[2021-09-04 19:29:53 +0100] [1] [INFO] Using worker: egg:meinheld#gunicorn_worker
[2021-09-04 19:29:53 +0100] [8] [INFO] Booting worker with pid: 8
Exception in callback Cron.set_result(<_GatheringFu...ute 'send'")]>)
handle: <Handle Cron.set_result(<_GatheringFu...ute 'send'")]>)>
Traceback (most recent call last):
File "/usr/local/lib/python3.8/asyncio/events.py", line 81, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.8/site-packages/aiocron/__init__.py", line 100, in set_result
raise result
File "/app/cogs/cron.py", line 22, in cronmsg
await channel.send('Whos available to play this week?')
AttributeError: 'NoneType' object has no attribute 'send'
[2021-09-05 11:07:08 +0100] [1] [INFO] Handling signal: term
[2021-09-05 11:07:08 +0100] [8] [INFO] Worker exiting (pid: 8)
{"loglevel": "info", "workers": 1, "bind": "0.0.0.0:80", "workers_per_core": 2.0, "host": "0.0.0.0", "port": "80"}
IF branch is:pro
Git branch is:pre
Using Dev Worksheet for Get Commands
footyappdev logged in successfully
Wiping tally!
Checking for script in /app/prestart.sh
There is no script /app/prestart.sh
[2021-09-05 11:07:19 +0100] [1] [INFO] Starting gunicorn 20.0.4
[2021-09-05 11:07:19 +0100] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2021-09-05 11:07:19 +0100] [1] [INFO] Using worker: egg:meinheld#gunicorn_worker
[2021-09-05 11:07:19 +0100] [8] [INFO] Booting worker with pid: 8
It is unclear to me how to specify to gunicorn to run in HTTPS mode. I've declared the certfile and keyfile as follows in the config file, which it does seem to read.
certfile = 'app/cert.pem'
keyfile = 'app/key.pem'
but nothing changes when running the container.
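As a sketch (the paths are assumptions, and I can't confirm whether the meinheld worker actually honors Gunicorn's TLS settings; terminating TLS at a reverse proxy in front of the container is the commonly suggested alternative), the config-file form with absolute paths would be:

```python
# gunicorn_conf.py fragment: use absolute paths inside the container,
# since Gunicorn's working directory may not be where you expect.
certfile = "/app/cert.pem"
keyfile = "/app/key.pem"
```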
Hi,
I'm testing running this container on Azure App Service. For diagnostics purposes, App Services allows connecting to the containers using SSH as described:
https://docs.microsoft.com/en-us/azure/app-service/containers/configure-custom-container#enable-ssh
What would be the best way to incorporate the required SSH startup to this container?
Regards,
Stefan
Meinheld reads the Gunicorn config setting worker_connections. The default value for worker_connections is 1000. Maybe consider exposing this setting for tuning, and explain that the maximum number of concurrent requests is workers * worker_connections. For applications which access a database, understanding that you may be requesting a large number of simultaneous connections could be important! I mistakenly believed that the maximum number of concurrent requests was 2 * num_cpu_cores + 1, when it is in fact (2 * num_cpu_cores + 1) * 1000.
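The arithmetic in the comment above can be sketched as follows (the function name and the workers formula are taken from the comment itself, not from the image's actual config):

```python
def max_concurrent_requests(num_cpu_cores: int, worker_connections: int = 1000) -> int:
    """Upper bound on simultaneous requests when each async worker
    can hold up to `worker_connections` connections."""
    workers = 2 * num_cpu_cores + 1
    return workers * worker_connections


# e.g. on a 4-core machine: 9 workers * 1000 connections = 9000
print(max_concurrent_requests(4))
```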
If I add a worker-level Gunicorn hook to my gunicorn_conf.py, for example:

accesslog = '-'
loglevel = 'debug'
errorlog = '-'
capture_output = True

def post_request(worker, req, environ, resp):
    worker.log.info(req)
    worker.log.info(environ)
    worker.log.info(resp)

these logs never show up in docker logs. I know this config file is being loaded correctly, however, as it prints out my custom config on startup. How can I get these logs to go to stdout and thus to docker logs?
Hi tiangolo, this is not so much an issue as a question; I am not sure where to post it.
I had issues working with your uwsgi-nginx image. I am using flask-restplus and probably had issues with my callable not being 'app', as I have been using a class instance for that.
Anyway, I tried this gunicorn-flask image and it worked like a charm when I specified the APP_MODULE="main:app" environment variable on docker run. I have no idea why it did not work for me on uwsgi; I bumped my head against the wall...
Anyway, I have noticed this Docker image does not include nginx. Do you suggest running nginx as a separate container, or somehow embedding it into this gunicorn image?
470 vulnerabilities to be precise, of which 4 are critical and 48 high. Most of these can probably be fixed with a later version of the underlying OS base image.
It would be cool if the container checked for a requirements.txt in the app folder every time I restart the container and pip installed its contents. That would make it very easy to deploy updates to my app.
The image always exposes port 80, regardless of the PORT and BIND environment variable settings.
Steps to reproduce:
FROM tiangolo/meinheld-gunicorn-flask:python3.7-alpine3.8
ENV BIND 0.0.0.0:8080
ENV PORT 8080
ENV LOG_LEVEL debug
EXPOSE 8080:8080
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c36edb38d913 someuser/someimage:latest "/entrypoint.sh /sta…" 9 minutes ago Up 9 minutes 80/tcp, 8080/tcp stupefied_morse
It would be nice if this image supported prestart.sh, just like uwsgi-nginx-flask-docker. This would allow seamless migration between the two.
Hi, I am new to Docker. Any change made to a file in a volume is reflected inside the container, but the API response does not match the changes in the volume. It seems the code from the build process is cached.
I've tried multiple versions of FROM tiangolo/meinheld-gunicorn-flask:python3.6, including FROM tiangolo/meinheld-gunicorn-flask:python3.7 and FROM tiangolo/meinheld-gunicorn-flask:python3.6-alpine3.8, and each time I get the following after running docker run -d -p 80:80 -e MODULE_NAME="flask_app.main" myimage:
Running script /app/prestart.sh
Running inside /app/prestart.sh, you could add migrations to this file, e.g.:
#! /usr/bin/env bash
# Let the DB start
sleep 10;
# Run migrations
alembic upgrade head
[2019-09-03 21:18:57 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-09-03 21:18:57 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2019-09-03 21:18:57 +0000] [1] [INFO] Using worker: egg:meinheld#gunicorn_worker
[2019-09-03 21:18:57 +0000] [8] [INFO] Booting worker with pid: 8
[2019-09-03 21:18:57 +0000] [8] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/usr/local/lib/python3.6/site-packages/gunicorn/workers/base.py", line 129, in init_process
self.load_wsgi()
File "/usr/local/lib/python3.6/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/local/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
return self.load_wsgiapp()
File "/usr/local/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python3.6/site-packages/gunicorn/util.py", line 350, in import_app
__import__(module)
File "/app/app/main.py", line 4, in <module>
from apscheduler.schedulers.background import BackgroundScheduler
ModuleNotFoundError: No module named 'apscheduler'
[2019-09-03 21:18:57 +0000] [8] [INFO] Worker exiting (pid: 8)
{"loglevel": "info", "workers": 8, "bind": "0.0.0.0:80", "workers_per_core": 2.0, "host": "0.0.0.0", "port": "80"}
[2019-09-03 21:18:57 +0000] [1] [INFO] Shutting down: Master
[2019-09-03 21:18:57 +0000] [1] [INFO] Reason: Worker failed to boot.
{"loglevel": "info", "workers": 8, "bind": "0.0.0.0:80", "workers_per_core": 2.0, "host": "0.0.0.0", "port": "80"}
I even tried renaming "flask_app" to "app" and dropping the -e, but I get the "No module named 'apscheduler'" error each time.
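Since the traceback is the same under every tag, the repeated ModuleNotFoundError suggests apscheduler was never installed into the image rather than a module-path problem. A minimal Dockerfile sketch (the file locations are assumptions about this particular project):

```
FROM tiangolo/meinheld-gunicorn-flask:python3.6

# Install third-party dependencies (apscheduler among them) at build time
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

COPY ./flask_app /app/flask_app
```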
I have a process in my Flask app that refreshes data in a DB every hour. It works fine in dev, but once I move to prod with gunicorn, on startup the jobs get added multiple times... Did some research and came across this article, but I'm not sure how I can pass the --preload flag to this through docker-compose:
https://stackoverflow.com/questions/16053364/make-sure-only-one-worker-launches-the-apscheduler-event-in-a-pyramid-web-app-ru
Python code:

# Set cron job to pull data every hour
with app.app_context():
    try:
        scheduler = BackgroundScheduler()
        scheduler.add_job(func=refresh_database, trigger="cron", hour='*')
        dash_app.server.logger.info('Starting cron jobs')
        scheduler.start()
    except BaseException as e:
        dash_app.server.logger.error('Error starting cron jobs: {}'.format(e))

if __name__ == '__main__':
    dash_app.run_server(host='0.0.0.0', debug=False, port=80, ssl_context=('/keys/cert.crt', '/keys/certkey'))
Docker Compose:
fitly:
  build:
    context: ./dockercontrol-master
    dockerfile: fitly-dockerfile
  container_name: fitly
  restart: always
  depends_on:
    - mariadb
    - letsencrypt
  ports:
    - "8050:80"
  environment:
    - MODULE_NAME=index
    - VARIABLE_NAME=app
    - TZ=America/New_York
    - PUID=1001
    - PGID=100
  volumes:
    - /share/CACHEDEV2_DATA/Container/Fitly/config.ini:/app/config.ini
    - /share/CACHEDEV2_DATA/Container/Fitly/log.log:/app/log.log
    - /share/CACHEDEV2_DATA/Container/LetsEncrypt/keys:/app/keys
Also... if there is a better approach than what I am trying to do above, please share!
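Following the linked Stack Overflow thread, the usual suggestion (an assumption on my part that it carries over to this image) is to pass --preload via Gunicorn's GUNICORN_CMD_ARGS environment variable, so the app, and with it the scheduler, is created once in the master process before workers fork. In docker-compose that would look like:

```yaml
environment:
  - MODULE_NAME=index
  - VARIABLE_NAME=app
  - GUNICORN_CMD_ARGS=--preload
```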
When adding the statsd option in a custom gunicorn config file, it throws the error below:
TypeError: access() missing 3 required positional arguments: 'req', 'environ' and 'request_time'
I have a use-case for using Python 2.7 with this image:
Do you need support for Python 2.7?
Let me know in an issue and I'll add it.
But only after knowing that someone actually needs it.
Thank you and great work!
I was having strange behavior for a long time, spending weeks trying to debug very strange connectivity issues with hanging queries and a particular GraphQL endpoint. Nothing about the issue itself had any explainable cause; finally I tried loading gunicorn manually, and magically all of the issues vanished. I looked at the entrypoint.sh and saw that some of the environment variables that should have been set were all blank: APP_MODULE and GUNICORN_CONF.
The environment from my docker-compose is unchanged; the only thing I've changed is adding in the custom entry point. Is there something obviously missing or incorrect that would explain why the default entrypoint would have problems?
The application is in /app/wsgi.py and the app module name is app, e.g.
from espresso import app
app.settings.wsgi_mode = True
import webapp.web.dispatch_application_route
image: {image_tag}
entrypoint: "gunicorn wsgi:app -c gunicorn_conf.py"
ports:
  - "5000:5000"
volumes:
  - ./env.staging.json:/var/tmp/env.json
  - ./gunicorn_conf.py:/app/gunicorn_conf.py
environment:
  - LOG_LEVEL=debug
  - MODULE_NAME=wsgi
  - FLASK_CONFIG=development
  - FLASK_ENV=development
  - ENV_FILE=/var/tmp/env.json
networks:
  internal-network:
    aliases:
      - "backend"
  outside-world:
Recently, our test environment has been consistently reporting the following error, always on the same endpoint. Even after re-deploying, it still reports this error. Error info:
[error] 11#11: *1 upstream prematurely closed connection while reading response header from upstream, client: 192.168.160.1, server: , request: "GET /index HTTP/1.1", upstream: "http://127.0.0.1:8080/index", host: "127.0.0.1:3000"
So I wrote a test script locally to reproduce the problem, but I don't know why it happens. This is our base image; we manage our processes with supervisord.
FROM tiangolo/meinheld-gunicorn-flask:python3.7
I wrote a Flask endpoint and set the Gunicorn timeout to 10:
import configparser
import datetime
import logging
import time
from decimal import Decimal

from flask.json import JSONEncoder
from flask_api import FlaskAPI
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import Column, Integer

cf = configparser.ConfigParser()
cf.read("./app.ini")

app = FlaskAPI(__name__)


class CustomJSONEncoder(JSONEncoder):
    def default(self, obj):
        try:
            if isinstance(obj, Decimal):
                return float(obj)
            if isinstance(obj, datetime.datetime):
                return time.mktime(obj.timetuple())
            iterable = iter(obj)
        except TypeError:
            pass
        else:
            return list(iterable)
        return JSONEncoder.default(self, obj)


app.json_encoder = CustomJSONEncoder
app.config["SQLALCHEMY_COMMIT_ON_TEARDOWN"] = True
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
app.config["SQLALCHEMY_DATABASE_URI"] = cf.get("sqlalchemy", "pool")
app.config["SQLALCHEMY_POOL_SIZE"] = 100
app.config["SQLALCHEMY_POOL_RECYCLE"] = 280
app.config["DEFAULT_RENDERERS"] = ["flask_api.renderers.JSONRenderer"]
app.config["TESTING"] = cf.get("env", "is_testing")
app.logger.setLevel(logging.INFO)


@app.teardown_appcontext
def shutdown_session(exception=None):
    db.session.remove()


db = SQLAlchemy(app)


class User(db.Model):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)


@app.route("/index")
def index():
    User.query.all()
    time.sleep(100)
    return {"index": "ok"}
A timeout was reported during the first request, and then an error was reported. This is normal, because the program itself timed out. But why was the connection closed during the second request?
[2020-04-24 03:36:44 +0000] [10] [CRITICAL] WORKER TIMEOUT (pid:16)
[2020-04-24 03:36:44 +0000] [16] [ERROR] Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/app/app/__init__.py", line 61, in index
time.sleep(100)
File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 201, in handle_abort
sys.exit(1)
SystemExit: 1
2020/04/24 03:36:56 [error] 12#12: *3 upstream prematurely closed connection while reading response header from upstream, client: 192.168.176.1, server: , request: "GET /index HTTP/1.1", upstream: "http://127.0.0.1:8080/index", host: "127.0.0.1:3000"
[2020-04-24 03:36:56 +0000] [20] [INFO] Booting worker with pid: 20
What does "upstream prematurely closed connection" mean?
I still have this problem after removing the db.session.remove() call as suggested in the previous issue; if anyone knows why, I hope you can help me.
Hi there, I noticed that I can't run your image on my new MacBook with the M1 chip.
This is probably due to the amd64 vs arm64 architecture. I noticed I was able to use a Python 3.10 image. Maybe it's just a matter of upgrading to Python 3.10? I also saw there's a PR open here: https://github.com/tiangolo/meinheld-gunicorn-docker/pull/52/files
Hi @tiangolo, awesome project here, but I am running into issues running this app with blueprints for routes. Is there anything specific that I need to do to get it to run?
After a Docker update on Ubuntu to version 20.10.22, build 3a2c30b, my worker processes get killed in an endless loop every 30 seconds. This has rendered my project unusable.
I get the following container log section repeating:
[2022-12-17 14:33:48 +0000] [27] [INFO] Booting worker with pid: 27
[2022-12-17 14:33:46 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:20)
[2022-12-17 14:33:46 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:21)
[2022-12-17 14:33:46 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:22)
[2022-12-17 14:33:46 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:23)
[2022-12-17 14:33:47 +0000] [1] [WARNING] Worker with pid 20 was terminated due to signal 9
[2022-12-17 14:33:47 +0000] [1] [WARNING] Worker with pid 23 was terminated due to signal 9
[2022-12-17 14:33:47 +0000] [1] [WARNING] Worker with pid 21 was terminated due to signal 9
[2022-12-17 14:33:47 +0000] [1] [WARNING] Worker with pid 22 was terminated due to signal 9
[2022-12-17 14:33:47 +0000] [24] [INFO] Booting worker with pid: 24
[2022-12-17 14:33:47 +0000] [25] [INFO] Booting worker with pid: 25
[2022-12-17 14:33:48 +0000] [26] [INFO] Booting worker with pid: 26
What I managed to figure out:
Any idea what might be going on?
Is there a debug mode available for this image? I tried setting the environment variable FLASK_DEBUG: 1 in docker-compose.yml, and tried using

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=80)

in my Python file, but nothing seems to work; only restarting the container picks up changes.
Any plans to add python 3.8 soon?
Hello.
I am trying to migrate a web application from using the tiangolo/uwsgi-nginx-flask image to this one, based on Meinheld, Gunicorn and Flask.
My application runs behind a reverse proxy implemented with Traefik, and is mounted on a subpath. For example, on "/demo".
I have a cookiecutter template that easily implements most of this setup, except for the Traefik bits, which I add on deployment using docker-compose and a docker-compose.yml file like this:
version: '3'

services:
  # The reverse proxy service (Træfik)
  reverse-proxy:
    image: traefik  # The official Traefik docker image
    command: --api --docker  # Enables the web UI and tells Træfik to listen to docker
    restart: always
    ports:
      - "80:80"      # The HTTP port
      - "8080:8080"  # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # So that Traefik can listen to the Docker events
  myapp:
    build:
      context: ./myapp/
      dockerfile: Dockerfile.nginx-uwsgi  # just change to Dockerfile.meinheld-gunicorn to use this image as base
    labels:
      - "traefik.frontend.rule=PathPrefix:/demo"
      - "traefik.frontend.priority=1"
    depends_on:
      - reverse-proxy
When using nginx and uwsgi, I need to add a couple of lines to the uwsgi.ini config file, following hints from the Flask uwsgi docs:
[uwsgi]
; Uncomment following lines if not mounting the application on root "/" (see http://flask.pocoo.org/docs/1.0/deploying/uwsgi/)
manage-script-name <--- ADDED FOR MOUNTING ON SUBPATH /demo
mount = /demo=myapp:app <--- ADDED FOR MOUNTING ON SUBPATH /demo
module = myapp
callable = app
uid = uwsgi
gid = uwsgi
How can I achieve the same thing when using this Meinheld + Gunicorn image?
The Gunicorn docs briefly mention that SCRIPT_NAME can be set in the environment for Gunicorn to pick up. The meinheld-gunicorn-flask image does not provide SCRIPT_NAME as an environment variable to set (I tried it anyway), so the next thing I tried was adding this setting in the gunicorn_conf.py file. Following this issue comment for the format, I added the following to gunicorn_conf.py (tried with and without the slash):

# Environment variables
raw_env = [
    "SCRIPT_NAME=/demo"
]
Not working.
Can you help me with this one? Is this a problem with the image or am I just trying to implement this subpath feature the wrong way here?
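For comparison, what uwsgi's manage-script-name + mount pair does can be approximated in plain WSGI with a small middleware that moves the prefix from PATH_INFO into SCRIPT_NAME before Flask sees the request. This is my own sketch (the class name is made up), not something the image provides:

```python
class PrefixMiddleware:
    """Mount a WSGI app under a URL prefix by adjusting SCRIPT_NAME/PATH_INFO."""

    def __init__(self, wsgi_app, prefix):
        self.wsgi_app = wsgi_app
        self.prefix = prefix.rstrip("/")

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "")
        if path.startswith(self.prefix):
            # Shift the prefix out of PATH_INFO so url_for() builds /demo/... links
            environ["SCRIPT_NAME"] = environ.get("SCRIPT_NAME", "") + self.prefix
            environ["PATH_INFO"] = path[len(self.prefix):] or "/"
        return self.wsgi_app(environ, start_response)


# Usage in the Flask app module:
#   app.wsgi_app = PrefixMiddleware(app.wsgi_app, "/demo")
```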
Hey,
To be honest I have no idea if the issue below is related to the docker image at all.
In AWS ECR I get the following critical vulnerabilities when pushing my Docker image to my ECR repo.
In the Linux kernel 5.0.21, mounting a crafted btrfs filesystem image and performing some operations can cause slab-out-of-bounds write access in __btrfs_map_block in fs/btrfs/volumes.c, because a value of 1 for the number of data stripes is mishandled.
In the Linux kernel 5.0.21, mounting a crafted f2fs filesystem image can cause __remove_dirty_segment slab-out-of-bounds write access because an array is bounded by the number of dirty types (8) but the array index can exceed this.
My laptop uses btrfs as its filesystem, but I never would have thought that it would have an impact on my Docker containers (I thought the whole point of containers was to be more or less host-agnostic).
Has anyone ever experienced this? Will I have to use another machine, then?
When using the Flask request object (which comes from the werkzeug library), I find that with the tiangolo/meinheld-gunicorn-flask:python3.8 Docker image the request.query_string attribute is of type str, while the werkzeug documentation clearly states that it should be of type bytes:

query_string
The URL parameters as raw bytestring.

This resulted in issues migrating an existing application to the tiangolo/meinheld-gunicorn-flask:python3.8 Docker image, since it tried to decode request.query_string (str objects have no .decode method).
I suspect this somehow comes from the combination with meinheld, but I open the issue here since this makes it easy to reproduce:
Dockerfile
FROM tiangolo/meinheld-gunicorn-flask:python3.8
COPY main.py /app/main.py
main.py

from flask import Flask, request

app = Flask(__name__)


@app.route("/")
def hello():
    msg = " request.query_string is of type {}\n".format(type(request.query_string))
    return msg
docker build -t query-string .
docker run -d -p 4321:80 query-string
curl localhost:4321
# returns: request.query_string is of type <class 'str'>
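As a migration stopgap (my own sketch, not a fix for the underlying meinheld behavior), the attribute can be normalized before decoding, so application code works whichever type the server hands Werkzeug:

```python
def query_string_bytes(qs):
    """Return the query string as bytes, whether the WSGI layer
    supplied str (as observed under meinheld) or bytes."""
    if isinstance(qs, str):
        # WSGI environ strings are latin-1 decodable by spec
        return qs.encode("latin-1")
    return qs
```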
It seems that the default is set to 30 seconds, and whenever any request runs over this threshold the worker gets killed and this error is thrown:
[2019-09-22 09:26:01 -0400] [1] [CRITICAL] WORKER TIMEOUT (pid:16)
Per Gunicorn's documentation, the --timeout INT argument can be passed; however, after updating my docker-compose to the following, I am still receiving the above error message whenever anything runs for longer than 30 seconds.
fitly:
  build:
    context: ./dockercontrol-master
    dockerfile: fitly-dockerfile
  container_name: fitly
  restart: always
  depends_on:
    - mariadb
    - letsencrypt
  ports:
    - "8050:80"
  environment:
    - MODULE_NAME=index
    - VARIABLE_NAME=app
    - TZ=America/New_York
    - GUNICORN_CMD_ARGS='--timeout 60'
    - PUID=1001
    - PGID=100
  volumes:
    - /share/CACHEDEV2_DATA/Container/Fitly/config.ini:/app/config.ini
    - /share/CACHEDEV2_DATA/Container/Fitly/log.log:/app/log.log
    - /share/CACHEDEV2_DATA/Container/LetsEncrypt/keys:/app/keys
Is there another way to update the timeout?
I also saw some posts about trying to use gevent, but got the same results, although I'm not really sure this argument is truly being passed into the Docker container either:
- GUNICORN_CMD_ARGS='--worker-class gevent --timeout 60'
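One thing worth checking (an assumption based on the Compose file above, not a confirmed diagnosis): in list-form `environment` entries there is no shell to strip quotes, so the single quotes become part of the variable's value and gunicorn receives the whole quoted string as one token rather than a flag plus a value. Dropping the quotes sidesteps this:

```yaml
environment:
  - MODULE_NAME=index
  - VARIABLE_NAME=app
  - TZ=America/New_York
  # No quotes: everything after the first '=' is the literal value.
  - GUNICORN_CMD_ARGS=--timeout 60
```

Running `docker exec fitly env | grep GUNICORN` inside the container shows exactly the value gunicorn will see.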
I'm also running this behind NGINX, and have tried putting the following in the conf:
location / {
include /config/nginx/proxy.conf;
resolver 127.0.0.11 valid=30s;
set $upstream_fitly fitly;
proxy_pass http://$upstream_fitly:80;
}
Where in the proxy.conf I have the following lines:
send_timeout 5m;
proxy_read_timeout 240;
proxy_send_timeout 240;
proxy_connect_timeout 240;
...Still no dice
I'm trying to dockerise my react-flask application and for some reason the flask container is not accepting requests.
Dockerfile:
FROM tiangolo/meinheld-gunicorn-flask:python3.7
# RUN mkdir -p /app
COPY ./server /app
RUN pip install -r /app/requirements.txt
RUN pip install greenlet==0.4.17
Docker Compose:
backend:
restart: always
container_name: backend
build:
context: ./backend
dockerfile: Dockerfile.deploy
environment:
PORT: "5000"
LOG_LEVEL: "debug"
expose:
- 5000
Request Error:
[HPM] Error occurred while proxying request localhost:3000/authenticate to https://0.0.0.0:5000/ [ECONNREFUSED] (https://nodejs.org/api/errors.html#errors_common_system_errors)
Container log dump:
[2021-05-31 09:51:09 +0000] [1] [DEBUG] Current configuration:
config: /app/gunicorn_conf.py
bind: ['0.0.0.0:5000']
backlog: 2048
workers: 12
worker_class: egg:meinheld#gunicorn_worker
threads: 1
worker_connections: 1000
max_requests: 0
max_requests_jitter: 0
timeout: 30
graceful_timeout: 30
keepalive: 120
limit_request_line: 4094
limit_request_fields: 100
limit_request_field_size: 8190
reload: False
reload_engine: auto
reload_extra_files: []
spew: False
check_config: False
preload_app: False
sendfile: None
reuse_port: False
chdir: /app
daemon: False
raw_env: []
pidfile: None
worker_tmp_dir: None
user: 0
group: 0
umask: 0
initgroups: False
tmp_upload_dir: None
secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
forwarded_allow_ips: ['127.0.0.1']
accesslog: None
disable_redirect_access_to_syslog: False
access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
errorlog: -
loglevel: debug
capture_output: False
logger_class: gunicorn.glogging.Logger
logconfig: None
logconfig_dict: {}
syslog_addr: udp://localhost:514
syslog: False
syslog_prefix: None
syslog_facility: user
enable_stdio_inheritance: False
statsd_host: None
dogstatsd_tags:
statsd_prefix:
proc_name: None
default_proc_name: main:app
pythonpath: None
paste: None
on_starting: <function OnStarting.on_starting at 0x7febca2a87a0>
on_reload: <function OnReload.on_reload at 0x7febca2a88c0>
when_ready: <function WhenReady.when_ready at 0x7febca2a89e0>
pre_fork: <function Prefork.pre_fork at 0x7febca2a8b00>
post_fork: <function Postfork.post_fork at 0x7febca2a8c20>
post_worker_init: <function PostWorkerInit.post_worker_init at 0x7febca2a8d40>
worker_int: <function WorkerInt.worker_int at 0x7febca2a8e60>
worker_abort: <function WorkerAbort.worker_abort at 0x7febca2a8f80>
pre_exec: <function PreExec.pre_exec at 0x7febca2c50e0>
pre_request: <function PreRequest.pre_request at 0x7febca2c5200>
post_request: <function PostRequest.post_request at 0x7febca2c5290>
child_exit: <function ChildExit.child_exit at 0x7febca2c53b0>
worker_exit: <function WorkerExit.worker_exit at 0x7febca2c54d0>
nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7febca2c55f0>
on_exit: <function OnExit.on_exit at 0x7febca2c5710>
proxy_protocol: False
proxy_allow_ips: ['127.0.0.1']
keyfile: None
certfile: None
ssl_version: 2
cert_reqs: 0
ca_certs: None
suppress_ragged_eofs: True
do_handshake_on_connect: False
ciphers: None
raw_paste_global_conf: []
strip_header_spaces: False
[2021-05-31 09:51:09 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2021-05-31 09:51:09 +0000] [1] [DEBUG] Arbiter booted
[2021-05-31 09:51:09 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
[2021-05-31 09:51:09 +0000] [1] [INFO] Using worker: egg:meinheld#gunicorn_worker
[2021-05-31 09:51:09 +0000] [10] [INFO] Booting worker with pid: 10
[2021-05-31 09:51:09 +0000] [11] [INFO] Booting worker with pid: 11
[2021-05-31 09:51:09 +0000] [12] [INFO] Booting worker with pid: 12
[2021-05-31 09:51:09 +0000] [13] [INFO] Booting worker with pid: 13
[2021-05-31 09:51:09 +0000] [14] [INFO] Booting worker with pid: 14
[2021-05-31 09:51:09 +0000] [15] [INFO] Booting worker with pid: 15
[2021-05-31 09:51:09 +0000] [16] [INFO] Booting worker with pid: 16
[2021-05-31 09:51:10 +0000] [17] [INFO] Booting worker with pid: 17
[2021-05-31 09:51:10 +0000] [18] [INFO] Booting worker with pid: 18
[2021-05-31 09:51:10 +0000] [19] [INFO] Booting worker with pid: 19
[2021-05-31 09:51:10 +0000] [20] [INFO] Booting worker with pid: 20
[2021-05-31 09:51:10 +0000] [21] [INFO] Booting worker with pid: 21
[2021-05-31 09:51:10 +0000] [1] [DEBUG] 12 workers
I'm not sure if this is a bug or something I'm doing wrong; please help.
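Two details in the error stand out: the proxy targets `https://0.0.0.0:5000`, but `0.0.0.0` is a bind address, not a routable destination, and the container log shows gunicorn serving plain HTTP. Assuming the frontend proxies through http-proxy-middleware (the `[HPM]` prefix suggests so; the file name and route below are illustrative), pointing it at the Compose service name over plain HTTP would be a sketch of the fix:

```javascript
// setupProxy.js (hypothetical frontend-side config). Inside the Compose
// network the backend is reachable by its service/container name, and it
// speaks HTTP, not HTTPS.
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(
    '/authenticate',
    createProxyMiddleware({
      target: 'http://backend:5000', // container_name from the Compose file
      changeOrigin: true,
    })
  );
};
```

Note that `expose` only publishes the port to other containers on the network; if the React dev server runs on the host rather than in a container, `ports` would be needed instead.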
Hi, thanks for sharing this!
Just researching whether we could use it, and - please forgive my ignorance - I can't work out where in the stack I'd do compression. When there's nginx it seems like that's the obvious place, but here I haven't yet got my head around how responsibilities are divided between meinheld and gunicorn.
Any pointers in the right direction would be much appreciated. Thanks!
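As far as I can tell, neither gunicorn nor meinheld compresses responses, so without nginx the usual place is the application layer: either an extension such as Flask-Compress, or a thin WSGI wrapper around the app. A stdlib-only sketch of the latter (simplified: real middleware also handles streaming responses, content types, and the Vary header):

```python
import gzip


class GzipMiddleware:
    """Minimal WSGI wrapper that gzips whole responses when the client
    advertises gzip support. Illustrative only."""

    def __init__(self, app, min_size=500):
        self.app = app
        self.min_size = min_size  # don't bother compressing tiny bodies

    def __call__(self, environ, start_response):
        if "gzip" not in environ.get("HTTP_ACCEPT_ENCODING", ""):
            return self.app(environ, start_response)

        captured = {}

        def capture(status, headers, exc_info=None):
            captured["status"] = status
            captured["headers"] = headers

        body = b"".join(self.app(environ, capture))
        if len(body) < self.min_size:
            start_response(captured["status"], captured["headers"])
            return [body]

        compressed = gzip.compress(body)
        # Replace length/encoding headers to match the compressed body.
        headers = [(k, v) for k, v in captured["headers"]
                   if k.lower() not in ("content-length", "content-encoding")]
        headers += [("Content-Encoding", "gzip"),
                    ("Content-Length", str(len(compressed)))]
        start_response(captured["status"], headers)
        return [compressed]
```

With Flask this would be wired up as `app.wsgi_app = GzipMiddleware(app.wsgi_app)` in the module gunicorn imports.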
I've been struggling with this: is there a good way to deploy an app using a conda environment into this container?
My current Dockerfile:
ADD ./install/environment.yml /tmp/environment.yml
RUN conda env create -f /tmp/environment.yml
RUN echo "source activate $(head -1 /tmp/environment.yml | cut -d' ' -f2)" > ~/.bashrc
ENV PATH /opt/conda/envs/$(head -1 /tmp/environment.yml | cut -d' ' -f2)/bin:$PATH
WORKDIR /
COPY ./app /app
and the container cannot find dash when it tries to start the app:
File "/app/main.py", line 2, in <module>
import dash
ModuleNotFoundError: No module named 'dash'
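One likely culprit (an assumption from the Dockerfile above): `ENV` does not perform `$( ... )` command substitution, so the `PATH` line never resolves to the environment's bin directory and the base interpreter, which has no dash installed, starts the app. Spelling the path out literally avoids this; here `myenv` is a hypothetical name standing in for whatever `environment.yml` declares:

```dockerfile
# Sketch: assumes conda is installed at /opt/conda in the base image and
# the environment in environment.yml is named "myenv" (substitute yours).
ADD ./install/environment.yml /tmp/environment.yml
RUN conda env create -f /tmp/environment.yml
# ENV values are literal; $( ... ) is NOT executed, so hard-code the path.
ENV PATH /opt/conda/envs/myenv/bin:$PATH
WORKDIR /
COPY ./app /app
```

The `.bashrc` line in the original has no effect either, since gunicorn is not started from an interactive bash shell.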
I am using the meinheld-gunicorn-flask base image and getting errors while using gunicorn async workers ("-k eventlet"). If I remove "-k eventlet" (i.e. sync workers), everything works fine.
Please suggest why I am having a tough time with async workers. I have also attached requirements.txt for your reference.
Would really appreciate your help.
FROM tiangolo/meinheld-gunicorn-flask:python3.7
CMD ["--timeout=325", "--workers=5", "-k", "eventlet", "--bind", "0.0.0.0:5000", "wsgi:app"]
bash-4.2$ kubectl logs c3-dev1-semanticsearch-77fb799b6f-mmc5h -n pedatabase
[2019-12-11 00:27:31 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2019-12-11 00:27:31 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
[2019-12-11 00:27:31 +0000] [1] [INFO] Using worker: eventlet
[2019-12-11 00:27:31 +0000] [10] [INFO] Booting worker with pid: 10
[2019-12-11 00:27:32 +0000] [11] [INFO] Booting worker with pid: 11
[2019-12-11 00:27:32 +0000] [12] [INFO] Booting worker with pid: 12
[2019-12-11 00:27:32 +0000] [13] [INFO] Booting worker with pid: 13
[2019-12-11 00:27:32 +0000] [14] [INFO] Booting worker with pid: 14
2019-12-11 00:27:33,464 - config_helper - MainThread - WARNING - [config_helper.py:38] redshift connection string not found in secret manager
[2019-12-11 00:27:33 +0000] [10] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/geventlet.py", line 99, in init_process
super().init_process()
File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 119, in init_process
self.load_wsgi()
File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load
return self.load_wsgiapp()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python3.7/site-packages/gunicorn/util.py", line 358, in import_app
mod = importlib.import_module(module)
File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/semanticsearch/app/wsgi.py", line 1, in <module>
    from runapp import app
File "/semanticsearch/app/runapp.py", line 29, in <module>
    from app import config
File "../app/config.py", line 25, in <module>
    SQL_ALCHEMY_ENGINE = create_engine(config_helper.getRedshiftConnectionString())
File "../util/config_helper.py", line 43, in getRedshiftConnectionString
    'user': os.environ['REDSHIFT_USER'],
File "/usr/local/lib/python3.7/os.py", line 678, in __getitem__
    raise KeyError(key) from None
KeyError: 'REDSHIFT_USER'
[2019-12-11 00:27:33 +0000] [10] [INFO] Worker exiting (pid: 10)
2019-12-11 00:27:33,541 - config_helper - MainThread - WARNING - [config_helper.py:38] redshift connection string not found in secret manager
[2019-12-11 00:27:33 +0000] [11] [ERROR] Exception in worker process
[... identical KeyError: 'REDSHIFT_USER' traceback repeated for workers 11-14, trimmed ...]
[2019-12-11 00:27:33 +0000] [11] [INFO] Worker exiting (pid: 11)
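Reading the traceback, the failure is not eventlet itself: each worker dies importing the app because `os.environ['REDSHIFT_USER']` raises KeyError, so the variable is not reaching the pod. A hedged sketch of failing fast with a clearer message (helper and variable names taken from the traceback, the wrapper itself is hypothetical):

```python
import os


def require_env(name):
    """Return an environment variable's value or raise an actionable
    startup error instead of a bare KeyError deep inside an import."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(
            f"Required environment variable {name!r} is not set; "
            "pass it to the container, e.g. via the Deployment's env section."
        )
    return value
```

With this, `config_helper.getRedshiftConnectionString` would call `require_env('REDSHIFT_USER')` and the kubectl logs would name the missing variable directly.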
I'm currently using this image to run a Flask-based application that processes some large JSON files, but it gives me an HTTP 413 error. What would be the best way to increase the request entity size limit?
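If the 413 is coming from Flask itself, the usual knob is MAX_CONTENT_LENGTH (a real Flask config key; the 64 MiB figure below is just an example). If a reverse proxy sits in front, its own body-size limit (e.g. nginx's client_max_body_size) has to be raised as well:

```python
from flask import Flask

app = Flask(__name__)
# Flask rejects bodies larger than MAX_CONTENT_LENGTH with HTTP 413.
# Raise (or remove) the limit; here, allow up to 64 MiB.
app.config["MAX_CONTENT_LENGTH"] = 64 * 1024 * 1024
```

Checking which layer emits the 413 (Flask's error page vs. the proxy's) tells you which limit to change first.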
Hello,
how would one go about mounting a volume with this image?
In the nginx example it would be necessary to use a uwsgi file. What about here?
Python 2.7
Do you need support for Python 2.7?
Let me know in an issue and I'll add it.
But only after knowing that someone actually needs it.
I found that Python 2.7 support was added in this commit, but this chapter is still there.