ryandotsmith / nginx-buildpack
This project forked from heroku/heroku-buildpack-nginx
Run NGINX in front of your app server on Heroku
It would be more consistent for me to use the same config on my local machine and on Heroku, but the README says I should use .erb for Heroku. Why?
I'm having trouble getting this buildpack to work in combination with the pgbouncer buildpack.
Can you add a section to the README demonstrating how to make that combination work?
I am trying to host TensorBoard on a Heroku instance, and to secure it I have added nginx in front of it using the nginx-buildpack.
The idea is that TensorBoard will serve the app on port 6006, and nginx will map this port to the external port provided by Heroku ($PORT).
When I start the app, I get the following error:
TensorBoard attempted to bind to port 6006, but it was already in use
My config files are as follows:
Procfile
web: bin/start-nginx tensorboard --logdir="/app/" --host=http://127.0.0.1 --port=6006
config/nginx.conf.erb
daemon off;
# Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;

events {
    use epoll;
    accept_mutex on;
    worker_connections 1024;
}

http {
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 512;
    server_tokens off;

    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log logs/nginx/access.log l2met;
    error_log logs/nginx/error.log;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    # Must read the body in 5 seconds.
    client_body_timeout 5;

    #upstream app_server {
    #    server unix:/tmp/nginx.socket fail_timeout=0;
    #}

    server {
        listen <%= ENV["PORT"] %>;
        server_name http://127.0.0.1;
        keepalive_timeout 5;
        root /app;
        port_in_redirect off;
        #index index.html index.htm;

        location = / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://127.0.0.1:6006;
        }
    }
}
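One detail worth checking in the files above: tensorboard's --host flag and nginx's server_name directive both expect a bare hostname, not a URL. A corrected sketch (same port assumptions as above; this is a guess at the fix, not a confirmed one):

```
Procfile:
web: bin/start-nginx tensorboard --logdir="/app/" --host=127.0.0.1 --port=6006

nginx.conf.erb, inside the server block:
server_name _;    # catch-all, as in the other configs in this thread
```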
Hi,
I can't make it work. It seems my Node.js app (port 3000) is reached directly instead of going through nginx (port 3001).
1. $ heroku config:set BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
   $ echo 'https://github.com/heroku/heroku-buildpack-nodejs.git' >> .buildpacks
   $ echo 'https://github.com/Gionni/nginx-buildpack.git' >> .buildpacks  [minimal fork of ryandotsmith]
2. Procfile: web: bin/start-nginx ./node_modules/.bin/forever app.js
3. $ git add... / commit... / push heroku...
Everything goes OK, but nginx.conf, even though it is loaded, seems to have no effect. My app responds directly anyway.
http {
    server {
        listen <%= ENV["NGINX_PORT"] %>;
        location / {
            proxy_pass http://www.nytimes.com/;
        }
    }
}
Any suggestion for making nginx the entry point rather than exposing what should be hidden?
Thanks, Johnny
But specifying https://github.com/ryandotsmith/nginx-buildpack.git in .buildpacks works.
Is the tarball stale?
upgrade please
I just have some static HTML files that I want to deploy with nginx. I don't understand how to start up nginx for this.
I seriously hesitate to post this here, but figure others may have the same question, and it is so specific to this particular implementation on Heroku... Please just let me know if there's a better place to have this answered.
I agree with others, most will be using this with Unicorn. Those experienced with configuring that server have probably discovered and tuned the backlog parameter.
Example:
pipe = if (ENV['RACK_ENV'] == 'production' || ENV['RAILS_ENV'] == 'production')
         puts '=> Unicorn listening to NGINX socket for requests'
         '/tmp/nginx.socket'
       else
         puts '=> Unicorn listening to TCP port for requests'
         ENV['PORT'] || 5000
       end

# If the router has more than N=backlog requests for our
# workers, we want the queueing to show up at the router and have the
# chance to go to another dyno. In case our workers are dead or slow
# we don't want requests sitting in the unicorn backlog timing out.
# Also, on restart, we don't want more requests than the dyno can
# clear before exit timeout.
listen pipe, :backlog => 36
The advantages are described in the comment, but in short, this most importantly allows the server/dyno to feed back to the routing layer that it can't handle more.
Short version of question: What is the best way to maintain these benefits with NGINX in the stack?
Longer version:
With NGINX in as a reverse proxy, we lose much of this benefit from tuning backlog. I assume events.worker_connections is the key config param here. From the NGINX docs:
The worker_connections and worker_processes from the main section allows you to calculate max clients you can handle:
max clients = worker_processes * worker_connections
In a reverse proxy situation, max clients becomes:
max clients = worker_processes * worker_connections / 4
since a browser opens 2 connections by default to a server and nginx uses fds (file descriptors) from the same pool to connect to the upstream backend.
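A quick sanity check of that formula, using the values that appear in the configs in this thread (4 worker processes, 1024 worker_connections; both are the buildpack defaults, not measurements):

```ruby
# Rough capacity estimate from the nginx formula quoted above.
worker_processes   = 4    # buildpack default via NGINX_WORKERS
worker_connections = 1024 # from the events block

# Plain server: one fd per client connection.
max_clients_plain = worker_processes * worker_connections

# Reverse proxy: ~2 browser connections per client, each needing a
# matching upstream connection from the same fd pool, hence the /4.
max_clients_proxy = worker_processes * worker_connections / 4

puts max_clients_plain # 4096
puts max_clients_proxy # 1024
```

So with the defaults, a single dyno's nginx tops out around 1024 concurrent clients in proxy mode, which is far above Unicorn's backlog of 36 -- the queueing moves from the router into nginx.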
I think this is a case of maintaining dev/prod parity. In production, the app server will be receiving requests from NGINX over a UNIX socket; it would be nice to have this work locally as well.
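For what it's worth, a minimal local nginx fragment that mirrors the production setup might look like the following (assuming the app server is also bound to /tmp/nginx.socket locally; port 8080 is an arbitrary choice for illustration):

```nginx
upstream app_server {
    server unix:/tmp/nginx.socket fail_timeout=0;
}

server {
    listen 8080;
    location / {
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}
```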
Recent documentation suggests that HTTP_X_REQUEST_ID is now included in headers, not HTTP_HEROKU_REQUEST_ID. The default config.erb should probably be updated.
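Concretely, the change amounts to one variable in the l2met log_format, swapping $http_heroku_request_id for $http_x_request_id, as the newer configs in this thread already do:

```nginx
log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
```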
Hi, I am trying to use this buildpack for my Rails project.
I've followed the instructions, but after I deploy I keep getting an R10 error on Heroku.
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch.
I've tried copying the nginx.conf.erb file into my config directory, but it didn't work.
Any idea as to why this is happening?
Thanks!
I really like this buildpack, especially since it plays very nicely with multi, and the fact that I can provide my own nginx.conf.erb.
However, if I am using nginx to proxy to another server, it's almost impossible to touch /tmp/app-initialized.
Is it possible to add a command-line option to start-nginx that would skip that particular check?
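In the meantime, one workaround is a small script that touches /tmp/app-initialized once the proxied server starts accepting connections. This is a sketch, not a documented feature of the buildpack; the helper name, port, and timeout are all assumptions:

```ruby
require 'socket'
require 'fileutils'

# Hypothetical helper: poll the upstream server and create the file
# that start-nginx waits for, once a TCP connect succeeds.
def touch_when_ready(host: '127.0.0.1', port: 3000, budget: 10.0)
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + budget
  while Process.clock_gettime(Process::CLOCK_MONOTONIC) < deadline
    begin
      Socket.tcp(host, port, connect_timeout: 1) { } # just probe, then close
      FileUtils.touch '/tmp/app-initialized'
      return true
    rescue SystemCallError
      sleep 0.2 # upstream not up yet; retry
    end
  end
  false # gave up; nginx will keep waiting
end
```

Run alongside the proxied process from the Procfile, this approximates what a Unicorn before_fork hook does for Ruby apps.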
thanks.
Or is it Heroku's router ignoring the socket close?
TL;DR - Does nginx close the socket after client_body_timeout? Or does Heroku's router ignore the connection close from the dyno side? And if so, how do I overcome it?
If the connection is idle for more than 5 seconds I get the expected output:
measure#nginx.service=5.006 request_id=xxxxxxx
But the router does not seem to consider this request as served. I have no indication that the socket actually closes so that the router would report an error to the user; the router still waits for the request to complete after these 5 seconds.
If any byte is transmitted after that, the router does pick up the connection closed by the dyno, but that is not my scenario; I'm assuming that at some point the client stops sending the body altogether (say, a mobile client).
If the socket does close, how come the router ignores it, at least until it gets an additional data byte from the user? I have to wait for the standard 55 seconds before the router cuts off the idle connection, missing the whole point of my need for nginx.
My guess is that most of the time people will be using this buildpack with RoR apps running Unicorn. So we should optimize the setup process for that case.
One large item we can remove from the setup process is the multi-buildpack step. It would be ideal if by default, this buildpack included Heroku's Ruby buildpack. Then, the .buildpacks file would be optional.
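For context, the multi-buildpack setup being discussed requires a .buildpacks file along these lines (the exact URLs and pinned tags may vary; these match the ones quoted elsewhere in this thread):

```
https://github.com/ryandotsmith/nginx-buildpack.git
https://github.com/heroku/heroku-buildpack-ruby.git
```

Bundling the Ruby buildpack would make this file unnecessary for the common case.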
cc: @mattsoldo
I'm trying to tweak this buildpack so that it uses the nginx SSL module.
I've added the right line to the build_nginx.sh file, and I'm pointing my app at a fork of your buildpack with only that line changed.
However, I'm not seeing any ./configure changes reflected in my app. Running heroku run nginx -V shows that the exact same modules are present, even if I add or delete lines in build_nginx.sh.
Is there an extra step to changing configure options beyond just changing this file?
Push rejected, no Cedar-supported app detected
I followed the instructions given in the Readme.
I am using a Python / Gunicorn setup.
Is my Procfile correct?
web: bin/start-nginx newrelic-admin run-program gunicorn someapp:app
While invoking git push heroku master, I run into a problem with my configuration of nginx-buildpack.
remote: -----> nginx-buildpack app detected
remote: cp: cannot stat 'bin/nginx-heroku-16': No such file or directory
remote: ! Push rejected, failed to compile nginx-buildpack app.
I tried creating the bin/nginx-heroku-16 directory manually, then pushing changes to my repository, but the problem was not fixed. Here's a look at my root directory.
Here's a look at my app configuration on heroku:
I have no idea what is causing this. Please assist :)
I'm running this (awesome) buildpack with Puma, and Heroku deployments seem to work perfectly with nginx waiting for puma to start before accepting requests. However I notice that when I do a "heroku restart", I end up getting Bad Gateway 502 errors for the time between nginx re-starting and puma re-starting (at least that's what I think is happening).
Could this be because the /tmp/app-initialized file has already been created from the first deploy, and is not deleted? Should I be deleting the file on restart to prevent this? Sorry if this is an obvious issue - I don't fully understand how heroku's "ephemeral file system" works.
I've had to stop using nginx because I was getting a significant number of H13 errors after exactly 5 seconds of service on a request (according to the nginx log). Occasionally I would see an H13 after 10 or 15 seconds, but mostly it was after exactly 5. Cutting out nginx fixed the problem. I would really welcome any thoughts about why this might be happening. The only 5-second timeout I know about is the database pool connection timeout, but I don't see why adding nginx would affect that (and it is supposed to log a message in my app, although maybe it's just failing silently).
I have a config which works on my local machine with a reverse proxy cache. It avoids hitting the box on every request.
I used the nginx buildpack with the proxy cache, but on Heroku, every time a request hits the box I can observe heroku[router] in the logs.
The config is provided for reference below.
Sample config
daemon off;
# Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;

events {
    use epoll;
    accept_mutex on;
    worker_connections 1024;
}

http {
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 512;
    server_tokens off;

    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log logs/nginx/access.log l2met;
    error_log logs/nginx/error.log;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    # Must read the body in 5 seconds.
    client_body_timeout 5;

    upstream app_server {
        server unix:/tmp/nginx.socket fail_timeout=0;
    }

    proxy_cache_path /tmp/nginx levels=1:2 keys_zone=STATIC:10m
                     inactive=24h max_size=1g;

    server {
        listen <%= ENV["PORT"] %>;
        server_name _;
        keepalive_timeout 5;

        location / {
            proxy_pass http://app_server;
        }

        location ~* ^/(.*)/(.*)/(.*)\.(jpe?g|png|[tg]iff?|svg) {
            proxy_pass http://app_server;
            add_header Cache-Control public;
            add_header X-Cache-Status $upstream_cache_status;
            proxy_set_header Host $host;
            proxy_cache STATIC;
            proxy_cache_valid 200 1d;
            proxy_cache_use_stale error timeout invalid_header updating
                                  http_500 http_502 http_503 http_504;
        }
    }
}
Hi,
Adding PageSpeed to nginx can make a significant difference to frontend performance. It can be built from source for nginx:
https://developers.google.com/speed/pagespeed/module/build_ngx_pagespeed_from_source
Would you consider this or consider this outside the scope of this project? I have this working for a site of ours, and can send a PR.
Does anyone know how to add modules to the buildpack? I would like to add ngx_brotli and the gzip_static module, but I have no idea how to do it.
I did some searching on Google and didn't find much. Any ideas? Is it possible that the nginx upstream package hasn't been built for heroku-16 yet?
-----> nginx-buildpack app detected
cp: cannot stat 'bin/nginx-heroku-16': No such file or directory
! Push rejected, failed to compile nginx-buildpack app.
! Push failed
I'm getting this error when trying to start up:
-----> Fetching custom git buildpack... done
-----> nginx-buildpack app detected
cp: cannot stat `bin/nginx': No such file or directory
! Push rejected, failed to compile nginx-buildpack app
Any idea what could cause this?
Giving nginx more power in the buildpack reduces the chance that users would have to build their own nginx binary.
However, the application (.NET Core 2.2) can actually be reached through nginx.
Specifying another file in the $HOME directory with the -f parameter didn't give any result.
I've been using this buildpack to proxy to a couple of different Express servers without issue. However, I found that when I try to use a websocket server, it does not work.
I'm using the default config with a few modifications...
daemon off;
# Heroku dynos have 4 cores.
worker_processes 4;

events {
    use epoll;
    accept_mutex on;
    worker_connections 1024;
}

http {
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 512;

    log_format l2met 'measure.nginx.service=$request_time request_id=$http_heroku_request_id';
    access_log logs/nginx/access.log l2met;
    error_log logs/nginx/error.log;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    map $http_upgrade $connection_upgrade {
        default upgrade;
    }

    server {
        listen <%= ENV["PORT"] %>;
        server_name _;
        keepalive_timeout 5;

        location /test {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://127.0.0.1:3000;
        }

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            proxy_buffers 8 32k;
            proxy_buffer_size 64k;
            proxy_pass http://127.0.0.1:3001;
            proxy_redirect off;
            proxy_ignore_client_abort on;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
I changed the part under default upgrade to not close on an empty string, since this was causing issues. However, I still get an error after doing that:
2017/02/20 06: 6 connect() failed (111: Connection refused) while connecting to upstream, client: 10.186.58.177, server: _, request: "GET /?encoding=text HTTP/1.1", upstream: "http://127.0.0.1:3001/?encoding=text", host: "www.mydomain.com"
My websocket server is listening on port 3001 as well, so I'm not sure what the issue is. Does this buildpack do anything different config wise, i.e. is there any reason why websockets shouldn't work over this buildpack?
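For reference, the pattern from the nginx WebSocket proxying documentation uses a map that falls back to close when no Upgrade header is present, and passes the computed value through to the Connection header:

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://127.0.0.1:3001; # port taken from the config above
    }
}
```

The Connection Refused error, though, suggests nothing is listening on 3001 inside the dyno when nginx connects, independent of the websocket headers.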
I started noticing peaks in request queuing times whenever we increased the number of dynos.
I believe it's due to the fact that nginx binds to the port as soon as it's up and so Heroku thinks the dyno is ready, when in fact unicorn (or whatever is behind nginx) is still booting.
Is there any fear of the nginx access.log and error.log files filling up the dyno's disk?
I came across this buildpack mostly due to the fact that I wanted to gzip my assets. I figured nginx would be good for this (as well as for the reasons listed in your README).
Anyways, I added the default nginx.conf.erb to my config folder. Then I attempted adding the following from the Asset Pipeline Guide to the server section, like this:
server {
    ...
    location ~ ^/(assets)/ {
        root /path/to/public;
        gzip_static on; # to serve pre-gzipped version
        expires max;
        add_header Cache-Control public;
    }
}
I get an application error. Here's the most important part from the logs:
nginx: [emerg] unknown directive "gzip_static" in ./config/nginx.conf
Of course, /path/to/public is just a placeholder, so, as a side note, I'm wondering if this will work:
root <%= Rails.root.join('public').to_s %>
Moving forward, it seems that nginx needs to be configured with http_gzip_static_module. I'm not exactly sure how to do that, or whether it fits in the scope of this repo.
Does anyone have any experience with trying to gzip precompiled assets using nginx? Is there a better way? Am I on to something we could merge into this repo? This module seems to be configurable in other, less popular buildpacks I've seen around here on GitHub.
Also, this method is documented here using Passenger: http://dennisreimann.de/blog/configuring-nginx-for-the-asset-pipeline/
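For the record, gzip_static is compiled in via a ./configure flag when the nginx binary is built, so enabling it means rebuilding the buildpack's binary with that flag added. A sketch (the script name and its other flags are assumptions that vary by buildpack version):

```shell
# inside the buildpack's build_nginx.sh, added to the existing ./configure flags:
./configure --with-http_gzip_static_module
```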
Heroku newbie here, trying to deploy NGINX to work with my Ruby/Unicorn application. I've carefully followed the setup steps for an existing app, yet I receive the following error in response to git push heroku master:
Fetching repository, done.
Counting objects: 13, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (11/11), done.
Writing objects: 100% (11/11), 1.29 KiB, done.
Total 11 (delta 4), reused 0 (delta 0)
-----> Deleting 116 files matching .slugignore patterns.
-----> Fetching custom git buildpack... done
-----> Multipack app detected
=====> Downloading Buildpack: https://codon-buildpacks.s3.amazonaws.com/buildpacks/heroku/ruby.tgz
! Push rejected, failed to compile Multipack app
To [email protected]:my-repo.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to '[email protected]:my-repo.git'
Any thoughts as to what is going wrong?
I found this:
"The Vulcan build service is no longer maintained or supported, and itβs no longer recommended for building binaries. Use heroku run instead."
here:
https://devcenter.heroku.com/articles/buildpack-binaries
What's your take?
P.S. Ben and Eric at Remind101 say hello.
I am deploying a Node.js web server using this buildpack on Heroku,
and I have also configured our custom nginx.conf.erb file in our config directory.
But I found that when the nginx-buildpack runs, it just wipes out our custom
config file and writes its default config in its place. It looks weird.
Am I missing something?
Hi, I'm wondering if it's possible to use this setup to proxy certain paths to another Heroku app. For example, the following nginx.conf works locally, but fails when deployed to Heroku.
daemon off;
# Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;

events {
    use epoll;
    accept_mutex on;
    worker_connections 1024;
}

http {
    gzip on;
    gzip_comp_level 3;
    gzip_min_length 150;
    gzip_proxied any;
    gzip_types text/plain text/css text/json text/javascript
               application/javascript application/x-javascript application/json
               application/rss+xml application/vnd.ms-fontobject application/x-font-ttf
               application/xml font/opentype image/svg+xml text/xml;
    server_tokens off;

    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log logs/nginx/access.log l2met;
    error_log logs/nginx/error.log;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    # Must read the body in 5 seconds.
    client_body_timeout 5;

    upstream app_server {
        server unix:/tmp/nginx.socket fail_timeout=0;
    }

    server {
        listen <%= ENV["PORT"] %>;
        server_name _;
        keepalive_timeout 5;
        root /app/public; # path to your app

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app_server;
        }

        location /static/ {
            rewrite ^/static/?(.*)$ /$1 break;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://my-other-app.herokuapp.com;
        }
    }
}
There seems to be an incompatibility if there is a bin/ folder in an app that has this buildpack.
For reference:
Hey all,
Thanks for your work!
I'm having trouble getting gzip to work on Heroku. Here is my nginx config file, here is my web app deployed on Heroku (http://o.hackernews.im), and my test command: curl -I -v -H "Accept-Encoding: gzip,deflate" http://o.hackernews.im
The weird thing is that with the same config file and the same run command, I can get a gzipped response in my development environment, but when deployed on Heroku, no matter how hard I try, I just cannot get the response gzipped. Has anyone else had the same problem?
Hey Ryan,
I realize this may or may not be a buildpack issue but I figured I'd start here and at least get some ideas on how to debug it.
We have a Rails 4 app on Heroku that consists of several engines and a wrapper Rails app. If I remove the rails_12factor gem and set serve_static_assets = false in production.rb, nothing in the public folder will load; I simply get a Heroku-level 404. My assumption, which could definitely be wrong, is that if we're using Nginx we should turn off serving of static assets at the Rails level.
EDIT: I also forgot to mention that I've set up our Unicorn config and Procfile to match the README.
Our buildpacks file looks like:
https://github.com/ryandotsmith/nginx-buildpack.git#v0.6
https://github.com/heroku/heroku-buildpack-ruby.git
We're using https://github.com/ddollar/heroku-buildpack-multi.git as our Heroku buildpack config var.
I'm not sure if my meager logs are helpful. As an example I've got
2014-09-30T22:33:01.214157+00:00 heroku[router]: at=info method=GET path="/favicon.ico" host=staging.allovue.com request_id=59481393-53e6-4520-8bbe-12d9f7b4bf4c fwd="71.198.140.98" dyno=web.1 connect=0ms service=9ms status=404 bytes=1651
2014-09-30T22:33:02.063708+00:00 app[web.1]: measure#nginx.service=0.009 request_id=59481393-53e6-4520-8bbe-12d9f7b4bf4c
I've tried changing tagged versions of the buildpack, setting up a nginx.conf.erb with a defined root under system, and animal sacrifice. So far none have worked.
Thanks for your time creating and maintaining this buildpack. Any advice you've got is appreciated.
Hi guys,
We started running Nginx + Puma on Heroku, and we noticed that some dynos are being restarted automatically. I assume this is one of the buildpack's features, can you please explain when and what triggers this restart? Our dyno was performing well before it got restarted.
Thanks
By default only HTML will be gzipped. I imagine lots of Heroku customers are using this to deploy APIs. It would be great to have json, xml, and others enabled:
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml image/svg+xml;
We trap SIGCHLD signals in start-nginx so that if Unicorn, Logs, or NGINX should die, we crash the parent process. However, there is a race condition in which Unicorn touches the app-initialized file and then immediately dies.
This is unlikely and I have not run into the bug. However, I will leave it documented here.
Hi Ryan
As mentioned on Twitter, the Rails 3.2 app I'm running for flying-sphinx.com on Ruby 2.0 has seen a drop in performance since switching to this buildpack. In particular: the request queue times have jumped noticeably, and I'm also seeing 503's crop up regularly as well (mostly timeouts, but also some H13's / 'Connection closed without response').
My Procfile:
web: bin/start-nginx bundle exec unicorn -c config/unicorn.rb
worker: bundle exec sidekiq -c 5
My config/unicorn.rb file:
require 'fileutils'

preload_app true
worker_processes 2 # amount of unicorn workers to spin up
timeout 30         # restarts workers that hang for 30 seconds
listen '/tmp/nginx.socket'

before_fork do |server, worker|
  FileUtils.touch '/tmp/app-initialized'
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end
  defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end
  defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
end
Should you want to examine things from the Heroku side of the fence, the app's name is tolerant-thebes. I'm about to roll back to the pre-nginx setup though.