
nginx-buildpack's Introduction

Heroku Buildpack: NGINX

Nginx-buildpack vendors NGINX inside a dyno and connects NGINX to an app server via UNIX domain sockets.

Motivation

Some application servers (e.g. Ruby's Unicorn) block while doing network I/O. Heroku's Cedar routing stack buffers only the headers of inbound requests (the Cedar router will buffer the headers and body of a response up to 1MB), so the Heroku router keeps the dyno engaged for the entire body transfer from the client to the dyno. For application servers with blocking I/O, per-request latency therefore degrades with the content transfer. By putting NGINX in front of the application server, we can eliminate a great deal of that transfer time from the application server. In addition to making request body transfers more efficient, all other I/O should improve as well, since the application server only needs to communicate with a UNIX socket on localhost. Basically, web servers that are not designed for efficient, non-blocking I/O benefit from having NGINX handle all I/O operations.

Versions

  • Buildpack Version: 0.4
  • NGINX Version: 1.5.7

Requirements

  • Your webserver listens to the socket at /tmp/nginx.socket.
  • You touch /tmp/app-initialized when you are ready for traffic.
  • You can start your web server with a shell command.

Features

  • Unified NGINX/App Server logs.
  • L2met friendly NGINX log format.
  • Heroku request ids embedded in NGINX logs.
  • Crashes dyno if NGINX or App server crashes. Safety first.
  • Language/App Server agnostic.
  • Customizable NGINX config.
  • Application coordinated dyno starts.

Logging

NGINX will output the following style of logs:

measure.nginx.service=0.007 request_id=e2c79e86b3260b9c703756ec93f8a66d

You can correlate this id with your Heroku router logs:

at=info method=GET path=/ host=salty-earth-7125.herokuapp.com request_id=e2c79e86b3260b9c703756ec93f8a66d fwd="67.180.77.184" dyno=web.1 connect=1ms service=8ms status=200 bytes=21
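
For reference, this measurement comes from an NGINX log_format directive in the buildpack's config, roughly as sketched below (pieced together from the configs quoted in the issues further down; the exact measurement key and request-id header have varied between buildpack versions):

log_format l2met 'measure.nginx.service=$request_time request_id=$http_x_request_id';
access_log logs/nginx/access.log l2met;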

Language/App Server Agnostic

Nginx-buildpack provides a command named bin/start-nginx. This command takes another command as an argument: you must pass your app server's startup command to start-nginx.

For example, to get NGINX and Unicorn up and running:

$ cat Procfile
web: bin/start-nginx bundle exec unicorn -c config/unicorn.rb

Setting the Worker Processes

You can configure NGINX's worker_processes directive via the NGINX_WORKERS environment variable.

For example, to set your NGINX_WORKERS to 8 on a PX dyno:

$ heroku config:set NGINX_WORKERS=8
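
Under the hood, the default nginx.conf.erb reads this variable via ERB, along the lines of the snippet below (taken from the configs quoted in the issues further down; 4 is the fallback when NGINX_WORKERS is unset):

worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;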

Customizable NGINX Config

You can provide your own NGINX config by creating a file named nginx.conf.erb in the config directory of your app. Start by copying the buildpack's default config file.
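
A minimal sketch of that workflow, assuming you have a checkout of the buildpack next to your app and that its default template lives at config/nginx.conf.erb (paths are illustrative):

$ mkdir -p config
$ cp ../nginx-buildpack/config/nginx.conf.erb config/nginx.conf.erb
$ git add config/nginx.conf.erb
$ git commit -m 'Add custom NGINX config'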

Customizable NGINX Compile Options

See scripts/build_nginx.sh for the build steps. Configuring is as easy as changing the "./configure" options.
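
For example, adding a module is a matter of extending the ./configure invocation in scripts/build_nginx.sh. A sketch of what such an edit might look like (the option list in the actual script is longer, and the prefix shown here is illustrative):

./configure \
  --prefix=/tmp/nginx \
  --with-http_ssl_module \
  --with-http_gzip_static_module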

Application/Dyno coordination

The buildpack will not start NGINX until a file has been written to /tmp/app-initialized. Since NGINX binds to the dyno's $PORT, and binding to $PORT is what tells Heroku the dyno can receive traffic, you can delay NGINX from accepting traffic until your application is ready to handle it. The examples below show how and when to write the file when working with Unicorn.

Setup

Here are two setup examples: one for an existing app and one for a new app. Both use Ruby and Unicorn, but keep in mind that this buildpack is not Ruby specific.

Existing App

Update Buildpacks

$ heroku config:set BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
$ echo 'https://github.com/ryandotsmith/nginx-buildpack.git' >> .buildpacks
$ echo 'https://codon-buildpacks.s3.amazonaws.com/buildpacks/heroku/ruby.tgz' >> .buildpacks
$ git add .buildpacks
$ git commit -m 'Add multi-buildpack'

Update Procfile:

web: bin/start-nginx bundle exec unicorn -c config/unicorn.rb
$ git add Procfile
$ git commit -m 'Update procfile for NGINX buildpack'

Update Unicorn Config

require 'fileutils'
listen '/tmp/nginx.socket'
before_fork do |server,worker|
	FileUtils.touch('/tmp/app-initialized')
end
$ git add config/unicorn.rb
$ git commit -m 'Update unicorn config to listen on NGINX socket.'

Deploy Changes

$ git push heroku master

New App

$ mkdir myapp; cd myapp
$ git init

Gemfile

source 'https://rubygems.org'
gem 'unicorn'

config.ru

run Proc.new {[200,{'Content-Type' => 'text/plain'}, ["hello world"]]}

config/unicorn.rb

require 'fileutils'
preload_app true
timeout 5
worker_processes 4
listen '/tmp/nginx.socket', backlog: 1024

before_fork do |server,worker|
	FileUtils.touch('/tmp/app-initialized')
end

Install Gems

$ bundle install

Create Procfile

web: bin/start-nginx bundle exec unicorn -c config/unicorn.rb

Create & Push Heroku App:

$ heroku create --buildpack https://github.com/ddollar/heroku-buildpack-multi.git
$ echo 'https://codon-buildpacks.s3.amazonaws.com/buildpacks/heroku/ruby.tgz' >> .buildpacks
$ echo 'https://github.com/ryandotsmith/nginx-buildpack.git' >> .buildpacks
$ git add .
$ git commit -am "init"
$ git push heroku master
$ heroku logs -t

Visit App

$ heroku open

License

Copyright (c) 2013 Ryan R. Smith Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

nginx-buildpack's People

Contributors

a-warner · jcmuller · kanzure · kr · marcgg · orip · ryandotsmith · tt · venables · yingted


nginx-buildpack's Issues

How to constrain client connections?

I seriously hesitate to post this here, but figure others may have the same question, and it is so specific to this particular implementation on Heroku... Please just let me know if there's a better place to have this answered.

I agree with others, most will be using this with Unicorn. Those experienced with configuring that server have probably discovered and tuned the backlog parameter.

Example:

pipe = if (ENV['RACK_ENV'] == 'production' || ENV['RAILS_ENV'] == 'production')
  puts '=> Unicorn listening to NGINX socket for requests'
  '/tmp/nginx.socket'
else
  puts '=> Unicorn listening to TCP port for requests'
  ENV['PORT'] || 5000
end

# If the router has more than N=backlog requests for our
# workers, we want the queueing to show up at the router and have the
# chance to go to another dyno. In case our workers are dead or slow
# we don't want requests sitting in the unicorn backlog timing out.
# Also, on restart, we don't want more requests than the dyno can
# clear before exit timeout
listen pipe, :backlog => 36

The advantages are described in the comment, but in short, this most importantly lets the server/dyno signal back to the routing layer that it can't handle more.

Short version of question: What is the best way to maintain these benefits with NGINX in the stack?

Longer version:

With NGINX in as a reverse proxy, we lose much of this benefit from tuning backlog.

I assume events.worker_connections is the key config param here. From NGINX doc:

The worker_connections and worker_processes from the main section allows you to calculate max clients you can handle:

max clients = worker_processes * worker_connections

In a reverse proxy situation, max clients becomes

max clients = worker_processes * worker_connections/4

Since a browser opens 2 connections by default to a server and nginx uses the fds (file descriptors) from the same pool to connect to the upstream backend
  • Does this mean in the case of this buildpack's configuration that the NGINX 'backlog depth' = max number of clients is 1024 = ( 4 * ( 1024 / 4) ), or does this math change given the router layer's handling of connections to and from the client?
  • Do connections for which NGINX is buffering the request body or responses count against this worker_connections?
  • What happens when NGINX max clients is hit? (e.g. is the next request gracefully redistributed to another dyno?)
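
For context, the knobs mentioned above live in the config the buildpack renders. A hedged sketch of where they would sit in a custom nginx.conf.erb (values are illustrative; whether this reproduces Unicorn's backlog behaviour behind Heroku's router is exactly the open question here):

events {
  worker_connections 1024;            # per-worker connection cap discussed above
}

server {
  # backlog= caps the kernel listen queue, analogous to Unicorn's :backlog option
  listen <%= ENV["PORT"] %> backlog=1024;
}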

bin/nginx: no such file or directory

I'm getting this error when trying to start up:

-----> Fetching custom git buildpack... done
-----> nginx-buildpack app detected
cp: cannot stat `bin/nginx': No such file or directory

! Push rejected, failed to compile nginx-buildpack app

Any idea what could cause this?

proxy_pass to another heroku app

Hi, wondering if it's possible to use this setup to proxy certain paths to another Heroku app. For example, the following nginx.conf works locally, but fails when deployed to Heroku.

daemon off;
# Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;

events {
  use epoll;
  accept_mutex on;
  worker_connections 1024;
}

http {
  gzip on;
  gzip_comp_level 3;
  gzip_min_length 150;
  gzip_proxied any;
  gzip_types text/plain text/css text/json text/javascript
    application/javascript application/x-javascript application/json
    application/rss+xml application/vnd.ms-fontobject application/x-font-ttf
    application/xml font/opentype image/svg+xml text/xml;

  server_tokens off;

  log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
  access_log logs/nginx/access.log l2met;
  error_log logs/nginx/error.log;

  include mime.types;
  default_type application/octet-stream;
  sendfile on;

  # Must read the body in 5 seconds.
  client_body_timeout 5;

  upstream app_server {
    server unix:/tmp/nginx.socket fail_timeout=0;
  }

  server {
    listen <%= ENV["PORT"] %>;
    server_name _;
    keepalive_timeout 5;

    root /app/public; # path to your app

    location / {
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;
      proxy_pass http://app_server;
    }

    location /static/ {
      rewrite ^/static/?(.*)$ /$1 break;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_pass http://my-other-app.herokuapp.com;
    }
  }
}

Managing Nginx Memory Usage


I've got this running alongside Gunicorn and Django on a Heroku 2X dyno and I'm seeing a bunch of memory warnings and whatnot. I didn't change anything except adding the buildpack and the Procfile.

Is there any way to manage the nginx memory usage? Thanks

Does nginx actually close connections after client_body_timeout?

Or is it Heroku's router ignoring the socket close?

TL;DR - Does nginx close the socket after client_body_timeout, or does Heroku's router choose to ignore the connection close from the dyno side? And if so, how do I overcome it?

If the connection is idle for more than 5 seconds I get the expected output:
measure#nginx.service=5.006 request_id=xxxxxxx

But the router does not seem to consider this request served. I have no indication that the socket actually closes so that the router would report an error to the user; the router still waits for the request to complete after these 5 seconds.
If any byte is transmitted after that, the router does pick up the connection closed by the dyno, but that is not my scenario; I'm assuming that at some point the client stops sending the body altogether (say, a mobile client).
If the socket does close, how come the router ignores it, at least until it gets an additional data byte from the user? I have to wait for the standard 55 seconds before the router cuts off the idle connection, missing the whole point of my need for nginx.

nginx proxy_pass blocks the port

I am trying to host TensorBoard on a Heroku instance, and to secure it I have added nginx, via the nginx-buildpack, in front of it.
The idea is that TensorBoard will serve the app on port 6006, and Nginx will proxy this port to the external port provided by Heroku's $PORT.

When I start the app, I have the following error:

TensorBoard attempted to bind to port 6006, but it was already in use

My config files are as follows:

Procfile

web: bin/start-nginx tensorboard --logdir="/app/" --host=http://127.0.0.1 --port=6006

config/nginx.conf.erb

daemon off;
#Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;

events {
    use epoll;
    accept_mutex on;
    worker_connections 1024;
}
http {
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 512;

        server_tokens off;

        log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
        access_log logs/nginx/access.log l2met;
        error_log logs/nginx/error.log;

        include mime.types;
        default_type application/octet-stream;
        sendfile on;

        #Must read the body in 5 seconds.
        client_body_timeout 5;

        #upstream app_server {
        #	server unix:/tmp/nginx.socket fail_timeout=0;
        #}

        server {
	        listen <%= ENV["PORT"] %>;
	        server_name http://127.0.0.1;
	        keepalive_timeout 5;
	        root   /app;
	        port_in_redirect off;
        #index  index.html index.htm;

	    location = / {
		    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		    proxy_set_header Host $http_host;
		    proxy_redirect off;
		    proxy_pass http://127.0.0.1:6006;
	    }
    }
}

Provide better support for local development

I think this is a case of maintaining dev/prod parity. In production, the app server will be receiving requests from NGINX over a UNIX socket; it would be nice to have this work locally as well.

! Push rejected, failed to compile Multipack app

Heroku newbie here, trying to deploy NGINX to work with my Ruby/Unicorn application. I've carefully followed the setup steps for an existing app, yet receive the following error in response to git push heroku master:

Fetching repository, done.
Counting objects: 13, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (11/11), done.
Writing objects: 100% (11/11), 1.29 KiB, done.
Total 11 (delta 4), reused 0 (delta 0)

-----> Deleting 116 files matching .slugignore patterns.
-----> Fetching custom git buildpack... done
-----> Multipack app detected
=====> Downloading Buildpack: https://codon-buildpacks.s3.amazonaws.com/buildpacks/heroku/ruby.tgz

 !     Push rejected, failed to compile Multipack app

To git@heroku.com:my-repo.git
 ! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git@heroku.com:my-repo.git'

Any thoughts as to what is going wrong?

Changing ./configure options in build_nginx.sh doesn't do anything when compiling on heroku server.

I'm trying to tweak this buildpack so that it uses the nginx SSL module.

I've added the right line to the build_nginx.sh file, and I'm pointing my custom app at a fork of your buildpack with only that line changed.

However, I'm not seeing any ./configure changes reflected in my app. Running heroku run nginx -V shows that the exact same modules are running, even if I add or delete lines in build_nginx.sh

Is there an extra step to changing configure options beyond just changing this file?

h13 errors after 5 seconds

I've had to stop using nginx because I was getting a significant number of H13 errors after exactly 5 seconds of service on a request (according to the nginx log). Occasionally I would see an H13 after 10 or 15 seconds, but mostly it was after exactly 5. Cutting out nginx fixed the problem. I would really welcome any thoughts about why this might be happening. The only 5-second timeout I know about is the database pool connection timeout, but I don't see why adding nginx would affect that (and it is supposed to log a message in my app, although maybe it's just failing).

Novice question

Hi,

I can't make it work. It seems my node.js app (port 3000) is reached directly instead of going through nginx (port 3001).

1. Buildpacks:

$ heroku config:set BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
$ echo 'https://github.com/heroku/heroku-buildpack-nodejs.git' >> .buildpacks
$ echo 'https://github.com/Gionni/nginx-buildpack.git' >> .buildpacks   (a minimal fork of ryandotsmith's)

2. Procfile: web: bin/start-nginx ./node_modules/.bin/forever app.js

3. $ git add... / commit... / push heroku...

All goes OK, but nginx.conf, even if executed, seems to have no effect. My app pops out anyway.

http {
server {
listen <%= ENV["NGINX_PORT"] %>;
location / {
proxy_pass http://www.nytimes.com/;
}
}
}

Any suggestion for making nginx the front door rather than the app that should be hidden behind it?
Thanks, Johnny

custom nginx.config.erb not working

I am deploying a node.js web server using this buildpack on Heroku, and I have also configured our custom nginx.config.erb file in our config directory. But I found that when the nginx-buildpack runs, it just wipes out our custom config file and writes its default config into the file. It looks weird.
Am I missing something?

GZip Assets

I came across this buildpack mostly due to the fact that I wanted to gzip my assets. I figured nginx would be good for this (as well as the reasons listed in your README).

Anyway, I added the default nginx.conf.erb to my config folder. Then I attempted to add the following from the Asset Pipeline Guide to the server section, like this:

server {
  ...
  location ~ ^/(assets)/  {
    root /path/to/public;
    gzip_static on; # to serve pre-gzipped version
    expires max;
    add_header Cache-Control public;
  }
}

I get an application error. Here's the most important part from the logs:

nginx: [emerg] unknown directive "gzip_static" in ./config/nginx.conf

Of course, /path/to/public is just a placeholder, so, as a side note, I'm wondering if this will work:

root <%= Rails.root.join('public').to_s %>;

Moving forward, it seems that nginx needs to be configured with http_gzip_static_module. I'm not exactly sure how to do that or if it fits in the scope of this repo.
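
If that route were taken, the change would presumably live in scripts/build_nginx.sh, as described under "Customizable NGINX Compile Options" in the README above, e.g. adding the module flag to the ./configure call (a sketch, not a confirmed patch):

./configure --with-http_gzip_static_module    # added alongside the options the script already passes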

Does anyone have any experience with trying to gzip precompiled assets using nginx? Is there a better way? Am I on to something we could merge into this repo? This module seems to be configurable in other, less popular buildpacks I've seen around here on GitHub.

Also, this method is documented here using Passenger: http://dennisreimann.de/blog/configuring-nginx-for-the-asset-pipeline/

websockets over nginx

I've been using this package to proxy to a couple of different express servers without issue. However, I found that when I try to use a websockets server, it does not work.

I'm using the default config with a few modifications...

daemon off;
#Heroku dynos have 4 cores.
worker_processes 4;

events {
	use epoll;
	accept_mutex on;
	worker_connections 1024;
}

http {
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 512;

	log_format l2met 'measure.nginx.service=$request_time request_id=$http_heroku_request_id';
	access_log logs/nginx/access.log l2met;
	error_log logs/nginx/error.log;

	include mime.types;
	default_type application/octet-stream;
	sendfile on;

    map $http_upgrade $connection_upgrade {
        default upgrade;
    }

	server {
		listen <%= ENV["PORT"] %>;
		server_name _;
		keepalive_timeout 5;

		location /test {
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
			proxy_set_header Host $http_host;
			proxy_redirect off;
			proxy_pass http://127.0.0.1:3000;
		}
        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
  
            proxy_buffers 8 32k;
            proxy_buffer_size 64k;

            proxy_pass http://127.0.0.1:3001;
            proxy_redirect off;
            proxy_ignore_client_abort on;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
	}
}

I changed the part under default upgrade to not close on an empty string, since this was causing issues. However, I still get an error after doing that:

2017/02/20 06: 6 connect() failed (111: Connection refused) while connecting to upstream, client: 10.186.58.177, server: _, request:"GET /?encoding=text HTTP/1.1", upstream: "http://127.0.0.1:3001/?encoding=text", host: "www.mydomain.com"

My websocket server is listening on port 3001 as well, so I'm not sure what the issue is. Does this buildpack do anything different config wise, i.e. is there any reason why websockets shouldn't work over this buildpack?

Heroku premature routing

I started noticing peaks in request queuing times whenever we increased the number of dynos.
I believe it's due to the fact that nginx binds to the port as soon as it's up and so Heroku thinks the dyno is ready, when in fact unicorn (or whatever is behind nginx) is still booting.

failed to compile nginx-buildpack app

While invoking git push heroku master, I run into a problem with my configuration of the nginx-buildpack.

remote: -----> nginx-buildpack app detected
remote: cp: cannot stat 'bin/nginx-heroku-16': No such file or directory
remote:  !     Push rejected, failed to compile nginx-buildpack app.

I tried creating the bin/nginx-heroku-16 directory manually, then pushing changes to my repository, but the problem was not fixed. Here's a look at my root directory:

(screenshot omitted)

Here's a look at my app configuration on heroku:

(screenshot omitted)

I have no idea what is causing this. Please assist :)

Static files

I just have some static HTML files that I want to deploy with nginx. I don't understand how to start up nginx for this.

When does this buildpack crash the dyno?

Hi guys,

We started running Nginx + Puma on Heroku, and we noticed that some dynos are being restarted automatically. I assume this is one of the buildpack's features; can you please explain when this restart happens and what triggers it? Our dyno was performing well before it got restarted.

Thanks

Nginx isn't able to respond to requests.

Hey Ryan,

I realize this may or may not be a buildpack issue but I figured I'd start here and at least get some ideas on how to debug it.

We have a Rails 4 app on Heroku that consists of several engines and a wrapper Rails app. If I remove the heroku_12factor gem and set serve_static_assets = false in production.rb, nothing in the public folder will load. I simply get a Heroku-level 404. My assumption, which could definitely be wrong, is that if we're using Nginx we should turn off serving of static assets at the Rails level.

EDIT: I also forgot to mention I've set up our Unicorn config and Procfile to match the readme.

Our buildpacks file looks like:

https://github.com/ryandotsmith/nginx-buildpack.git#v0.6
https://github.com/heroku/heroku-buildpack-ruby.git

We're using https://github.com/ddollar/heroku-buildpack-multi.git as our Heroku buildpack config var.

I'm not sure if my meager logs are helpful. As an example I've got

2014-09-30T22:33:01.214157+00:00 heroku[router]: at=info method=GET path="/favicon.ico" host=staging.allovue.com request_id=59481393-53e6-4520-8bbe-12d9f7b4bf4c fwd="71.198.140.98" dyno=web.1 connect=0ms service=9ms status=404 bytes=1651

2014-09-30T22:33:02.063708+00:00 app[web.1]: measure#nginx.service=0.009 request_id=59481393-53e6-4520-8bbe-12d9f7b4bf4c

I've tried changing tagged versions of the buildpack, setting up a nginx.conf.erb with a defined root under system, and animal sacrifice. So far none have worked.

Thanks for your time creating and maintaining this buildpack. Any advice you've got is appreciated.

Bad Gateway on Heroku Restart

I'm running this (awesome) buildpack with Puma, and Heroku deployments seem to work perfectly with nginx waiting for puma to start before accepting requests. However I notice that when I do a "heroku restart", I end up getting Bad Gateway 502 errors for the time between nginx re-starting and puma re-starting (at least that's what I think is happening).

Could this be because the /tmp/app-initialized file has already been created from the first deploy, and is not deleted? Should I be deleting the file on restart to prevent this? Sorry if this is an obvious issue - I don't fully understand how heroku's "ephemeral file system" works.

Proxy Cache not working on heroku

I have a config which works on my local machine with a reverse proxy cache. It avoids hitting the box on every request.

I used the nginx buildpack with proxy cache, but on Heroku every request still hits the box; I can see heroku[router] entries in the logs every time.

Providing the config below for reference.

Sample config

daemon off; # Heroku dynos have at least 4 cores.

worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;

events {
  use epoll;
  accept_mutex on;
  worker_connections 1024;
}

http {
  gzip on;
  gzip_comp_level 2;
  gzip_min_length 512;

  server_tokens off;

  log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
  access_log logs/nginx/access.log l2met;
  error_log logs/nginx/error.log;

  include mime.types;
  default_type application/octet-stream;
  sendfile on;

  # Must read the body in 5 seconds.
  client_body_timeout 5;

  upstream app_server {
    server unix:/tmp/nginx.socket fail_timeout=0;
  }

  proxy_cache_path /tmp/nginx levels=1:2 keys_zone=STATIC:10m
                   inactive=24h max_size=1g;

  server {
    listen <%= ENV["PORT"] %>;
    server_name _;
    keepalive_timeout 5;

    location / {
      proxy_pass http://app_server;
    }

    location ~* ^/(.*)/(.*)/(.*).(jpe?g|png|[tg]iff?|svg) {
      proxy_pass             http://app_server;
      add_header Cache-Control public;
      add_header X-Cache-Status $upstream_cache_status;
      proxy_set_header       Host $host;
      proxy_cache            STATIC;
      proxy_cache_valid      200  1d;
      proxy_cache_use_stale  error timeout invalid_header updating
                             http_500 http_502 http_503 http_504;
    }
  }
}

Heroku Cedar 16 throwing new error on first build.

Did some searching on Google but didn't find much. Any ideas? Is it possible that the nginx upstream pkg hasn't been built for heroku-16 yet?

-----> nginx-buildpack app detected
cp: cannot stat 'bin/nginx-heroku-16': No such file or directory
 !     Push rejected, failed to compile nginx-buildpack app.
 !     Push failed

Log Rotation

Is there any fear of the nginx access.log and error.log files filling up the dyno's filesystem?

Push rejected

Push rejected, no Cedar-supported app detected

I followed the instructions given in the Readme.

I am using a Python / Gunicorn setup.

Is my Procfile correct?
web: bin/start-nginx newrelic-admin run-program gunicorn someapp:app

gzip additional types

By default only HTML will be gzipped. I imagine lots of Heroku customers are using this to deploy APIs. It would be great to have JSON, XML, and others enabled.

gzip_types text/plain text/css application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml image/svg+xml;

Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch.

Hi, I am trying to use this buildpack for my Rails project.

I've followed the instructions, but after I deploy, I keep getting an R10 error on Heroku:
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch.

I've tried to copy the nginx.config.erb file and put it under my config directory, but it didn't work.

Any idea as to why this is happening?

Thanks!

Performance has degraded

Hi Ryan

As mentioned on Twitter, the Rails 3.2 app I'm running for flying-sphinx.com on Ruby 2.0 has seen a drop in performance since switching to this buildpack. In particular: the request queue times have jumped noticeably, and I'm also seeing 503's crop up regularly as well (mostly timeouts, but also some H13's / 'Connection closed without response').

My Procfile:

web: bin/start-nginx bundle exec unicorn -c config/unicorn.rb
worker: bundle exec sidekiq -c 5

My config/unicorn.rb file:

require 'fileutils'

preload_app true
worker_processes 2 # amount of unicorn workers to spin up
timeout 30         # restarts workers that hang for 30 seconds

listen '/tmp/nginx.socket'

before_fork do |server, worker|
  FileUtils.touch '/tmp/app-initialized'

  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
end

Should you want to examine things from the Heroku side of the fence, the app's name is tolerant-thebes. I'm about to roll back to the pre-nginx setup though.

Race condition with Unicorn and SIGCHLD trap

We trap SIGCHLD signals in start-nginx so that if Unicorn, the log process, or NGINX should die, we crash the parent process. However, there is a race condition in which Unicorn touches the app-initialized file and then immediately dies.

This is unlikely and I have not run into the bug. However, I will leave it documented here.

Default to Heroku's Ruby buildpack

My guess is that most of the time people will be using this buildpack with RoR apps running Unicorn. So we should optimize the setup process for that case.

One large item we can remove from the setup process is the multi-buildpack step. It would be ideal if by default, this buildpack included Heroku's Ruby buildpack. Then, the .buildpacks file would be optional.

cc: @mattsoldo

allow an option to skip touching /tmp/app-initialized

I really like this buildpack, especially since it plays very nicely with multi, and the fact that I can provide my own nginx.conf.erb.

However, if I am using nginx to proxy to another server, it's almost impossible to touch /tmp/app-initialized.

Is it possible to add a command-line option to start-nginx that would skip over that particular check?

Thanks.

gzip doesn't work

Hey all,
Thanks for your work!

I'm having trouble getting gzip to work on heroku. Here is my nginx config file and here is my web app deployed on heroku http://o.hackernews.im and my test command curl -I -v -H "Accept-Encoding: gzip,deflate" http://o.hackernews.im

The weird thing is that with the same config file and same run command, I can get a gzipped response from my development environment, but when deployed on Heroku, no matter how hard I try, I just cannot get the response gzipped. Has anyone else run into the same problem?
