Comments (44)
Update: Nope, it's not nginx-related at all. I made a different socket file that nginx was not aware of.
from puma.
This is an old bug related to puma not removing the unix socket path when it shut down previously. It is fixed now.
I'm still having this issue! It's not deleting the socket file after bad crashes and then refuses to start instead of overriding it. This is not a safe default, as it will not start back up after a failure!
What kind of bad crash? I can put some code in to try and detect an unused unix socket, but didn't initially because it has some inherent problems.
For example if I have to kill -9 the process, it likely won't remove the file. I resolved it for now by running an rm command before I try to start the application. For some reason regular kill is ignored with 1.2.2 on our Ubuntu production box.. very strange. I may be able to get more information about it.
Sorry, I'm in the middle of a hackathon right now so I'm not providing very useful information :)
Well, yeah, if you kill -9 it, no time for cleanup. SIGTERM does a graceful shutdown, which means waiting for all requests to finish. Sounds like I at least need another sig that does a fast (but clean) shutdown.
- Evan // via iPhone
On Apr 29, 2012, at 4:19 PM, Kyle [email protected] wrote:
For example if I have to kill -9 the process, it likely won't remove the file. I resolved it for now by running an rm command before I try to start the application. For some reason regular kill is ignored with 1.2.2 on our Ubuntu production box.. very strange. I may be able to get more information about it.
Reply to this email directly or view it on GitHub:
#73 (comment)
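A "fast but clean" shutdown of the kind described above could be sketched as follows (a minimal illustration with assumed names, not puma's actual implementation; `install_shutdown_traps` and the `drain:` callback are hypothetical): both signal handlers share one cleanup routine that closes the listener and unlinks the socket file, so the path is freed even on the quick exit.

```ruby
require "socket"

# Sketch only (assumed helper, not puma's actual code): one cleanup routine
# shared by a graceful SIGTERM handler and a fast Ctrl-C handler, so the
# socket file is unlinked on both shutdown paths.
def install_shutdown_traps(server, sock_path, drain:)
  cleanup = lambda do |graceful|
    drain.call if graceful                 # graceful path: finish in-flight work
    server.close unless server.closed?
    File.unlink(sock_path) if File.exist?(sock_path)
  end
  Signal.trap("TERM") { cleanup.call(true);  exit }  # slow, graceful shutdown
  Signal.trap("INT")  { cleanup.call(false); exit }  # fast but still clean
  cleanup   # returned so callers can also invoke it directly
end
```

Closing a UNIXServer does not remove its file from the filesystem, which is why the explicit File.unlink matters here.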
I have an issue similar to this when I start puma and Ctrl-C it to stop. Then when I try to restart, it refuses:
IOError: bind failed: Address already in use
Are you having that problem now? After you Ctrl-C, is there another ruby process still listening on that port? Check "netstat -tln" to find out.
I'm trying to listen with a unix socket, fwiw. And now I'm not even sure how I got it working earlier, now I can't seem to get puma started at all. It always insists that the address is already in use. I'm trying to start with:
RAILS_ENV=production bundle exec puma -b 'unix:///tmp/.sock' -S /some/path/puma.state --control 'unix:///tmp/.sock'
I tried running netstat -x and I get:
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
unix 4 [ ] DGRAM 7425 /dev/log
unix 3 [ ] STREAM CONNECTED 8164
unix 3 [ ] STREAM CONNECTED 8163
unix 2 [ ] DGRAM 7990
unix 3 [ ] STREAM CONNECTED 7632 /var/run/dbus/system_bus_socket
unix 3 [ ] STREAM CONNECTED 7631
unix 2 [ ] DGRAM 7543
unix 3 [ ] STREAM CONNECTED 7404 /var/run/dbus/system_bus_socket
unix 3 [ ] STREAM CONNECTED 7403
unix 3 [ ] STREAM CONNECTED 7389
unix 3 [ ] STREAM CONNECTED 7388
unix 3 [ ] STREAM CONNECTED 6587 @/com/ubuntu/upstart
unix 3 [ ] STREAM CONNECTED 6583
unix 3 [ ] DGRAM 6261
unix 3 [ ] DGRAM 6260
unix 3 [ ] STREAM CONNECTED 6200 @/com/ubuntu/upstart
unix 3 [ ] STREAM CONNECTED 6197
I've restarted my box and tried changing from /tmp/.sock to /tmp/puma.sock but it's still telling me Address already in use now. I have no idea how I got it to start up earlier today but not now.
Are you actually binding the main server and the control to the same path? That won't work at all; they must be 2 separate paths. If you change that, what happens?
D'oh, that was really stupid. So yeah, puma starts up just fine now. If I Ctrl-C it and then restart it, I get the bind failed error though. If I remove /tmp/.sock then restart it then it seems to start just fine.
Which server is using /tmp/.sock? The main one? Try not turning on the control server and see what happens.
I actually still encounter quite a few cases where the socket file sticks around, so puma can't be started without removing it by hand first.
I checked how unicorn handles this: it checks whether the socket file is actually still in use; if it isn't, it doesn't matter that the file exists.
I think this is how puma should handle it as well.
Would that be possible?
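The unicorn-style check described above can be sketched like this (a minimal illustration, not unicorn's or puma's actual code; `stale_socket?` is a hypothetical helper): attempt to connect to the path, and treat a refused connection as proof that the file is stale and safe to unlink.

```ruby
require "socket"

# Hypothetical helper: true if `path` is a socket file that nothing is
# listening on anymore (so it is safe to delete and rebind).
def stale_socket?(path)
  return false unless File.socket?(path)
  begin
    UNIXSocket.new(path).close
    false                          # a server accepted: the socket is live
  rescue Errno::ECONNREFUSED, Errno::ENOENT
    true                           # nobody listening: leftover from a crash
  end
end

# A server could then do this before binding:
# File.unlink(path) if stale_socket?(path)
```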
What are the cases where it will still be around?
--Evan Phoenix // [email protected]
On Friday, November 16, 2012 at 5:33 AM, Jan wrote:
I actually still encounter quite a few cases where the socket file sticks around, so puma can't be started without removing it by hand first.
I checked how unicorn handles this: it checks whether the socket file is actually still in use; if it isn't, it doesn't matter that the file exists.
I think this is how puma should handle it as well.
Would that be possible?
—
Reply to this email directly or view it on GitHub (#73 (comment)).
I just realized that this only happens with JRuby. On MRI I indeed don't have a problem at all.
With JRuby I just have to start puma, let it bind on a unix socket, and exit out with Ctrl-C.
On MRI you will see `- Gracefully stopping, waiting for requests to finish`
On JRuby it just exits, killing the process and leaving the socket behind.
(JRuby 1.7.0 is the version I use)
Oh interesting. I'll check out why JRuby isn't shutting down cleanly.
My guess is there is a connection to this jruby issue: http://jira.codehaus.org/browse/JRUBY-4637
Fixing this probably isn't that easy because the JVM handles Ctrl-C?
Maybe @headius knows.
Anyway, I still think none of this would be a problem if an unused .sock file didn't cause "socket already in use"; puma should first check whether the file is actually in use.
I'm having this problem with MRI 1.9.3-p374, OSX 10.8.2
environment "development"
bind "tcp://0.0.0.0:5000"
bind "unix:///tmp/puma.sock"
pidfile "/tmp/puma.pid"
# daemonize true
workers 2
threads 1, 16
rackup "config.ru"
activate_control_app "tcp://127.0.0.1:9293", { auth_token: "foo" }
Starting puma the first time is fine, CTRL+C, starting again I get:
^C[97402] - Gracefully shutting down workers...
[97402] - Goodbye!
$ bundle exec puma -C config/puma/development.rb
[97520] Puma 2.0.0.b6 starting in cluster mode...
[97520] * Process workers: 2
[97520] * Min threads: 1, max threads: 16
[97520] * Environment: development
[97520] * Listening on tcp://0.0.0.0:5000
[97520] * Listening on unix:///tmp/puma.sock
/Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bundler/gems/puma-2e2440024623/lib/puma/binder.rb:234:in `initialize': Address already in use - /tmp/puma.sock (Errno::EADDRINUSE)
from /Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bundler/gems/puma-2e2440024623/lib/puma/binder.rb:234:in `new'
from /Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bundler/gems/puma-2e2440024623/lib/puma/binder.rb:234:in `add_unix_listener'
from /Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bundler/gems/puma-2e2440024623/lib/puma/binder.rb:96:in `block in parse'
from /Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bundler/gems/puma-2e2440024623/lib/puma/binder.rb:64:in `each'
from /Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bundler/gems/puma-2e2440024623/lib/puma/binder.rb:64:in `parse'
from /Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bundler/gems/puma-2e2440024623/lib/puma/cli.rb:637:in `run_cluster'
from /Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bundler/gems/puma-2e2440024623/lib/puma/cli.rb:391:in `run'
from /Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bundler/gems/puma-2e2440024623/bin/puma:10:in `<top (required)>'
from /Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bin/puma:19:in `load'
from /Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bin/puma:19:in `<main>'
from /Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bin/ruby_noexec_wrapper:14:in `eval'
from /Users/kain/.rvm/gems/ruby-1.9.3-p374@mygemset/bin/ruby_noexec_wrapper:14:in `<main>'
Using master.
Since someone's having an issue with MRI as well, perhaps this is simply the closed connection taking a while to go away? Or is something getting spun up that doesn't die right away?
Ctrl-C on JVM would normally do a forced shutdown of the JVM, but it's possible through normal Ruby signal APIs to rebind it. I'm not sure what Puma does, but Mongrel did bind it to a shutdown sequence.
I randomly have the socket file stick around after the server crashes on MRI 1.9.3p327 on Ubuntu 12.04.
I'm having the same issue: OS X 10.8, MRI 1.9.3-p385. Running 2.0.0.b6
Happens to me on debian 7, MRI 1.9.3-p392 running 2.0.0b7.
Same issue here. Ubuntu 10.04, Ruby 2.0.0-p0, puma 2.0.0b7.
I am running the app using your own jungle/upstart files and managing restarts with puma/capistrano.rb.
Capistrano automatically runs puma:restart, which sometimes results in the puma.sock and puma.state files not being deleted.
Happens to me too, albeit randomly. Ruby 2.0.0-p0, puma 2.0.0b7 on current Gentoo, Rails 4 app deployed via Capistrano. The socket file sticks around after restart (implemented as pumactl -S puma.state restart) and prevents a new instance of the server from starting.
Error msg: /shared/bundle/ruby/2.0.0/gems/puma-2.0.0.b7/lib/puma/binder.rb:234:in `initialize': Address already in use - "/tmp/puma-production.sock" (Errno::EADDRINUSE)
Same for me, with puma 2.0.0.b7, Ruby 1.9.3-p125, Rails 3.2.11. The socket is not removed after stopping the server (with kill -9 [pid]) and sometimes after a restart (kill -s USR2 [pid]).
Same here using jungle. Removing the puma tmp files then stopping/starting puma (and in my case force-reloading nginx) ensured that everything works again after redeploying the application.
This happens to me, too, intermittently. I use cap deploy and have require 'puma/capistrano' in my deploy.rb, so puma is automatically restarted by bundle exec pumactl -S /path/to/app/shared/sockets/puma.state restart. This works 4 out of 5 times. When it does not, puma is stopped but not started, and the socket file is not removed.
Hi. This happens to me too. The socket never gets removed. In OSX and Ubuntu 12.10.
To start, I use:
bin/puma -C config/puma.rb -S tmp/sockets/puma.state (configuration is fine)
I stop with:
bin/pumactl -S tmp/sockets/puma.state stop
When I try starting again, it fails with 'Address already in use' (Errno::EADDRINUSE).
'restart' always works for me.
Regards and thanks !
I know it's not optimal, but the best solution I've found so far is to listen on a port instead of the socket file. I've not been able to track down a consistent pattern of when it happens. It does seem that it's about 50% of the time.
As I understand it, after executing "pumactl ... stop" the .pid, .sock, and .state files should go away, but they don't.
[root@redmine rm2.3.0]# puma -C ./config/puma.rb
[6292] Puma 2.0.1 starting in cluster mode...
[6292] * Process workers: 1
[6292] * Min threads: 0, max threads: 16
[6292] * Environment: production
[6292] * Listening on unix://./tmp/puma/puma.sock
[root@redmine rm2.3.0]#
[root@redmine puma]# ls -la
total 16
drwxr-xr-x  2 nginx root 4096 Jun 19 09:25 .
drwxr-xr-x 11 nginx root 4096 Jun 18 16:07 ..
-rw-r--r--  1 root  root    5 Jun 19 09:25 puma.pid
srwxrwxrwx  1 root  root    0 Jun 19 09:25 puma.sock
-rw-r--r--  1 root  root  613 Jun 19 09:25 puma.state
[root@redmine puma]# puma -V
puma version 2.0.1
[root@redmine puma]# ps ax | grep puma
 6297 ?        Sl     0:00 /usr/local/rvm/gems/ruby-1.9.3-p327/bin/puma
 6300 ?        Sl     0:03 puma: cluster worker: 6297
 6341 pts/1    S+     0:00 grep puma
[root@redmine rm2.3.0]# pumactl -P ./tmp/puma/puma.pid status
Puma is started
[root@redmine rm2.3.0]# pumactl -P ./tmp/puma/puma.pid stop
Command stop sent success
[root@redmine rm2.3.0]# pumactl -P ./tmp/puma/puma.pid status
No pid '6297' found
[root@redmine puma]# ls -la
total 16
drwxr-xr-x  2 nginx root 4096 Jun 19 09:25 .
drwxr-xr-x 11 nginx root 4096 Jun 18 16:07 ..
-rw-r--r--  1 root  root    5 Jun 19 09:25 puma.pid
srwxrwxrwx  1 root  root    0 Jun 19 09:25 puma.sock
-rw-r--r--  1 root  root  613 Jun 19 09:25 puma.state
[root@redmine rm2.3.0]# puma -C ./config/puma.rb
[6426] Puma 2.0.1 starting in cluster mode...
[6426] * Process workers: 1
[6426] * Min threads: 0, max threads: 16
[6426] * Environment: production
[6426] * Listening on unix://./tmp/puma/puma.sock
/usr/local/rvm/gems/ruby-1.9.3-p327/gems/puma-2.0.1/lib/puma/binder.rb:235:in `initialize': Address already in use - ./tmp/puma/puma.sock (Errno::EADDRINUSE)
from /usr/local/rvm/gems/ruby-1.9.3-p327/gems/puma-2.0.1/lib/puma/binder.rb:235:in `new'
from /usr/local/rvm/gems/ruby-1.9.3-p327/gems/puma-2.0.1/lib/puma/binder.rb:235:in `add_unix_listener'
from /usr/local/rvm/gems/ruby-1.9.3-p327/gems/puma-2.0.1/lib/puma/binder.rb:96:in `block in parse'
from /usr/local/rvm/gems/ruby-1.9.3-p327/gems/puma-2.0.1/lib/puma/binder.rb:64:in `each'
from /usr/local/rvm/gems/ruby-1.9.3-p327/gems/puma-2.0.1/lib/puma/binder.rb:64:in `parse'
from /usr/local/rvm/gems/ruby-1.9.3-p327/gems/puma-2.0.1/lib/puma/cli.rb:652:in `run_cluster'
from /usr/local/rvm/gems/ruby-1.9.3-p327/gems/puma-2.0.1/lib/puma/cli.rb:406:in `run'
from /usr/local/rvm/gems/ruby-1.9.3-p327/gems/puma-2.0.1/bin/puma:10:in `<top (required)>'
from /usr/local/rvm/gems/ruby-1.9.3-p327/bin/puma:19:in `load'
from /usr/local/rvm/gems/ruby-1.9.3-p327/bin/puma:19:in `<main>'
from /usr/local/rvm/gems/ruby-1.9.3-p327/bin/ruby_noexec_wrapper:14:in `eval'
from /usr/local/rvm/gems/ruby-1.9.3-p327/bin/ruby_noexec_wrapper:14:in `<main>'
[root@redmine rm2.3.0]#
Long-standing bug. I noticed that Rainbows! cleans up the previous socket file upon starting; perhaps that's better than trying to clean it up at shutdown?
Until puma is fixed, the upstart script could clean the sockets on shutdown or check if the process is still running on startup and clean them if it's not. The latter solution is what I implemented as a stopgap measure.
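The startup-time stopgap described above might look roughly like this (my own sketch with assumed file names, not code from puma or the jungle scripts): read the pidfile, probe the pid with signal 0, and delete the socket file only if the old process is gone.

```ruby
# Stopgap sketch (not part of puma): before starting the server, delete a
# leftover socket file, but only if the pid recorded in the pidfile is dead.
def remove_stale_socket(pidfile, sockfile)
  return unless File.exist?(sockfile)
  pid = File.exist?(pidfile) ? File.read(pidfile).to_i : 0
  alive =
    if pid > 0
      begin
        Process.kill(0, pid)   # signal 0 = existence check, sends nothing
        true
      rescue Errno::ESRCH
        false                  # no such process: the socket is stale
      rescue Errno::EPERM
        true                   # process exists but belongs to another user
      end
    else
      false
    end
  File.unlink(sockfile) unless alive
end
```

The signal-0 probe is a standard POSIX idiom for checking whether a pid is still alive without affecting the process.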
I did the same thing as @mcmoyer: switched to using ports. Even adding correct cleanup code occasionally gave me problems. I'd rather take the perf hit than have to deal with the sockets.
Same issue. How can I help?
This should be fixed in 2.3.0. Puma now cleans up stale unix socket files.
Just tested on OSX and Ubuntu 12.10 (with puma 2.3.1).
Works great. Thanks a lot!
I have this issue in 2.3.2.
Issue is also present in 2.4.0. JRuby 1.7.4 if it matters.
I'm having the same problem with the same versions of Puma and JRuby as @cmer mentioned.
I am having the same problem with Puma 2.5 and JRuby in 1.7.4
Same here as @adamhunter, Puma 2.5.1 and JRuby 1.7.4 running on Debian 7.0.
Would love to see this reopened and fixed.
Please open a new issue with your details and backtrace if you still see this.