Puma and Nginx 502 Bad Gateway Error (Ubuntu Server 14.04)

502 Bad Gateway - Rails App, Puma, Capistrano, Nginx

There are two Puma config files; in the first one you've specified a port instead of a socket. Does that affect the production environment?

port        ENV.fetch("PORT") { 3000 }
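
For production behind Nginx you usually want Puma bound to the unix socket that Nginx proxies to, rather than a TCP port. A minimal sketch for config/puma.rb, assuming the shared socket path used later in this answer:

# Bind Puma to a unix socket instead of a TCP port
bind "unix:///var/www/blog/shared/tmp/sockets/puma.sock"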

Please double-check that the socket file, as well as the Puma process, exists after a deploy.

ps ax | grep puma
ls -la /var/www/blog/shared/tmp/sockets/puma.sock

If so, have a look at Puma's and the application's log files.

Update after the discussion in the comments below:

It turned out that there was no Puma process after a deployment, and Puma's log files were empty too. So we decided to try running it manually on the server by going to the application's root path /var/www/blog/current and executing

bundle exec puma -b /var/www/blog/shared/tmp/sockets/puma.sock

The result was a permission error printed to STDOUT. The problem disappeared after we fixed the log and pid file locations in /var/www/blog/shared/config/puma.rb as follows:

stdout_redirect "/var/www/shared/logs/puma.stdout.log", "/var/www/shared/logs/puma.stderr.log", true

pidfile "/var/www/blog/shared/tmp/pids/puma.pid"
state_path "/var/www/blog/shared/tmp/pids/puma.state"
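
If you deploy with Capistrano 3 (which the question title suggests), also make sure those shared directories are created and symlinked into each release. A sketch for config/deploy.rb, using Capistrano's standard linked_dirs mechanism:

# Keep logs, pids and sockets in the shared path across releases
append :linked_dirs, "log", "tmp/pids", "tmp/sockets"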

Rails 5 app with Puma and Nginx - 111: Connection refused while connecting to upstream, client

I scrapped the AWS EC2 instance and recreated it using an Ubuntu 14.04 image, which I upgraded to 16.04. I strictly followed the guidance found here:

http://codepany.com/blog/rails-5-puma-capistrano-nginx-jungle-upstart/
and related links from the same blog.

Now Nginx and Puma are working properly together and my app runs perfectly here:

http://ec2-54-159-156-217.compute-1.amazonaws.com/
The only difference from the guidelines is that I kept the AWS RDS instance for the database. I used RVM on the production server although I am using rbenv on my Mac. I used the ubuntu user (like root) for deployment, since I suspected all the troubles I had were related to permissions and I did not know how to fix them.

Many of the errors I encountered earlier, while trying to start Puma properly on a socket and make it work with Nginx, especially Puma not restarting after

cap production deploy
were related to generating the secret and putting its value in the appropriate file. For me it worked best to write it in the /etc/environment file.
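
As a quick sanity check that the secret from /etc/environment is actually being picked up, you can run a small script like this on the server as the deploy user (the file name check_secret.rb is just an example; this assumes the Rails 5 default config/secrets.yml, which reads the production secret from ENV["SECRET_KEY_BASE"]):

# check_secret.rb (run on the server as the deploy user: ruby check_secret.rb)
abort "SECRET_KEY_BASE is not set in this environment" if ENV["SECRET_KEY_BASE"].to_s.empty?
puts "SECRET_KEY_BASE is set (#{ENV['SECRET_KEY_BASE'].length} characters)"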

I also made changes to /etc/ssh/sshd_config in order to get root or ubuntu access through SSH. In this matter, this link

https://forums.aws.amazon.com/thread.jspa?threadID=86876
was very useful.

Nginx, Puma, Ubuntu 20.04 error 111: Connection refused

I think you have too many / in the upstream configuration entry; it's trying to connect as if it were an HTTP URL.

Try changing:

upstream myapp {
server unix:///home/deploy/app/tmp/pids/puma.sock;
}

to

upstream myapp {
server unix:/home/deploy/app/tmp/pids/puma.sock;
}

Source: nginx ngx_http_upstream_module documentation

502 bad gateway and codeigniter/nginx/apache. Code or server issue?

I would recommend checking the Apache server status right after the failure (service httpd status). There aren't many details to go on, but one likely scenario is that Apache fails on the Nth request, so Nginx cannot forward the request to Apache and returns "502 Bad Gateway" to you.

The Apache failure may be either a programming or a misconfiguration issue, e.g. it takes too much RAM on the Nth request (my understanding is that you keep some data between requests) and is then killed by the virtualization engine (in case you have a VPS rather than a dedicated hardware server). Just a hypothesis so far, but I have experienced cases like that before.

502 Bad gateway error for rails production environment?

The problem was that I had configured my Unicorn server with

timeout 5

but the response from the service call was taking longer than that, so I increased the timeout:

timeout 15

Now it is working.
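
For context, the timeout directive belongs in the Unicorn config file, commonly config/unicorn.rb. A minimal sketch, with placeholder paths to adjust for your deployment:

# config/unicorn.rb (sketch; adjust paths and worker count to your setup)
worker_processes 2
listen "/tmp/unicorn.sock", backlog: 64  # the socket (or port) Nginx proxies to
timeout 15                               # kill workers that take longer than 15 seconds
pid "tmp/pids/unicorn.pid"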

Strange issue with unicorn and nginx caused 502 error

Looks like your unicorn server wasn't running when nginx tried to access it.

This can be caused by a VPS restart, an exception in the Unicorn process, or the Unicorn process being killed due to low free memory (IMHO a VPS restart is the most likely reason).
Check Unicorn with

ps aux | grep unicorn

You can also check the server uptime with

uptime

Then you can:

  • add a script that starts Unicorn on VPS boot
  • add it as a service
  • run a monitoring process such as monit (a rough sketch of that kind of check follows below)
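
For the monitoring option, here is a hypothetical Ruby sketch of the kind of check a tool like monit performs; the pid file path and the restart command are placeholders for your own setup:

# check_unicorn.rb - restart Unicorn if the process from its pid file is gone
pid_file = "/path/to/app/shared/tmp/pids/unicorn.pid"  # adjust to your deployment
begin
  pid = Integer(File.read(pid_file).strip)
  Process.kill(0, pid)  # signal 0 only checks that the process exists
  puts "unicorn (pid #{pid}) is running"
rescue Errno::ENOENT, Errno::ESRCH, ArgumentError
  warn "unicorn is not running, starting it"
  system("/etc/init.d/unicorn_myapp start")  # replace with your service's start command
end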

