Rails SSL issue: (https://example.com) didn't match request.base_url (http://example.com)

Just ran into the same error. In config/environments/production.rb, make sure you have set:

config.force_ssl = true

While not strictly related to this issue: after enabling this setting, you will need to ensure that your reverse proxy (if you have one) forwards the protocol used to Rails by sending the X-Forwarded-Proto header from the proxy to Rails. How this is done depends on which reverse proxy you use (Apache, nginx, etc.) and how you have configured it, so it's best to look up the documentation for your specific reverse proxy.
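With nginx, for example, this is typically a single proxy_set_header directive. A minimal sketch (the server name and upstream name are placeholders, not from the question):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    location / {
        # Tell Rails the original request scheme so force_ssl does not
        # redirect-loop and request.base_url reflects https
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_pass http://rails_app;  # placeholder upstream
    }
}
```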

HTTP Origin header (https://myapp.com) didn't match request.base_url (http://myapp.com)

I've finally figured this out and users can now log in and log out. As suggested in the comments, the HTTP Origin header warning was the source of the issue, and the solution was to resolve it rather than anything to do with managing cookies or the cache (as I originally thought).

The warning WARN -- : HTTP Origin header (https://myapp.ie) didn't match request.base_url (http://myapp.ie) was resolved by adding proxy_set_header origin 'http://myapp.ie'; to the .conf file so that the nginx server is configured correctly.

The myapp.ie.conf file is below:

upstream docker {
    server web:3000 fail_timeout=0;
}

server {
    listen 443 ssl;
    server_name myapp.ie;
    ssl_certificate /etc/letsencrypt/live/myapp.ie/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.ie/privkey.pem;
    try_files $uri/index.html $uri @docker;
    client_max_body_size 4G;

    location @docker {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header origin 'http://myapp.ie';
        proxy_redirect off;
        proxy_pass http://docker;
    }
}

Source of solution here: https://github.com/heartcombo/devise/issues/4847

HTTP Origin header didn't match request.base_url

You need to add a header to the nginx configuration (there is another file with the server configuration, not nginx.conf). Here is an example:

server {
    listen 80;
    server_name server.com www.server.com;
    # some configuration here

    location @server {
        # ... some configuration here
        # this sets the proper header
        proxy_set_header Host www.my_actual_domain_name.com;
        # ... some configuration here
    }
}

Source

How to secure with SSL HTTPS a Nginx front-end and Puma back-end in Amazon EC2 AWS

You're going the wrong way.

In most cases you don't need to secure your backend app. Here's why:

  1. The traffic goes either over a local connection or an internal network, so it doesn't make sense to route it over a secure connection
  2. You'll hamper nginx <-> backend communication performance (it's negligible, but still)
  3. You'll run into all sorts of issues with additional internal certificates
  4. What's more, not all backend servers even support HTTPS processing, because it's simply not their job
  5. From a development perspective, your backend web app shouldn't care whether it's running over HTTP or HTTPS; these are environment concerns that should be completely decoupled from your app's logic

So the task boils down to configuring HTTPS in nginx and doing a proxy_pass (https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass) over an insecure connection to your HTTP-aware backend server.

The question is how to deal with the following:

  1. a web host name your backend server knows nothing about
  2. whether to generate http or https URLs
  3. what the real client's IP is, since your backend app would otherwise only see nginx's IP address

Here’s how it’s usually solved:

  1. The Host header is passed via proxy_set_header and picked up by the backend server
  2. The X-Forwarded-Proto header is passed and usually respected by backend servers
  3. The X-Forwarded-For header contains the original user's IP
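To make the mechanics concrete, here is a small pure-Ruby sketch of how a Rack-style backend would read those headers (the env keys follow Rack's CGI-style convention; the values are made up for illustration):

```ruby
# Simulated Rack env, as nginx would populate it via proxy_set_header
env = {
  'HTTP_HOST'              => 'example.com',  # from the Host header
  'HTTP_X_FORWARDED_PROTO' => 'https',        # original client scheme
  'HTTP_X_FORWARDED_FOR'   => '203.0.113.7',  # original client IP
}

# What the framework derives from them
scheme    = env['HTTP_X_FORWARDED_PROTO'] || 'http'
base_url  = "#{scheme}://#{env['HTTP_HOST']}"          # => "https://example.com"
client_ip = env['HTTP_X_FORWARDED_FOR'].split(',').first.strip
```

Without the X-Forwarded-* headers, the backend would compute base_url as http://... and see nginx's address as the client IP, which is exactly the mismatch the warnings above complain about.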

I googled such setups for Puma; this is very close to how it might eventually look (borrowed from https://gist.github.com/rkjha/d898e225266f6bbe75d8). The @myapp_puma section is of particular interest in your case:

upstream myapp_puma {
    server 127.0.0.1:8080 fail_timeout=0;
}

server {
    listen 443 ssl default_server;
    server_name example.com;
    root /home/username/example.com/current/public;
    ssl_certificate /home/username/.comodo_certs/example.com.crt;
    ssl_certificate_key /home/username/.comodo_certs/example.com.key;
    ssl_session_timeout 5m;
    # the original gist listed SSLv2/SSLv3/TLSv1, which are long broken;
    # prefer modern TLS versions
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    try_files $uri/index.html $uri @myapp_puma;

    location @myapp_puma {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass http://myapp_puma;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

Now, about your Angular app. It needs to know which URL to use for its API requests. This can be solved in numerous ways, but the main point is that NO URLs should be hard-coded in the code; you can use, e.g., environment variables instead. Here's one of the approaches: it uses an env.js file included before your Angular app, and the code then relies on the constants it defines. In your case, apiUrl should point at the desired HTTPS endpoint:

(function (window) {
  window.__env = window.__env || {};

  // API url
  window.__env.apiUrl = 'http://dev.your-api.com';

  // Base url
  window.__env.baseUrl = '/';

  // Whether or not to enable debug mode
  // Setting this to false will disable console output
  window.__env.enableDebug = true;
}(this));

Addition: since your backend appears to serve URLs without the /api prefix, you can use the following trick with nginx:

location /api/ {
    ...
    proxy_pass http://api.development/;
    ...
}

Notice the trailing slash after proxy_pass: in this case the location prefix is removed. From the nginx docs:

If the proxy_pass directive is specified with a URI, then when a
request is passed to the server, the part of a normalized request URI
matching the location is replaced by a URI specified in the directive
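Concretely, the difference looks like this (hypothetical URIs for illustration; the two location blocks are alternatives, not meant to coexist):

```nginx
# With a trailing slash, the matched /api/ prefix is replaced:
#   GET /api/users  ->  http://api.development/users
location /api/ {
    proxy_pass http://api.development/;
}

# Without it, the URI is passed through unchanged:
#   GET /api/users  ->  http://api.development/api/users
location /api/ {
    proxy_pass http://api.development;
}
```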

How do I refactor an API call to use HTTParty?

You could just replace Net::HTTP with HTTParty to get the benefits of the latter, or you could go the extra mile and have your Twitter model include HTTParty so that it responds with an ActiveRecord-like interface while issuing all of these API requests behind the scenes.

The decision really depends on your needs. Do you just need to issue a specific request to Twitter and display the results, or do you want to interact with Twitter more heavily and treat it as a model you can create, retrieve, delete, etc.?

Regardless of your choice, I believe the official README has all the information you might need (it even has a great example with StackExchange!).

SignatureDoesNotMatch $cordovaFileTransfer

It seems there is a conflict between the endpoint you get from AWS and the way you use $cordovaFileTransfer.

When you get the endpoint with s3.getSignedUrl on your server, only the query params of the response URL are encoded (the base URL remains non-encoded).

When you use $cordovaFileTransfer, that function encodes the params again; that's why you see %252F, which is a / that has been percent-encoded twice.

There are a couple of options:

  • Decode the params before calling $cordovaFileTransfer:

    var host = server.split('?')[0] + '?' + decodeURIComponent(server.split('?')[1]);
  • Use options.params as described in the documentation

The encodeURI: false option suggested in other comments didn't work for me.

Hope it helps.


