How to Serve Requests Concurrently with Rails 4

How can I serve requests concurrently with Rails 4?

I invite you to read about the configuration options of config.threadsafe! in the article Removing config.threadsafe!
It will help you better understand the options of config.threadsafe!, in particular the one that allows concurrency.

In Rails 4, config.threadsafe! is enabled by default.


Now to the answer

In Rails 4, requests are wrapped in a mutex by the Rack::Lock middleware in the development environment by default.

If you want to enable concurrency, you can set config.allow_concurrency = true. This disables the Rack::Lock middleware. I would not delete the middleware outright, as suggested in another answer to your question; that looks like a hack to me.
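
You can check whether Rack::Lock is in the middleware stack for a given environment with the built-in rake task (output trimmed here; the exact list depends on your app):

RAILS_ENV=development bundle exec rake middleware
...
use Rack::Lock
...

With config.allow_concurrency = true, the use Rack::Lock line should disappear from that list.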

Note: If you have config.cache_classes = true, then an assignment to config.allow_concurrency (the Rack::Lock request mutex) won't take effect; concurrent requests are allowed by default. If you have config.cache_classes = false, then you can set config.allow_concurrency to either true or false. In the development environment you would want:

config.cache_classes = false
config.allow_concurrency = true

The statement "if config.cache_classes = false (which it is by default in the dev environment) we can't have concurrent requests" is therefore not correct.
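
For illustration, a development environment file along these lines enables concurrent request handling while keeping code reloading (a minimal sketch, using the Rails 4.1+ Rails.application.configure style):

# config/environments/development.rb
Rails.application.configure do
  config.cache_classes = false
  config.allow_concurrency = true
end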

Appendix

You can refer to this answer, which sets up an experiment testing concurrency using MRI and JRuby. The results are surprising. MRI was faster than JRuby.

The experiment with MRI concurrency is on GitHub.
The experiment only tests concurrent requests; there are no race conditions in the controller. However, I think it would not be too difficult to adapt the example from the article above to test race conditions in a controller, as sketched below.
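
Such a race-condition endpoint could look like this minimal sketch (a hypothetical controller with an unsynchronized read-modify-write on class-level state; concurrent requests can lose increments):

class CounterController < ApplicationController
  @@counter = 0 # shared across all threads in this process

  def increment
    current = @@counter
    sleep 0.01 # widen the race window so lost updates become visible
    @@counter = current + 1
    render json: { counter: @@counter }
  end
end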

Concurrent requests with MRI Ruby

I invite you to read Jesse Storimer's series Nobody Understands the GIL.
It might help you better understand some MRI internals.

I have also found Pragmatic Concurrency with Ruby, which is an interesting read. It has some examples of testing code concurrently.

EDIT:
In addition, I can recommend the article Removing config.threadsafe!
It might not be relevant for Rails 4 anymore, but it explains the configuration options, one of which you can use to allow concurrency.


Let's discuss the answer to your question.

You can have several threads (using MRI), even with Puma. The GIL ensures that only one thread is active at a time; that is the constraint developers call restrictive (because there is no real parallel execution). Bear in mind that the GIL does not guarantee thread safety.
This does not mean the other threads are not running; they are waiting for their turn. They can interleave (the articles above can help you understand this better).
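
A tiny plain-Ruby illustration of this interleaving: ten threads that each sleep for one second finish in roughly one second in total, because a sleeping (I/O-waiting) thread releases the GIL:

started = Time.now
threads = 10.times.map do
  Thread.new { sleep 1 } # I/O-style wait; the GIL is released here
end
threads.each(&:join)
puts "elapsed: #{Time.now - started}s" # ~1s, not ~10s

A CPU-bound loop in each thread would not see this speedup on MRI, which is exactly the GIL constraint described above.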

Let me clear up two terms: worker process and thread.
A process runs in a separate memory space and can host several threads.
Threads of the same process run in a shared memory space, namely that of their process. By threads we mean Ruby threads in this context, not CPU threads.

Regarding your question's configuration and the GitHub repo you shared, I think an appropriate configuration (I used Puma) is to set up 4 workers and 1 to 40 threads. The idea is that one worker serves one tab, and each tab sends up to 10 requests.

So let's get started:

I work on Ubuntu in a virtual machine, so I first enabled 4 cores in my virtual machine's settings (and a few other settings I thought might help).
I could verify the core count on my machine (see the lscpu output below), so I went with that.

Linux command: lscpu

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 69
Stepping:              1
CPU MHz:               2306.141
BogoMIPS:              4612.28
L1d cache:             32K
L1i cache:             32K
L2 cache:              6144K
NUMA node0 CPU(s):     0-3

I used the GitHub project you shared and modified it slightly. I created a Puma configuration file named puma.rb (placed in the config directory) with the following content:

workers Integer(ENV['WEB_CONCURRENCY'] || 1)
threads_count = Integer(ENV['MAX_THREADS'] || 1)
threads 1, threads_count

preload_app!

rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  # ActiveRecord::Base.establish_connection
end

With this configuration, Puma starts with 1 worker and 1 thread by default. You can use environment variables to modify those parameters, which I did:

export MAX_THREADS=40
export WEB_CONCURRENCY=4

To start Puma with this configuration I typed

bundle exec puma -C config/puma.rb

in the Rails app directory.

I opened the browser with four tabs to call the app's URL.

The first request started around 15:45:05 and the last request finished around 15:49:44. That is an elapsed time of 4 minutes and 39 seconds.
You can also see the request IDs in unsorted order in the log file (see below).

Each API call in the GitHub project sleeps for 15 seconds. We have 4 tabs, each with 10 API calls. That makes a maximum elapsed time of 4 × 10 × 15 = 600 seconds, i.e. 10 minutes (in a strictly serial mode).

The theoretical ideal would be everything in parallel and an elapsed time not far from 15 seconds, but I did not expect that at all.
I was not sure what to expect exactly, but I was still positively surprised (considering that I ran on a virtual machine and MRI is restrained by the GIL, among other factors). The elapsed time of this test was less than half the maximum (strictly serial) elapsed time.

EDIT: I read further about Rack::Lock, which wraps a mutex around each request (third article above). I found the option
config.allow_concurrency = true to be a real time saver. A small caveat
was that the connection pool had to be increased accordingly (even though
the requests do not query the database); the maximum number of threads is
a good default: 40 in this case.
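
For reference, the pool size is configured in config/database.yml; here is a sketch that ties it to the same MAX_THREADS variable used for Puma (the sqlite3 adapter is an assumption about the demo app):

development:
  adapter: sqlite3
  database: db/development.sqlite3
  pool: <%= ENV['MAX_THREADS'] || 40 %>
  timeout: 5000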

I tested the app with JRuby, and the actual elapsed time was 2 minutes,
with allow_concurrency = true.

I tested the app with MRI, and the actual elapsed time was 1 minute 47 seconds,
with allow_concurrency = true. This was a big surprise, because I expected MRI to be slower than JRuby. It was not. It makes me question the widespread discussion about the speed differences between MRI and JRuby.

Watching the different tabs, the responses arrive in a "more random" order now. It happens that tab 3 or 4 completes before tab 1, which I requested first.

Because you don't have race conditions, the test seems to be OK. However, I am
not sure about the application-wide consequences of setting
config.allow_concurrency = true in a real-world application.

Feel free to check it out and let me know any feedback you readers might have.
I still have the clone on my machine. Let me know if you are interested.

To answer your questions in order:

  • I think your example is valid in terms of its result. For concurrency, however, it is better to test with shared resources (as in, for example, the second article).
  • Regarding your statements: as mentioned at the beginning of this
    answer, MRI is multi-threaded but restricted by the GIL to one active
    thread at a time. This raises the question: with MRI, isn't it better
    to test with more processes and fewer threads? I don't really know; a
    first guess would be no, or not much of a difference. Maybe someone can shed light on this.
  • Your example is just fine, I think. It just needed some slight
    modifications.

Appendix

Rails app log files:

**config.allow_concurrency = false (by default)**
-> Ideally 1 worker per core, each worker serves up to 10 threads.

[3045] Puma starting in cluster mode...
[3045] * Version 2.11.2 (ruby 2.1.5-p273), codename: Intrepid Squirrel
[3045] * Min threads: 1, max threads: 40
[3045] * Environment: development
[3045] * Process workers: 4
[3045] * Preloading application
[3045] * Listening on tcp://0.0.0.0:3000
[3045] Use Ctrl-C to stop
[3045] - Worker 0 (pid: 3075) booted, phase: 0
[3045] - Worker 1 (pid: 3080) booted, phase: 0
[3045] - Worker 2 (pid: 3087) booted, phase: 0
[3045] - Worker 3 (pid: 3098) booted, phase: 0
Started GET "/assets/angular-ui-router/release/angular-ui-router.js?body=1" for 127.0.0.1 at 2015-05-11 15:45:05 +0800
...
...
...
Processing by ApplicationController#api_call as JSON
Parameters: {"t"=>"15?id=9"}
Completed 200 OK in 15002ms (Views: 0.2ms | ActiveRecord: 0.0ms)
[3075] 127.0.0.1 - - [11/May/2015:15:49:44 +0800] "GET /api_call.json?t=15?id=9 HTTP/1.1" 304 - 60.0230

**config.allow_concurrency = true**
-> Ideally 1 worker per core, each worker serves up to 10 threads.

[22802] Puma starting in cluster mode...
[22802] * Version 2.11.2 (ruby 2.2.0-p0), codename: Intrepid Squirrel
[22802] * Min threads: 1, max threads: 40
[22802] * Environment: development
[22802] * Process workers: 4
[22802] * Preloading application
[22802] * Listening on tcp://0.0.0.0:3000
[22802] Use Ctrl-C to stop
[22802] - Worker 0 (pid: 22832) booted, phase: 0
[22802] - Worker 1 (pid: 22835) booted, phase: 0
[22802] - Worker 3 (pid: 22852) booted, phase: 0
[22802] - Worker 2 (pid: 22843) booted, phase: 0
Started GET "/" for 127.0.0.1 at 2015-05-13 17:58:20 +0800
Processing by ApplicationController#index as HTML
Rendered application/index.html.erb within layouts/application (3.6ms)
Completed 200 OK in 216ms (Views: 200.0ms | ActiveRecord: 0.0ms)
[22832] 127.0.0.1 - - [13/May/2015:17:58:20 +0800] "GET / HTTP/1.1" 200 - 0.8190
...
...
...
Completed 200 OK in 15003ms (Views: 0.1ms | ActiveRecord: 0.0ms)
[22852] 127.0.0.1 - - [13/May/2015:18:00:07 +0800] "GET /api_call.json?t=15?id=10 HTTP/1.1" 304 - 15.0103

**config.allow_concurrency = true (by default)**
-> Ideally each thread serves a request.

Puma starting in single mode...
* Version 2.11.2 (jruby 2.2.2), codename: Intrepid Squirrel
* Min threads: 1, max threads: 40
* Environment: development
NOTE: ActiveRecord 4.2 is not (yet) fully supported by AR-JDBC, please help us finish 4.2 support - check http://bit.ly/jruby-42 for starters
* Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop
Started GET "/" for 127.0.0.1 at 2015-05-13 18:23:04 +0800
Processing by ApplicationController#index as HTML
Rendered application/index.html.erb within layouts/application (35.0ms)
...
...
...
Completed 200 OK in 15020ms (Views: 0.7ms | ActiveRecord: 0.0ms)
127.0.0.1 - - [13/May/2015:18:25:19 +0800] "GET /api_call.json?t=15?id=9 HTTP/1.1" 304 - 15.0640

Rails concurrent requests

You can do this, yes. I have done it, yes.

You can run into some gotchas with multi-threaded use of ActiveRecord, if you're using it. For doing multi-threaded work in Ruby, I recommend the concurrent-ruby gem, which provides thread pools and higher-level abstractions like futures: https://github.com/ruby-concurrency/concurrent-ruby
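
As a minimal sketch of that approach (expensive_lookup is a hypothetical stand-in for your slow work, e.g. an HTTP call):

require 'concurrent'

# hypothetical slow operation
def expensive_lookup(i)
  sleep 1
  i * 2
end

pool = Concurrent::FixedThreadPool.new(4) # bounded pool instead of raw threads

futures = (1..10).map do |i|
  Concurrent::Future.execute(executor: pool) { expensive_lookup(i) }
end

results = futures.map(&:value!) # blocks until every future has resolved
pool.shutdown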

You can do this, although, yeah, it can get tricky -- if a background job queue works for you, that might be a less reinventing-the-wheel solution. Whether you use a background job queue or threads, you're going to have to figure out how to display the results -- although with the straight-threads approach, your request can wait on all the threads and their results before displaying anything (yeah, I've done this too; see the sketch below). That can cause its own problems with slow-running requests (you'll probably want to deploy with Puma or Passenger Enterprise to avoid blocking your entire app process on them).
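
Here is a sketch of that "request waits on its threads" pattern (hypothetical controller and model; each thread checks out its own ActiveRecord connection):

class LookupsController < ApplicationController
  def index
    threads = params.fetch(:ids, []).map do |id|
      Thread.new do
        ActiveRecord::Base.connection_pool.with_connection do
          Item.find_by(id: id) # hypothetical model lookup
        end
      end
    end
    # Thread#value joins the thread and returns its block's result
    @items = threads.map(&:value).compact
    render json: @items
  end
end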

From my own experience doing similar things, I don't have an easy answer; you'll run into some challenges no matter which route you take, I think. If you can figure out a way to make a background job queue work, it will generally give you fewer gotchas than directly creating threads in the app process.

How to allow only one request to perform action and reject all other concurrent requests?

You need to lock an order before checking its status.

If two concurrent requests execute the check request.rejected? || order.rejected? simultaneously with your solution, both get false and then proceed sequentially under the lock.

Changes for your model

class Request < ApplicationRecord
  def self.reject_request(key)
    # lock! and ActiveRecord::Rollback only work inside a transaction
    transaction do
      request = Request.where(reject_key: key).first
      order = request&.order

      # make sure both request and order objects are not empty
      raise ActiveRecord::Rollback if request.blank? || order.blank?

      # lock both rows (SELECT ... FOR UPDATE)
      order.lock!
      request.lock!

      # re-check the conditions after acquiring the locks
      raise ActiveRecord::Rollback if request.rejected? || order.rejected?

      request.status = 'rejected'
      request.save!
      order.update!(status: 'rejected')
      true
    end
  end
end
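
With the transaction in place, reject_request returns true when the rejection went through and nil when it rolled back (ActiveRecord::Rollback makes the transaction block return nil), so callers can branch on the result:

Request.reject_request(params[:reject_key]) # => true, or nil if missing/already rejected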

How does Rails handle multiple incoming requests?

One thing to bear in mind here is that even though users are using the site simultaneously in their browsers, the server may still only be handling a single request at a time. Request processing may take less than a second, so requests can be queued up and processed without causing significant delays for the users. Each response starts from a blank slate, taking only information from the request and using it to look up data in the database. It does not carry anything over from one request to the next. This is called the "stateless" paradigm.

If the load increases, more Rails servers can be added. Because each response starts from scratch anyway, adding more servers doesn't create any problems to do with "sharing of information", since all information is either sent in the request or loaded from the database. It just means that more requests can be handled per second.

When there is a feeling of "continuity" for the user, for example staying logged into a website, this is done via cookies, which are stored on their machine and sent along as part of the request. The server can read this cookie information from the request and, for example, NOT redirect someone to the login page, because the cookie tells it they have already logged in as user 123 or whatever.
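
In Rails that cookie-based continuity is usually the session. A minimal sketch (the controller and the User.authenticate lookup are hypothetical):

class SessionsController < ApplicationController
  def create
    user = User.authenticate(params[:email], params[:password]) # hypothetical lookup
    session[:user_id] = user.id # stored in the session cookie
    redirect_to root_path
  end
end

class ApplicationController < ActionController::Base
  def current_user
    # every request rebuilds this from the cookie plus the database
    @current_user ||= User.find_by(id: session[:user_id])
  end
end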

How rails resolve multi-requests at the same time?

Are you using WEBrick? That would explain it: WEBrick is a single-threaded server, capable of fulfilling one request at a time (because of its single worker thread). With multiple requests, it runs the action part of a request and, before running the view renderer, checks whether there are any pending requests. If 10 requests are lined up, it would first complete all of them before actually rendering the views. Once all of these requests are completed, the views are rendered sequentially.

You can switch to a server such as Puma (multi-threaded) or Passenger or Unicorn (multi-process) if you want to handle requests concurrently.

Hope that makes sense.


