Can Multiple Sidekiq Instances Process The Same Queue

Yes, multiple Sidekiq processes can absolutely work the same queue. Redis hands each job to exactly one of the waiting processes, so no job is processed twice.
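To illustrate the idea (this is not Sidekiq itself, just a stdlib sketch of the same pattern): several workers pull from one shared queue, and each job is delivered to exactly one of them.

```ruby
# Illustration: a single shared queue feeding several workers, mirroring how
# Redis hands each Sidekiq job to exactly one listening process.
def run_workers(jobs, worker_count)
  queue = Queue.new
  jobs.each { |j| queue << j }
  worker_count.times { queue << :stop }   # one poison pill per worker

  results = Queue.new
  workers = worker_count.times.map do |i|
    Thread.new do
      while (job = queue.pop) != :stop
        results << [i, job]               # record which worker got the job
      end
    end
  end
  workers.each(&:join)

  Array.new(results.size) { results.pop }
end
```

Running this with 20 jobs and 3 workers, every job appears in the results exactly once, regardless of which worker picked it up.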

Multiple sidekiq daemons on one machine

You can use a Redis namespace to keep the daemons from stepping on each other. From Sidekiq's wiki:

NOTE: The :namespace parameter is optional, but recommended if Sidekiq is sharing access to a Redis database.
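As a sketch, assuming the redis-namespace gem is installed alongside Sidekiq (the URL and namespace value are placeholders), each app sharing the Redis database would set its own namespace in both server and client configuration:

```ruby
# config/initializers/sidekiq.rb -- isolate this app's keys in a shared Redis.
# Assumes the redis-namespace gem; 'app_one' is a placeholder namespace.
Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://localhost:6379/0', namespace: 'app_one' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://localhost:6379/0', namespace: 'app_one' }
end
```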

Another option is to give each group of workers its own queue, for example:

:queues:
  - [default, 1]
  - [new_comments, 1]
  - [email_alerts, 1]
  - [new_messages, 1]

Maintaining different concurrency for different queues in Sidekiq?

Check out the sidekiq-limit_fetch gem, which lets you limit how many jobs from a particular queue run concurrently.

You can create a new queue for the jobs you want executed one at a time and give it a limit of one. You'll still be able to submit jobs to the queue, but only one of them will execute at any given moment.
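A sketch of that setup, assuming sidekiq-limit_fetch is in the Gemfile (the queue names here are placeholders):

```yaml
# config/sidekiq.yml
:queues:
  - default
  - serial_jobs
:limits:
  serial_jobs: 1   # at most one job from this queue runs at a time
```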

Does sidekiq use multiple cpu cores and can it be run on multiple machines?

Sidekiq is limited to whatever Ruby can do. If you are running Sidekiq in MRI Ruby, each Sidekiq process is limited to one core. You can utilize all cores by running multiple MRI processes.
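For example, on a 4-core machine you might start one process per core; each process runs its own pool of worker threads, and all of them can pull from the same queues (a sketch, with concurrency and queue name as placeholders):

```shell
# Start one Sidekiq process per core; all four drain the same queue.
for i in 1 2 3 4; do
  bundle exec sidekiq -c 10 -q default &
done
```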

If you are running Sidekiq in JRuby, that one process can take advantage of all cores.

All Sidekiq processes can process any queue, regardless of what machine they are running on.

Work with two separate redis instances with sidekiq?

First, according to the FAQ, "The Sidekiq message format is quite simple and stable: it's just a Hash in JSON format." (Emphasis mine.) I don't think sending JSON to Sidekiq is too brittle. Especially when you want fine-grained control over which Redis instance receives the jobs, as in the OP's situation, I'd just write a little wrapper that lets me indicate a Redis instance along with the job being enqueued.
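A minimal sketch of such a wrapper, leaning on the stable JSON job format quoted above. The `push_job` helper and its field choices are illustrative, not Sidekiq's client; `redis` only needs to respond to `#lpush`, so any Redis client pointed at the instance of your choice works.

```ruby
require 'json'
require 'securerandom'

# Hypothetical wrapper: build a Sidekiq-format job payload and push it onto
# the queue list of a caller-chosen Redis connection.
def push_job(redis, queue, worker_class, args)
  payload = {
    'class'      => worker_class,
    'queue'      => queue,
    'args'       => args,
    'jid'        => SecureRandom.hex(12),
    'retry'      => true,
    'created_at' => Time.now.to_f
  }
  redis.lpush("queue:#{queue}", JSON.generate(payload))
  payload['jid']
end
```

To target a second Redis instance, you'd simply pass a client connected to it as the `redis` argument.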

For Kevin Bedell's more general situation of round-robining jobs across Redis instances, I'd imagine you don't want to control which Redis instance is used: you just want to enqueue and have the distribution managed automatically. It looks like only one person has requested this so far, and they came up with a solution that uses Redis::Distributed:

datastore_config = YAML.load(ERB.new(File.read(File.join(Rails.root, "config", "redis.yml"))).result)
datastore_config = datastore_config["defaults"].merge(datastore_config[::Rails.env])

if datastore_config[:host].is_a?(Array)
  if datastore_config[:host].length == 1
    datastore_config[:host] = datastore_config[:host].first
  else
    datastore_config = datastore_config[:host].map do |host|
      host_has_port = host =~ /:\d+\z/

      if host_has_port
        "redis://#{host}/#{datastore_config[:db] || 0}"
      else
        "redis://#{host}:#{datastore_config[:port] || 6379}/#{datastore_config[:db] || 0}"
      end
    end
  end
end

Sidekiq.configure_server do |config|
  config.redis = ::ConnectionPool.new(:size => Sidekiq.options[:concurrency] + 2, :timeout => 2) do
    redis = if datastore_config.is_a? Array
      Redis::Distributed.new(datastore_config)
    else
      Redis.new(datastore_config)
    end

    Redis::Namespace.new('resque', :redis => redis)
  end
end

Another thing to consider in your quest for high availability and failover is Sidekiq Pro, which includes reliability features: "The Sidekiq Pro client can withstand transient Redis outages. It will enqueue jobs locally upon error and attempt to deliver those jobs once connectivity is restored." Since Sidekiq is for background processing anyway, a short delay while a Redis instance is down shouldn't affect your application. But note that if one of your two Redis instances goes down and you're round-robining, you'll still lose some jobs unless you're using this feature.
