Mongoid Query Caching

The other answer is obviously wrong. Neither Mongoid nor the Mongo driver caches query results, and even if MongoDB itself did, that cache might still live on another machine across the network.

My solution was to wrap receive_message in Mongo::Connection.
Pros: a single, well-defined place to hook.
Cons: deserialization still takes place on every hit.


require 'mongo'

module Mongo
  class Connection
    module QueryCache
      extend ActiveSupport::Concern

      module InstanceMethods

        # Enable the selector cache within the block.
        def cache
          @query_cache ||= {}
          old, @query_cache_enabled = @query_cache_enabled, true
          yield
        ensure
          clear_query_cache
          @query_cache_enabled = old
        end

        # Disable the selector cache within the block.
        def uncached
          old, @query_cache_enabled = @query_cache_enabled, false
          yield
        ensure
          @query_cache_enabled = old
        end

        def clear_query_cache
          @query_cache.clear
        end

        def cache_receive_message(operation, message)
          @query_cache[operation] ||= {}
          key = message.to_s.hash
          log = "[MONGO] CACHE %s"
          if entry = @query_cache[operation][key]
            Mongoid.logger.debug log % 'HIT'
            entry
          else
            Mongoid.logger.debug log % 'MISS'
            @query_cache[operation][key] = yield
          end
        end

        def receive_message_with_cache(operation, message, log_message = nil, socket = nil, command = false)
          if query_cache_enabled
            cache_receive_message(operation, message) do
              receive_message_without_cache(operation, message, log_message, socket, command)
            end
          else
            receive_message_without_cache(operation, message, log_message, socket, command)
          end
        end
      end # module InstanceMethods

      included do
        alias_method_chain :receive_message, :cache
        attr_reader :query_cache, :query_cache_enabled
      end
    end # module QueryCache
  end # class Connection
end

Mongo::Connection.send(:include, Mongo::Connection::QueryCache)
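The wrap-and-cache pattern above can be sketched in plain Ruby without the driver or Rails. This is a self-contained illustration, not the patch itself: FakeConnection and its slow fetch method are hypothetical stand-ins for Mongo::Connection and receive_message, and plain alias_method replaces ActiveSupport's alias_method_chain.

```ruby
# Minimal sketch of the wrap-and-cache pattern. FakeConnection#fetch is a
# hypothetical stand-in for the driver's receive_message.
class FakeConnection
  attr_reader :query_cache_enabled, :calls

  def initialize
    @query_cache = {}
    @calls = 0 # counts real "round trips"
  end

  def fetch(key)
    @calls += 1
    "result-for-#{key}"
  end

  # Enable the cache within the block, clearing it afterwards,
  # mirroring the cache/ensure structure of the patch above.
  def cache
    old, @query_cache_enabled = @query_cache_enabled, true
    yield
  ensure
    @query_cache.clear
    @query_cache_enabled = old
  end

  # Keep a handle on the original method, then wrap it -- the plain-Ruby
  # equivalent of alias_method_chain :receive_message, :cache.
  alias_method :fetch_without_cache, :fetch
  def fetch(key)
    return fetch_without_cache(key) unless @query_cache_enabled
    @query_cache[key] ||= fetch_without_cache(key)
  end
end

conn = FakeConnection.new
conn.cache do
  conn.fetch(:a)
  conn.fetch(:a) # served from memory, no second round trip
  conn.fetch(:b)
end
puts conn.calls # 2
```

Outside the cache block every call goes straight through again, which is why the Cons above still applies: a hit skips the round trip, but whatever consumes the returned message still pays for deserialization.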

Caching repeating query results in MongoDB

The author of the Motor (MOngo + TORnado) package gives an example of caching his list of categories here: http://emptysquare.net/blog/refactoring-tornado-code-with-gen-engine/

Basically, he defines a global list of categories and queries the database to fill it in. Then, whenever he needs the categories in his pages, he checks the list: if it exists, he uses it; if not, he queries again and fills it in. He has it set up to invalidate the list whenever he inserts into the database, but depending on your usage you could create a global timeout variable to track when you need to re-query. If you're doing something complicated, this could get out of hand, but if it's just a list of the most recent posts or something similar, it should be fine.
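The pattern described above can be sketched in a few lines of Ruby. This is a hedged illustration, not the Motor author's code: load_categories_from_db, CACHE_TTL, and invalidate_categories! are hypothetical names, and the database query is stubbed out.

```ruby
CACHE_TTL = 60 # seconds before the cached list is considered stale

$categories = nil
$categories_loaded_at = nil
$db_hits = 0 # counts real database queries, for illustration only

# Placeholder for the real MongoDB query.
def load_categories_from_db
  $db_hits += 1
  ["news", "sports", "tech"]
end

# Return the cached list, re-querying only when the cache is empty
# or older than CACHE_TTL.
def categories
  expired = $categories_loaded_at.nil? ||
            (Time.now - $categories_loaded_at) > CACHE_TTL
  if $categories.nil? || expired
    $categories = load_categories_from_db
    $categories_loaded_at = Time.now
  end
  $categories
end

# Call this after inserting a new category, as in the linked example.
def invalidate_categories!
  $categories = nil
end
```

Repeated calls to categories within the TTL reuse the cached list; an insert followed by invalidate_categories! forces the next call to re-query.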

Mongoid cache doesn't seem to work

Take a look at mongoid query caching.

Be sure to read the comment by "rehevkor5 Jul 31 at 23:13" at the bottom of that answer!

How to detect if my query result is retrieved from WiredTiger cache?

The WiredTiger cache is a generic term for memory reserved for use by WiredTiger. This memory contains uncompressed collection data, indexes, "dirty" content (modified documents), etc.

To answer your questions:

Is there any way I can check if my query result is served from cache or not?

Any query will result in WiredTiger loading the document from disk into its part of memory (the so-called "cache") in an uncompressed form. Therefore all query replies are served from the "cache".

What if my working set is larger than RAM? Would Mongo clear the cache, or would it evict some data?

It will evict data from the cache as necessary. If a page that needs to be evicted is "dirty", it must also be written to disk as part of the eviction process.
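The eviction rule just described can be illustrated with a toy least-recently-used cache. This is purely an illustration of the concept, not WiredTiger's actual implementation; ToyCache and all its method names are invented for this sketch.

```ruby
# Toy LRU cache illustrating the eviction rule described above:
# when the cache is full, the least recently used page is evicted,
# and a "dirty" (modified) page must be written back to disk first.
class ToyCache
  attr_reader :disk_writes

  def initialize(capacity)
    @capacity = capacity
    @pages = {} # id => {data:, dirty:}; Hash insertion order tracks LRU order
    @disk_writes = [] # ids of dirty pages flushed during eviction
  end

  def read(id, data_from_disk)
    touch(id) { { data: data_from_disk, dirty: false } }
  end

  def write(id, data)
    touch(id) { { data: data, dirty: true } }
  end

  private

  # Fetch (or build) the page, evicting as necessary, and mark it
  # most recently used by re-inserting it at the end of the hash.
  def touch(id)
    page = @pages.delete(id) || yield
    evict! while @pages.size >= @capacity
    @pages[id] = page
    page[:data]
  end

  def evict!
    victim_id, victim = @pages.shift # least recently used page
    @disk_writes << victim_id if victim[:dirty] # dirty pages hit disk
  end
end

cache = ToyCache.new(2)
cache.read(:a, "A")   # load clean page
cache.write(:b, "B!") # modified page, now dirty
cache.read(:c, "C")   # cache full: evicts :a (clean, discarded silently)
cache.read(:a, "A")   # evicts :b (dirty, written back to disk)
```

Clean pages can simply be dropped and re-read from disk later, which is why only the dirty eviction costs a write.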

Any suggestions for good resources to learn about MongoDB's caching mechanism?

There are some general descriptions of it in the MongoDB documentation:

  • FAQ: MongoDB Storage
  • WiredTiger Storage Engine

For more in-depth documentation, please see the WiredTiger documentation.


