Cross-Cutting Logging in Ruby

Another alternative is to use unbound methods:

class A
  # assumes A already defines an instance method #test
  original_test = instance_method(:test)
  define_method(:test) do
    puts "Log Message!"
    original_test.bind(self).call
  end
end

class A
  original_test = instance_method(:test)  # now grabs the logged version from above
  counter = 0
  define_method(:test) do
    counter += 1
    puts "Counter = #{counter}"
    original_test.bind(self).call
  end
end

irb> A.new.test
Counter = 1
Log Message!
#=> #....
irb> A.new.test
Counter = 2
Log Message!
#=> #.....

This has the advantage that it doesn't pollute the namespace with additional method names, and it is fairly easy to abstract if you want to make a class method add_logging or what have you.

class Module
  def add_logging(*method_names)
    method_names.each do |method_name|
      original_method = instance_method(method_name)
      define_method(method_name) do |*args, &blk|
        puts "logging #{method_name}"
        original_method.bind(self).call(*args, &blk)
      end
    end
  end
end

class A
  add_logging :test
end
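For example (using a hypothetical Greeter class, and repeating the add_logging definition so the snippet runs on its own):

```ruby
class Module
  def add_logging(*method_names)
    method_names.each do |method_name|
      original_method = instance_method(method_name)
      define_method(method_name) do |*args, &blk|
        puts "logging #{method_name}"
        original_method.bind(self).call(*args, &blk)
      end
    end
  end
end

class Greeter
  def greet(name)
    "Hello, #{name}!"
  end

  add_logging :greet  # Class < Module, so add_logging is available here
end

puts Greeter.new.greet("world")
# Prints:
#   logging greet
#   Hello, world!
```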

Or, if you wanted to be able to define a bunch of aspects without a lot of boilerplate, you could write a method that writes aspect-adding methods!

class Module
  def self.define_aspect(aspect_name, &definition)
    define_method(:"add_#{aspect_name}") do |*method_names|
      method_names.each do |method_name|
        original_method = instance_method(method_name)
        define_method(method_name, &(definition[method_name, original_method]))
      end
    end
  end

  # make an add_logging method
  define_aspect :logging do |method_name, original_method|
    lambda do |*args, &blk|
      puts "Logging #{method_name}"
      original_method.bind(self).call(*args, &blk)
    end
  end

  # make an add_counting method
  global_counter = 0
  define_aspect :counting do |method_name, original_method|
    local_counter = 0
    lambda do |*args, &blk|
      global_counter += 1
      local_counter += 1
      puts "Counters: global@#{global_counter}, local@#{local_counter}"
      original_method.bind(self).call(*args, &blk)
    end
  end
end

class A
  def test
    puts "I'm Doing something..."
  end

  def test1
    puts "I'm Doing something once..."
  end

  def test2
    puts "I'm Doing something twice..."
    puts "I'm Doing something twice..."
  end

  def test3
    puts "I'm Doing something thrice..."
    puts "I'm Doing something thrice..."
    puts "I'm Doing something thrice..."
  end

  def other_tests
    puts "I'm Doing something else..."
  end

  add_logging :test, :test2, :test3
  add_counting :other_tests, :test1, :test3
end
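To see the wrapping order, here is a condensed, self-contained version (Worker is a hypothetical class): an aspect applied later wraps the one applied earlier, so the counting output appears before the logging output.

```ruby
class Module
  def self.define_aspect(aspect_name, &definition)
    define_method(:"add_#{aspect_name}") do |*method_names|
      method_names.each do |method_name|
        original_method = instance_method(method_name)
        define_method(method_name, &(definition[method_name, original_method]))
      end
    end
  end

  define_aspect :logging do |method_name, original_method|
    lambda do |*args, &blk|
      puts "Logging #{method_name}"
      original_method.bind(self).call(*args, &blk)
    end
  end

  counter = 0
  define_aspect :counting do |method_name, original_method|
    lambda do |*args, &blk|
      counter += 1
      puts "Counter = #{counter}"
      original_method.bind(self).call(*args, &blk)
    end
  end
end

class Worker
  def work
    "done"
  end

  add_logging :work   # wraps the original method
  add_counting :work  # wraps the already-logged version
end

Worker.new.work
# Prints "Counter = 1", then "Logging work", and returns "done"
```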

Using methods from one Ruby module in another

In Ruby, modules can act as mixins from which other modules can inherit. If you want to use the instance methods of a module, you have to mix in that module. This is achieved by the Module#include method:

module Connection
  include Configuration, Logging
end

Now, Connection inherits from Configuration and Logging, and instances of classes that mix in Connection can thus use all methods from Configuration and Logging in addition to Connection's own (plus those of Object, BasicObject, and Kernel).

If you also want access to those instance methods within Connection's own module methods, then you additionally need to extend Connection with Configuration and Logging:

module Connection
  extend Configuration, Logging
end
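A minimal sketch showing both forms side by side (Configuration, Logging, and Client are hypothetical stand-ins):

```ruby
module Configuration
  def config
    { verbose: true }
  end
end

module Logging
  def log(msg)
    "LOG: #{msg}"
  end
end

module Connection
  include Configuration, Logging  # for instances of including classes
  extend Configuration, Logging   # for Connection's own module methods

  def self.open
    log("opening with #{config}")  # works because of the extend
  end
end

class Client
  include Connection
end

Connection.open            # the module method can call log/config
Client.new.log("instance") # => "LOG: instance" (via the include chain)
```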

Singleton method error from bind when called with the same metaclass

This should work for you:

class Module
  def add_logging(*method_names)
    method_names.each do |method_name|
      original_method = method(method_name).unbind
      define_singleton_method(method_name) do |*args, &blk|
        puts "#{self}.#{method_name} called"
        original_method.bind(self).call(*args, &blk)
      end
    end
  end
end

# class method example
module MyModule
  def self.module_method1
    puts "hello"
  end

  add_logging :module_method1
end

MyModule.module_method1

# output:
#
# MyModule.module_method1 called
# hello

Why is log and throw considered an anti-pattern?

I assume the answer is largely: why are you catching it if you can't handle it? Why not let whoever can handle it (or whoever is left with no choice but to handle it) log it, if they feel it is log-worthy?

If you catch it and log it and rethrow it, then there's no way for the upstream code to know that you've already logged the exception, and so the same exception might get logged twice. Or worse, if all the upstream code follows this same pattern, the exception might be logged an arbitrary number of times, once for each level in the code that decides to catch it, log it, and then throw it again.

Also, some might argue that since throwing and catching exceptions are relatively costly operations, all this catching and rethrowing isn't helping your runtime performance. Nor is it helping your code in terms of conciseness or maintainability.
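A small Ruby illustration of the duplication problem: each intermediate layer that catches, logs, and re-raises adds another copy of the same error to the log (LOG here is just an array standing in for a real logger).

```ruby
LOG = []

def risky
  raise "boom"
end

def middle
  risky
rescue => e
  LOG << "middle saw: #{e.message}"  # log-and-rethrow
  raise
end

def outer
  middle
rescue => e
  LOG << "outer saw: #{e.message}"   # logs the very same exception again
end

outer
LOG  # one error, two log entries
```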

Code to logging ratio?

There is actually a nice library for adding logging after the fact, as you say: PostSharp. It lets you do it via attribute-based programming, among many other very useful things beyond just logging.

I agree that what you say is a little excessive for logging.

Some others bring up good points, especially the banking scenario and other mission-critical apps. Extreme logging may be necessary there, or at least the ability to turn it on and off when needed, or to set various levels.
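The "turn it on and off" part is what severity levels in Ruby's standard-library Logger give you; a minimal sketch (writing to a StringIO so the output is inspectable):

```ruby
require "logger"
require "stringio"

out = StringIO.new
logger = Logger.new(out)

logger.level = Logger::WARN     # production: only WARN and above get written
logger.debug("trace detail")    # suppressed
logger.warn("disk almost full") # written

logger.level = Logger::DEBUG    # flip the switch to get verbose logging back
logger.debug("trace detail")    # now written

out.string.lines.count  # => 2
```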

How to reflect process on before_update?

In general, you should avoid running expensive, cross-cutting code in callbacks. A time will come when you want to update one of those records without running that code, and then you'll start adding flags to determine when that callback should run, and all sorts of other nastiness. Also, if the record is being updated during a request, the expensive callback code will slow the whole request down, and potentially time out and/or block other visitors from accessing your application.

The way to architect this would be to create the record first (perhaps with a flag/state that tells the rest of your app that the update hasn't been "processed" yet - meaning that related code currently in your callback hasn't run yet). Then, you'd enqueue a background job that does whatever is in your callback. If you are using Sidekiq, you can use the sidekiq-status gem to update the job's status as it's running.

You'd then add a controller/action that checks up on the job's status and returns it in JSON, and some JS that pings that action every few seconds to check up on the status of the job and update your interface accordingly.

Even if you didn't want to update your users on the status of the job, a background job would probably still be in order here - especially if that code is very expensive, or involves third-party API calls. If not, it likely belongs in the controller, and you could run it all in a transaction. But if you need to update your users on the status of that work, a background job is the way to go.
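A framework-free sketch of the pattern (Record, ExpensiveJob, and the status flag are hypothetical stand-ins; in a real app the queue would be Sidekiq or similar, and the status would live in the database for the controller to poll):

```ruby
class Record
  attr_accessor :status

  def initialize
    @status = :pending  # tells the rest of the app the work hasn't run yet
  end
end

class ExpensiveJob
  QUEUE = Queue.new

  def self.enqueue(record)
    QUEUE << record  # the request cycle returns immediately after this
  end

  def self.start_worker
    Thread.new do
      while (record = QUEUE.pop)
        sleep 0.01            # stand-in for the expensive work
        record.status = :done # a controller/action can poll this status
      end
    end
  end
end

record = Record.new
ExpensiveJob.enqueue(record)
ExpensiveJob.start_worker
sleep 0.1
record.status  # => :done
```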

Update updated_at on related has_and_belongs_to_many models in Mongoid

This is discussed in this GitHub issue. The base object is not updated when a new related object is added. If you were using belongs_to you could add touch: true, but you’re not.

In the discussion of the issue, they recommend adding an after_save to the related object. In this case, you’d have to add it to both sides of the relation:

class Person
  after_save do
    stories.each(&:touch)
  end
end

class Story
  after_save do
    people.each(&:touch)
  end
end

Less than elegant, but should work?
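A plain-Ruby sketch of why the mutual after_save hooks don't recurse: touch only bumps the timestamp and does not run save callbacks (this mirrors how touch behaves in Mongoid/ActiveRecord; Base, Person, and Story here are plain objects, not Mongoid documents).

```ruby
class Base
  attr_reader :updated_at

  def save
    @updated_at = Time.now
    after_save              # save runs callbacks...
  end

  def touch
    @updated_at = Time.now  # ...but touch only bumps the timestamp
  end
end

class Person < Base
  attr_reader :stories

  def initialize
    @stories = []
  end

  def after_save
    stories.each(&:touch)
  end
end

class Story < Base
  attr_reader :people

  def initialize
    @people = []
  end

  def after_save
    people.each(&:touch)
  end
end

person = Person.new
story = Story.new
person.stories << story
story.people << person

person.save
story.updated_at  # set via the touch from person's after_save; no recursion
```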


