Time-traveling Ruby Logger
Wouldn't it be great if you could log all of the gory details of a request, but only if it fails?
Choosing the appropriate logging level for our web app can be challenging. Specifying the DEBUG level yields detailed information, but the signal for failed requests can get lost in the noise of successful ones. A WARN level reduces volume but lacks detail to help track down bugs.
Take this example:
Rails.logger.level = :warn

class CriticalBusinessCode
  def calculate_grumpkin_quotient(apples, bananas)
    Rails.logger.debug "calling fruit api with #{apples}, #{bananas}" # :( no info
    FruitApi.call(apples, bananas) # <= Throws mysterious "400 Bad Request"
  end
end
If we knew at the beginning of a request that it would fail, we'd set the log level to DEBUG (or even TRACE), right?
We don't have the technology to build a psychic Logger, but we can build the next-best thing: a Time-traveling Logger.
A first version
Let's start with the core Ruby Logger. Here's how it is used without modification.
require 'logger'

logger = Logger.new($stdout)
logger.level = Logger::WARN

logger.debug "DEBUG" # suppressed
logger.info "INFO"   # suppressed
logger.warn "WARN"   # written
logger.error "ERROR" # written
We told the logger ahead of time that we wanted WARN as our level. Because we set the level to WARN, we only see the WARN and ERROR output, not the DEBUG and INFO output.
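Running that snippet, the default formatter writes only the last two messages; the output looks something like this (your timestamp and PID will differ):

W, [2023-06-01T09:00:00.000000 #4242]  WARN -- : WARN
E, [2023-06-01T09:00:00.000000 #4242] ERROR -- : ERROR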
How can we build on top of the core logger to allow specifying the level later in the program's execution — after some logging statements have already executed?
Queued logging
The key idea is to defer flushing the logs until the end of the request. We don’t have the context we need to make the verbosity decision up-front, but when the request finishes, we will. We just need a place to stash these logs while the request processes.
Building on top of the core Logger, we can implement a queue by overriding the add method:
require 'logger'

class QueuedLogger < Logger
  def initialize(*args)
    super
    @queue = []
  end

  # Instead of writing the message right away, stash a deferred call to the
  # core Logger's add so we can replay it later.
  def add(severity, message = nil, progname = nil, &block)
    @queue << -> { super(severity, message, progname, &block) }
  end

  # Replay the queued messages, optionally at a different level.
  def flush!(level = nil)
    old_level = self.level
    self.level = level if level

    @queue.each(&:call)
    @queue = []
  ensure
    self.level = old_level
  end
end
The implementation here isn't too exciting (in a good way). .debug, .info, etc. call add. With this change, we don't immediately write the message but instead throw it in a queue to be logged later.
When we're ready to dump out the logs, we can specify the level we want.
logger = QueuedLogger.new($stdout)
logger.level = Logger::WARN
logger.debug "DEBUG"
logger.info "INFO"
logger.warn "WARN"
logger.error "ERROR"
logger.flush!(Logger::DEBUG)
At flush! time, we look back through history and evaluate the log statements with the provided level. Despite the level being set to WARN initially, our DEBUG and INFO lines will show up since we're flushing at the DEBUG level.
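This time all four messages are written when flush! runs, something like this (timestamps and PIDs will again vary):

D, [2023-06-01T09:00:00.000000 #4242] DEBUG -- : DEBUG
I, [2023-06-01T09:00:00.000000 #4242]  INFO -- : INFO
W, [2023-06-01T09:00:00.000000 #4242]  WARN -- : WARN
E, [2023-06-01T09:00:00.000000 #4242] ERROR -- : ERROR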
Example usage
Let’s imagine we’re writing software for managing appointments for physicians. We have a background job to remind patients about upcoming appointments via SMS.
class AppointmentReminderJob
  def perform(doctor, patient, time, timezone)
    logger.debug "Scheduling appointment for #{doctor} and #{patient} on #{time} in #{timezone}"

    message = compose_message(doctor, patient, time, timezone)
    SMS.send(message, patient.phone)
  end

  def compose_message(doctor, patient, time, timezone)
    # ...
  end

  def logger
    @logger ||= Logger.new($stdout).tap do |logger|
      logger.level = Logger::WARN
    end
  end
end
If an exception happens when sending the reminder (perform), we'd like as much detail as possible. Unfortunately, our default level of WARN means that our logger.debug statement is never written.
We could set the default level to DEBUG. While the logger.debug line here is useful if something goes wrong, it clutters our logs with noise when everything is working as intended.
We can get the best of both worlds by applying our QueuedLogger to capture debug-level messages only when an exception occurs.
Here’s the updated class:
class AppointmentReminderJob
  def perform(doctor, patient, time, timezone)
    logger.debug "Scheduling appointment for #{doctor} and #{patient} on #{time} in #{timezone}"

    message = compose_message(doctor, patient, time, timezone)
    SMS.send(message, patient.phone)
  rescue => ex
    logger.error(ex)
    # write at the debug level so we get all the details
    logger.flush!(:debug)
  ensure
    # write at the default level we specified (WARN in this example)
    logger.flush!
  end

  def compose_message(doctor, patient, time, timezone)
    # ...
  end

  def logger
    @logger ||= QueuedLogger.new($stdout).tap do |logger|
      logger.level = Logger::WARN
    end
  end
end
Where do we go from here?
QueuedLogger isn't quite ready to be thrown into your Rails app yet with config.logger = QueuedLogger.new(...). To get ready for prime time, we'd want to isolate the queue for the request (maybe using ActiveSupport::CurrentAttributes, concurrent-ruby, or RequestStore).
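As a rough sketch of what that per-request isolation could look like, here's one possible direction using ActiveSupport::CurrentAttributes. The CurrentLogQueue and RequestQueuedLogger names are made up for this example, and the level juggling in flush! would still need attention in a multi-threaded server.

class CurrentLogQueue < ActiveSupport::CurrentAttributes
  # Reset automatically by Rails around each request, so queued entries
  # can't leak between requests.
  attribute :entries
end

class RequestQueuedLogger < Logger
  def add(severity, message = nil, progname = nil, &block)
    CurrentLogQueue.entries ||= []
    CurrentLogQueue.entries << -> { super(severity, message, progname, &block) }
  end

  def flush!(level = nil)
    old_level = self.level
    self.level = level if level

    (CurrentLogQueue.entries || []).each(&:call)
    CurrentLogQueue.entries = []
  ensure
    self.level = old_level
  end
end

RequestStore or concurrent-ruby could fill the same role if you aren't on Rails.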
If you're interested in a battle-hardened version of this logger that is ready to use, let us know and I'll see what I can put together :)
There's some interesting discussion happening on Reddit.
Have a great day!