Dynamically Changing log4j or logback log levels in Java
Changing the log level in Java
Programmatically changing the log level in Java is easy. The only up-front hitch is that you just can't do it via SLF4J -- you'll need to use the underlying logging framework.
Log4j has a Configurator class we can use and in Logback we simply get the logger and set the level. Here's what that looks like.
// Log4j 2 (imports: org.apache.logging.log4j.Level,
//                   org.apache.logging.log4j.core.config.Configurator)
Configurator.setLevel(loggerName, Level.INFO);

// Logback (imports: ch.qos.logback.classic.Level, ch.qos.logback.classic.Logger,
//                   org.slf4j.LoggerFactory)
Logger logger = (Logger) LoggerFactory.getLogger(loggerName);
logger.setLevel(Level.INFO);
But that's not really why you're here, is it?
What you're actually after is changing the log level without restarting the application. What you're actually after is getting more debug information from a production system without having to redeploy, or turning off excessive logging that is driving up your log ingestion bill.
Changing the log level programmatically is only half the battle (or maybe a 1/10th of the battle). In any real application, we now need to figure out how this code gets called.
This challenge is more complex, but it breaks down into two big parts:
- Where does the user input the logger and level?
- How does this system broadcast this information to all services?
Capturing the user's desire to change the log level
The developer wants more debug information about FooClass. How do we capture this intent? Collecting the changes comes in a few flavors.
An API request to change the levels
Your developer can POST to an internal URL. Something like POST /log-levels?logger=com.example.FooClass&level=DEBUG would work. The question here will be how to broadcast this change to all instances of the service.
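Here's a minimal sketch of what such an endpoint might look like, assuming Spring MVC and Log4j 2; the path, parameter names, and controller class are illustrative, not a standard API.

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LogLevelController {

  // Handles POST /log-levels?logger=com.example.FooClass&level=DEBUG
  @PostMapping("/log-levels")
  public String setLogLevel(
    @RequestParam String logger,
    @RequestParam String level
  ) {
    // Level.toLevel parses the string and falls back to DEBUG for values it doesn't recognize
    Configurator.setLevel(logger, Level.toLevel(level));
    return "Set " + logger + " to " + level;
  }
}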
A JMX Request to change the levels
Log4j supports JMX to trigger configuration reloads or edit the config file itself. There's even a client GUI! As with using an API request to a custom endpoint, that'll only work on one application instance at a time.
A Shared File
Your developer can edit a shared log4j.properties or logback.xml somewhere. This is the most common way to do this, but sharing the file around can be an annoying infrastructure challenge, and the behavior if you corrupt the file is not always obvious. Log4j also supports (re)loading configuration from a URI, which is still basically just a remote shared file or files.
A Centralized UI
Maybe you build an internal UI, or you use something like Spring Boot Admin or Prefab. The Spring Boot Admin UI looks like this:
The Prefab UI is similar, giving us a hierarchical view as well as some feedback on log volume at each level.
A centralized UI may or may not have knowledge of the loggers you're using. Prefab knows both the logger names and how frequently they are used, which is what lets it render this UI: you don't have to type the logger name correctly yourself, and you can see how many messages per second each logger is producing at each level. To do this, you'll need to send aggregate telemetry from logging calls back to a central location.
Broadcasting the change to all instances of the service
Once we've captured the intent to change the log level, we need to broadcast this change to all instances of the service. Some approaches, like shared files or URI-based configuration, handle this easily; others, like the API endpoint approach, have shortcomings in this department. Hit that custom logging-configuration API and the load balancer delivers the POST to one instance of the service, and we change the log level there. But... what about the other instances? Hope that was the one you wanted!
Broadcasting the change typically takes one of the following forms:
Ignoring this problem
We could skip this step and only change the log levels on the one instance that received the user intent. Depending on your use case, this may be sufficient. If you only have a single instance, or you are just looking for a sampling of logs, it should serve you well. If you have multiple instances or you are looking for a specific log message, this will not work and may lead to confusion.
Service Discovery
We could implement some type of service discovery to propagate the change to peer application instances. If each instance is aware of the other instances, then we can broadcast the change to all of them. It's unlikely you're going to want to build this out yourself just for this logging use case, but if this is technology you already have in place, it could be a reasonable option.
NFS File Sync / URI Sync
We can have our log4j.properties synced over something like NFS and set clients to monitor/scan the file/URI for changes. Log4j uses <Configuration monitorInterval="30"> to refresh. In Logback this looks like <configuration scan="true" scanPeriod="30 seconds">. This may be a reasonable option if you already have a shared file system in place, but it wouldn't be the first choice for most people.
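For reference, a minimal logback.xml with scanning enabled might look something like this; the appender and example logger are illustrative:

<configuration scan="true" scanPeriod="30 seconds">
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <!-- Edit this level in the shared file; running instances pick it up on the next scan -->
  <logger name="com.example.FooClass" level="DEBUG"/>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>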
PubSub
We can listen to a PubSub system for changes and programmatically configure the logging system using the code above. Your API endpoint could publish to a Redis pub/sub channel and all instances would listen for changes. Sandeep has a full-fledged example of this (in Python) in route triggered state update across workers.
This is likely the easiest solution if you are going to DIY. Redis Pub/Sub is easy to use; you'll need to decide on the message format, but it's a straightforward approach.
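If you go the Redis route, here's a rough sketch of the listening side using the Jedis client and Log4j 2; the channel name and the logger=level message format are assumptions, not a standard.

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class LogLevelSubscriber {

  public static void main(String[] args) {
    JedisPubSub listener = new JedisPubSub() {
      @Override
      public void onMessage(String channel, String message) {
        // Assumed message format: "com.example.FooClass=DEBUG"
        String[] parts = message.split("=", 2);
        if (parts.length == 2) {
          Configurator.setLevel(parts[0], Level.toLevel(parts[1]));
        }
      }
    };

    // subscribe() blocks, so run this on a dedicated thread in a real service
    try (Jedis jedis = new Jedis("localhost", 6379)) {
      jedis.subscribe(listener, "log-level-changes");
    }
  }
}

The API endpoint from earlier would then publish to the same channel (for example, jedis.publish("log-level-changes", logger + "=" + level)) and every subscribed instance applies the change.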
One wrinkle is that the log level change will reset as instances reboot. Pub/Sub is fleeting: instances that are listening at the time of the change will set their levels, but new instances that start afterwards will not see the change. If you want new instances to pick up the change, you'll need to use something like Kafka or Redis Streams, which will persist the message and allow new instances to read it on startup.
Filter & Dynamic Configuration
The last option is a bit different. Rather than listen for changes and set the log level, we can use a log filter that decides whether or not to log each message by looking up the current log level in a dynamic configuration system.
This dynamic configuration system will use some form of pub/sub to ensure it's working from the most current configuration, so this could also be seen as a variation of the pub/sub approach above, but it's distinct in that the pub/sub system is not a one-off just for logging. This is what Prefab does: specifically, we use SSE to receive the changes and a Logback TurboFilter that looks up the current value. Pseudocode for this looks like:
public FilterReply decide(
  Marker marker,
  Logger logger,
  Level level,
  String s,
  Object[] objects,
  Throwable throwable
) {
  // Ask the dynamic configuration system for the level currently configured for this logger
  Optional<Prefab.LogLevel> loglevelMaybe = dynamicConfig.getLogLevel(logger);
  if (loglevelMaybe.isPresent()) {
    Level calculatedMinLogLevelToAccept = LogbackLevelMapper.LEVEL_MAP.get(
      loglevelMaybe.get()
    );
    if (level.isGreaterOrEqual(calculatedMinLogLevelToAccept)) {
      return FilterReply.ACCEPT;
    }
    return FilterReply.DENY;
  }
  // No dynamic level configured; fall back to the normal Logback configuration
  return FilterReply.NEUTRAL;
}
Implementing this as a filter has a few advantages:
- It gathers information about log requests, whether or not they make it to a log file.
- It allows custom log levels based on information beyond the class name, like application state or local context.
- Filters have more context, so you can log at DEBUG only for a certain user, instead of just a certain class.
To install this filter, check out the set up dynamic logging with Logback guide.
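For orientation, a TurboFilter like this one is registered at the top level of logback.xml; the class name below is illustrative:

<configuration>
  <!-- TurboFilters run for every logging call, before the normal level check -->
  <turboFilter class="com.example.DynamicLogLevelTurboFilter"/>
  <!-- appenders, loggers, etc. -->
</configuration>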
Summary
Dynamic logging is a great way to get more information out of your production system without having to redeploy.
At first, the problem looks like a simple question of figuring out how to change the log level programmatically, but as we get into the real world we see that the real challenges are:
- Building an ergonomic solution for our developers to input their desired log levels
- Propagating this change across our service instances
Building this ourselves is very doable. Adding a POST endpoint to our services and using Redis Pub/Sub to broadcast the change is a perfectly viable solution. Alternatively, a service like Prefab will give us the capability out of the box, with some nice bells and whistles like a UI, log volume information, and advanced targeting.