
6 posts tagged with "Product"


· 2 min read
Jeff Dwyer

Structured logging is great. It works just like you'd expect it to. I got into a ton of detail about Tagged vs Structured Logging last week, but the short version is that structured logging is fabulous for searching and analyzing your logs.

I'm happy to say that prefab-cloud-ruby 1.1.0 supports structured logging for Ruby and Rails.

Here's what that looks like. In our controller we can write:

class CalculatorController < ApplicationController
  def index
    @results = logic(height, weight)

    # the old way: string interpolation
    logger.debug "😞😞😞 finished calc results height=#{height} weight=#{weight} results=#{@results.size}"

    # the new way: structured logging
    logger.debug "🍏🍏🍏 finished calc results", height: height, weight: weight, results: @results.size
  end
end

Even with Copilot assistance, this is so much nicer than the old way of string formatting log output.

Running the server locally, we get the following output:

DEBUG  2023-09-19 14:30:18 -0400: app.controllers.calculator_controller.index 🍏🍏🍏 finished calc results height=19.0 results=6 weight=0.0

If you're using log_formatter: Prefab::Options::JSON_LOG_FORMATTER then you'll get JSON output instead.

"message":"🍏🍏🍏 finished calc results",

Of course the real reason to do this is to make it easier to search and analyze your logs. So I'll deploy and then change the log level for our controller to debug to make sure we see our output.


Now we can see how nicely these show up in Datadog:

Dogs in the right boxes

Structured logging is great, and with prefab-cloud-ruby you're just a few minutes away from having it in your app. Check out the docs or learn more about dynamic logging. Happy logging!

· 3 min read
Jeff Dwyer

Prefab Summer

As the sun blazed outside, our team was hard at work (mostly inside). We're excited to share the fruits of our labor with you. Here are 7 big features we shipped this summer:

JS Dynamic Logging in the Browser​

We've always had the ability to dynamically change your log levels. But now you can do it from the front end. This is a great way to debug a specific user or context and turns browser logging collection from an expensive firehose into a targeted laser.

Dynamic Javascript Logging

Timed Loggers​

Ever turned logging to debug and forgotten to turn it back to warn? That can be an expensive mistake. Save yourself from unnecessary logging costs due to an "oops" by setting a logger to debug for just an hour. It will automatically revert when the hour is up.


Evaluation Charts​

Are your feature flags really evaluating like you think they are? How often is this dynamic config being used? Is this flag a zombie? Our new charts report the evaluation telemetry so you can see a clear and concise view of how your configurations are running in the wild.



Multi-Context​

Feature flags aren't just for product teams anymore. Engineers, we've got you covered too! With our new multi-context feature, you can now tailor your FF experience to your specific role and needs. I wrote more about better feature flags for engineers in Feature Flags for a Redis migration.

This is something that only the top-tier feature flag players support, so we're excited to level up and join them. Here's an example of targeting a feature flag to an availability zone:

multi context

Dynamic Config Rules​

Our dynamic configuration has taken a leap forward as well. Detailed targeting rules aren't just for feature flags anymore. With our enhanced rule & criteria capabilities, you can now customize and target your dynamic configuration. Combine this with multi-context and dynamic config really gives developers a new level of power working with their deployed systems.

config rules

Context Search & Variant Assignment​

Prefab now lets you search the contexts that have been evaluated. This means you can search for any user and find the associated contexts that they're evaluated with.

But there's more! From the context details page, you can now assign specific variants, giving you a convenient way to put people in a feature. Read more in The Joy of Variant Assignment with Prefab Feature Flags

Context Search

Improved Code Samples​

We understand the importance of context. That's why we've enriched our code samples to help you see which context attributes need to be present for your flag to evaluate according to the rules.


Shared Segments​

Shared segments are a power feature that lets you re-use segments across your flags and configs. Our user interface now supports this, allowing you to DRY up your rules.

Shared segments

Next Steps​

Next up? We asked ourselves what the best way to use feature flags would be, and we came up with something we didn't know was possible. Turns out it is possible :)

Stay tuned for more exciting updates! Or Book a Demo to get a tour of the latest and greatest.

· 8 min read
Jeff Dwyer

Introduction: Feature flags are phenomenally useful, but most examples are from a product perspective and focus on the user context. That's great, but developer use cases are equally powerful if we make the tools understand engineering-specific contexts. In this post, we'll explore what that looks like by walking through an example of a Redis cache migration. /imagine canadian geese migrating with the power of rocketships.

feature flag migration

How to Manage a Redis Migration Safely Using Feature Flags​

For this example, let's imagine that we're using a 3rd-party Redis provider and we've been unhappy with the reliability. We've spun up a Helm chart to run Redis ourselves and we're ready to migrate, but of course we want to do this in a way that's safe and allows us to roll back if we encounter any issues.

We're using this Redis as a cache so we don't need to worry about migrating data. It is under heavy load however and we don't trust our internal Redis just yet, so we'd like to move over slowly. Let's imagine we've set the new Redis up in a particular availability zone, but we aren't sure what that will mean for latency.

Here's the migration plan we've come up with: we'd like to take 10% of the nodes in US East A and point them at the internal Redis. If that goes well, we'll scale up to the rest of the nodes in that availability zone. Then we'll move on to US East B and finally US West. The diagram here shows Step 1 of the rollout pictorially.


  1. 10% of Nodes in the same AZ will test basic functionality
  2. 100% of Nodes in the same AZ will start to test performance & load
  3. Adding US East B will test cross AZ latency
  4. Adding US West will test cross region latency

So, how do we do this?

Step 1: Code the Feature Flag​

For this example, let's use the connection string as the value of the feature flag. We can have the current value be the 3rd-party provider's redis:// URL and the new one be our internal redis:// URL. To use the value, we'll just modify the creation of the Redis client.

In simple terms, this will look something like:

String connStr = featureFlagClient.get("redis.connection-string");
return RedisClient.create(connStr).connect();

Implementing this is a bit more complex. The Redis connection is something you'll want to create as a Singleton, but if it's a singleton it will get created once and won't change until we restart the service. Since that's no fun, we'll need a strategy to have our code use a new connection when there is a change. We could do this by listening to Prefab change events, or we could use a Provider pattern that caches the connection until the connection string changes. Here's a Java example of the provider pattern:

Full Code of Provider Pattern

public Provider<StatefulRedisConnection<String, String>> getConfigControlledConnection(
  ConfigClient configClient
) {
  record ConnectionStringConnectionPair(
    String connectionString,
    StatefulRedisConnection<String, String> connection
  ) {}

  AtomicReference<ConnectionStringConnectionPair> currentConfigurationReference = new AtomicReference<>();

  return () ->
    currentConfigurationReference
      .updateAndGet(connPair -> {
        // read the current connection string from config
        // (assumed lookup call; adjust to your config client's API)
        String currentConfigString = configClient
          .get("redis.connection-string");
        if (connPair == null || !connPair.connectionString.equals(currentConfigString)) {
          // first call, or the connection string changed: build a fresh connection
          return new ConnectionStringConnectionPair(
            currentConfigString,
            RedisClient.create(currentConfigString).connect()
          );
        }
        return connPair;
      })
      .connection();
}

Now, the bigger question. How do we:

  1. Configure our feature tool to randomize the 10% of nodes.
  2. Target the feature flag to just the nodes in US East A.

Step 2: Giving the Feature Flag Tool Infra Context​

A brief detour into some context on "context". Most feature tools think of context as a simple map of strings. It usually looks something like this:

Typical Context for Product​

"key": "1454f868-9a41-4419-a242-d5a872ec5f04",
"user_id": "123",
"user_name": "John Doe",
"team_id": "456",
"team_name": "Foo Corp",
"tier": "enterprise"

This starts out fine, but it can start to feel icky as you add more and more things to the context. Is it ok to push the details of our deployment in here? How much is too much? This is the "bag o' stuff" model and it's a bit like a junk drawer.

The bigger issue, however, is whether the tool allows us to randomize by something other than the user key. Our rollout plan is to test the new Redis on 10% of nodes, not 10% of users. If your feature flag tool isn't written with this use case in mind, this can be hard or impossible to do.

So yes, as developers, we need more information. Oftentimes we are operating with a user and team context, but we also need to know about things like the deployment, the cloud resource, and the device. A proper context or "multi-context" should look something like this:

Context For Developers​

"user": {
"key": "1454f868-9a41-4419-a242-d5a872ec5f04", // a unique key for the user / anonymous user
"name": "John Doe",
"id": "123" // for non anonymous users
"team": {
"key": 1,
"name": "Team 1",
"tier": "enterprise"
"device": {
"ip": "",
"locale": "en_US",
"appVersion": "1.0.1",
"systemName": "Android",
"systemVersion": "11.6"
"request": {
"key": "c820567a-9f2d-4b3d-85e5-9ff4132d0e08", // a unique key for the request
"url": "",
"path": "/user/123"
"deployment": {
"key": "pod/user-service-bcddb8c8d-mxz6v",
"namespace": "production",
"instance-type": "m4.xlarge",
"instance-id": "i-0a5b2c3d4e5f6g7h8",
"SHA": "27bdd4f9-3530-46a6-8188-9d90467f086e"
"cloud": {
"key": "i-07d3301208fe0a55a", //host-id
"platform": "aws_ec2",
"region": "us-east-1",
"availability-zone": "us-east-1a",
"host-type": "c4.large"

This is a lot more information, but it's also a lot more useful and structured. The key piece here is that we are specifying cloud.key as the host id. This is the unique identifier for the host that we can use to randomize the rollout.

We can use this as our "sticky property" in the UI. This is the property that the client will use to randomize on. The host id doesn't change, so each node will stay in the bucket it was originally assigned to.

feature flag by cloud host
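To make the bucketing concrete, here's a rough sketch of how sticky-property bucketing can work (illustrative only; Prefab's actual hashing algorithm will differ):

```ruby
require "digest"

# Sketch of sticky-property bucketing: hash the flag name plus the sticky
# property (here cloud.key, the host id) into a stable bucket from 0-99.
# A node is in a P% rollout when its bucket is below P.
def rollout_bucket(flag, sticky_key)
  Digest::MD5.hexdigest("#{flag}:#{sticky_key}").to_i(16) % 100
end

def in_rollout?(flag, sticky_key, percent)
  rollout_bucket(flag, sticky_key) < percent
end

# The same host always hashes to the same bucket, so assignment is stable
# across evaluations and across processes.
in_rollout?("redis.connection-string", "i-07d3301208fe0a55a", 10)
```

Because the bucket is a pure function of the flag name and host id, every SDK instance agrees on which hosts are in the 10% without any coordination.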

Step 3: Targeting the Feature Flag to an Availability Zone​

The next step is simply to add a targeting rule. We should have all of the context attributes available to select from in the UI. We can select cloud.availability-zone and set it to us-east-1a.

feature flag by cloud host

Altogether that looks like this:

feature flag by cloud host

The feature flag is now enabled for 10% of the nodes in us-east-1a. If the node is not in that availability zone, it will get the default value.

Step 4: Observe Our Results​

It's always nice to observe that things are working as expected. What percent of all of the nodes are receiving the new Redis connection string?

As we go through the steps of our rollout plan and verify performance and latency along the way, we should see the number of evaluations increasing for our internal redis install.

feature flag by cloud host

Each bump in the graph should correspond to a step in our incremental rollout. If there are changes that aren't related, we may need to look into our assignment of the cloud.key, since that is how we are randomizing. We should also expect that the percentages here do not exactly match the percentages we set. For example, if we split 80/20 across 50 hosts, each host has an 80% chance of being in the group, but we might well see 38-42 hosts in that group due to random variation.

If we want to dig into the numbers we're seeing here, we can select "show rules" to see all configuration changes on the chart and determine which feature flag rules lead to our output numbers.

feature flag by cloud host


This is a simple example, but it shows how we can use feature flags to manage a migration in a safe and incremental way. It also shows how we can use multi-context feature flags to make feature flags more powerful for developers and engineers.

Finally, we see how nice it is to have a feature flag tool that gives us insight into the breakdown of our evaluations and the ability to dig into the numbers to see what's going on.

· 4 min read
Jeff Dwyer


Feature flags have become an integral part of our software development process, providing us the flexibility to release and test new features efficiently. While many feature flag tools offer similar functionalities, we had a unique vision of what would make our experience truly delightful. In this blog post, we'll share our personal journey with Prefab and how the introduction of variant assignment transformed our day-to-day feature flag management, leaving us happier with our own product.

What is Variant Assignment​

Variant assignment is a task we encounter frequently as developers, product managers, or sales representatives. Setting up a feature flag is great, but we almost always want to test it out. Variant assignment is when we specify that person X should be in a particular bucket. When I develop with a flag I'll usually assign myself to one variant and then flip-flop a few times to verify functionality. Once it's released I'll usually do the same for internal and external users, so they can check out the new feature. Unfortunately, this seemingly simple task can become tedious when dealing with tracking IDs and user data.

The Struggle with User Identification​

The best practice for identifying a user is something that looks like:

prefab_context = {
  user: {
    key: "7971f1c7-30ba-456f-be00-ab798f03d3b8", # GUID assigned to cookie before user is created
    id: 4233, # User ID in our database
    name: "Jane Doe"
  }
}

This lets us keep a consistent view of someone from visitor to user, respects PII by not using email, and gives us a useful handle on the person by using their name.

But there's a problem.

When we want to put a user into a feature bucket we typically know their name, but not their tracking GUID. Using Prefab for ourselves, we found remembering or locating their unique tracking IDs often disrupted our workflow. The constant back-and-forth between tools to find user information became a pain point we wanted to address.

I need to know my GUID

Prefab's Empowering Solution: Our Own Creation​

As creators of Prefab, we had the freedom to build the feature we craved - a seamless variant assignment process. What that should look like was pretty clear. I'd like to be able to search for a user by name, and then click to assign them to a variant.

Search for our user

There's a reason that most open-source and home-grown solutions don't have this feature though. A typical feature flag back end doesn't have any knowledge of the user context. Simple feature flag systems just store the flag rules and ship them out to the client SDKs. In order to build this feature, we need the client SDKs to start phoning home with the contexts they've seen.

Building the Feature​

Building the clients correctly was a good challenge. We wanted to make sure that we didn't add any unnecessary overhead to the client SDKs. To do that, the clients are built to only send new contexts that they haven't uploaded and to behave appropriately under heavy load.

By capturing the context server side, we were able to create profile pages for each user. This change allowed us to search for users effortlessly, using their names or any other information from the context. No more obscure tracking GUIDs - just straightforward assignment from the user's profile page.

Assign the variant to the user

Staying in the Flow with Prefab​

The addition of variant assignment in Prefab was a transformative moment for us. The ability to swiftly assign variants without any context switching or hunting for IDs made a remarkable difference in our day-to-day experience. We found ourselves in a delightful flow, focused on core development work, testing, and refining features.

As you evaluate Feature Flag tools or consider building your own, I'd encourage you to strongly consider adding this capability to your decision process. You can live without it, but you deserve nice things. Luckily with Prefab there's no reason not to give yourself a present.

Giving ourself a present

· 11 min read
Jeff Dwyer

Let's compare 7 top feature flag providers against the same test case.

Feature flags are great, but there are so many tools to choose from. In this comparison, we will evaluate 7 tools against the same test case. We'll test: Flipper, Prefab, Unleash, Flagsmith, LaunchDarkly, ConfigCat & Devcycle. We'll try to perform the same test case in each tool and share our results & screenshots. This should be a good way to quickly compare the UIs and features.

The Test Case​

In order to put these tools through their paces, we'll use a straightforward test case. Here's our scenario:

As a developer on the checkout team I would like to test 2 new checkout flows:

  1. A new multi-page-version.
  2. A new single-page-version.
  3. A control of the existing checkout experience.

We have 4 targeting requirements:

  1. We don't test on our enterprise customers, so I want team.tier = enterprise to get the control.
  2. I want the existing beta-users and internal-users to try the multi-page-version. Beta users is a list of user ids. Ideally I can store this list in one place and reuse it. Internal users is anyone matching an email address ending in
  3. Two teams complained about complexity, so they should evaluate the single-page-version.
  4. Everyone else should get a 33/33/33 split of the 3 versions.
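For reference, the 33/33/33 split in requirement 4 is typically implemented by hashing the user's key into a stable bucket. A rough Ruby sketch (not any particular vendor's algorithm):

```ruby
require "digest"

# Sketch of a deterministic 3-way variant split: the user's key hashes
# into a stable bucket from 0-98, and the bucket picks the variant.
CHECKOUT_VARIANTS = ["multi-page-version", "single-page-version", "control"].freeze

def checkout_variant(user_key)
  bucket = Digest::MD5.hexdigest("checkout-flow:#{user_key}").to_i(16) % 99
  CHECKOUT_VARIANTS[bucket / 33] # 0-32, 33-65, 66-98
end
```

The targeting rules (enterprise tier, beta users, the two complaining teams) are checked first in every tool below; only users who match no rule fall through to a split like this.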

Okay, let's see how our contestants do!

Choose your feature flags


FlipperCloud is a feature flagging system born out of a popular open source ruby gem. It's particularly popular among Ruby on Rails developers due to its ergonomic design and tight integration with the Rails ecosystem. Flipper has both an open-source library and a cloud-based service.

Trying to set up our test case in Flipper.Cloud was a bit of a challenge. Flipper does not support multi-variate flags; each flag can only be a boolean. To fully nail our test case I would need to hack around this and set up 3 flags, one for each variant. Let's lower the bar a bit and change the test case to just have 2 variants: on and off.

Our next requirement was to avoid the enterprise tier. Flipper.Cloud does not support specifying off for an actor or group, so we'll probably need to do this in code:

if team.tier != :enterprise && flipper.enabled?(:experiment)
  # do experiment
end

For our beta customers and target customers, we are able to target groups. Flipper is interesting in that, again, the group definition happens in code. This is pretty different from the other tools we are looking at, but could be convenient for a Rails monolith with complex targeting logic.

Flipper.register(:beta_customers) do |actor|
  # e.g. membership in a stored list of beta user ids
  BETA_USER_IDS.include?(
end

Flipper.register(:target_customers) do |actor|"") ||"")
end

The resulting UI looks like: Flipper.Cloud

FlipperCloud Takeaways​

  • Best for: Teams committed to being a Rails monolith and who don't need flags in JS.
  • Price: $20 / Seat
  • Test Case: 🙁 No support for multi-variate flags.
  • Features: Audit logs.
  • Architecture: Uses server-side evaluation, with adapters. Updates are polling.
  • Notes:
  1. No support for non boolean flags
  2. Can only use targeting to force into flag, not to exclude.
  3. Uses unusual "actors" and "groups" terminology.
  4. Group definitions are in code, not in UI.


Unleash is an open-source feature flagging system with a strong focus on privacy. Let's see how it does with our test case.

Unleash takes a pretty different approach to setting up our beta group and enterprise segments. My initial approach was to add these in as "strategies", like this:

Unleash Overrides

I was able to set up segments and the matching rules as you would expect; however, this doesn't work! Strategies don't include a value. These fine-grained rules only determine whether we should return the whole variant set or not.

Instead, we are meant to set overrides on the variants themselves. Unleash Overrides

This works for our Enterprise tier which was a simple property match.

But for our beta-group this functionality doesn't allow us to use our shared segments.

For the internal users rule, we aren't able to use an ends-with operator on this screen. We can only use an equality match.

So Unleash passes on the team-tier and fails on the other two.

One last note on the UI: the overview page is, in my subjective opinion, confusing.

Unleash Overview

It's very hard to understand what's going on because the logic is split between the rules and variant pages. And if we dive into the variants page, we still can't see the overrides without going to the edit screen.

Unleash Variant

Unleash Takeaways​

  • Best for: Privacy / EU Compliance.
  • Price: Starts at $80 per month for 5 users.
  • Test Case: 😐 Challenges with targeting UI.
  • Architecture: Interesting architecture supporting enhanced privacy because customer data stays on-premises or in cloud proxies you run.
  • Notes:
  1. Problematic targeting UI doesn't pass our test case.
  2. No streaming updates, polling only.
  3. Nice Demo instance you can play with.


Prefab is a newer entrant into the FeatureFlag market. I'm biased, but I think it passed the test with flying colors.

  • We are able to define the 3 variants for our flag.
  • We can setup a property match for the enterprise tier.
  • We can use shared segments to target the beta and internal customers.
  • We can do a 33% rollout across the rest of our customers.


Prefab has a flexible context system that allows you to set context at the beginning of the request so you don't need to specify the context for every flag evaluation.

Prefab explains how to do this in the UI with helpful code suggestions. You can see the context you'll need to evaluate the flag. Prefab

Prefab Takeaways​

  • Best for: Teams looking for real-time updates, robust resiliency, and competitive pricing.
  • Price: Super competitive pricing. $1 / pod charged minutely. $1 / 10k client MAU.
  • Test Case: 😀 Strong Pass.
  • Features: Robust audit logging, shared segments, real-time updates. Missing features: Full experimentation suite, reporting & advanced ACL / roles.
  • Architecture: Server-side evaluation. Real-time updates with SSE. CDN backed reliability story.
  • Notes:
  1. Clients in Ruby, Java, Node, Python, JS & React.
  2. Also provides other developer experience features like dynamic log levels.
  3. Good story around local testing with default files.


Our next comparison is with Flagsmith. Flagsmith is also open-source and touts itself as a good option for cloud or on-premises deployments.

Flagsmith has good support for multivariate flags, so that's a relief.


The actual override setup is interesting. We specify rules and then specify the weights for each variant. This worked, but led to a very long page of rules.

I also found the UI unclear on how to create the beta group: if I want the beta group to be ids 1 or 2, it wasn't clear whether to use = and comma-delimit, or use a regex. Flagsmith

  • Best for: Flexible deployments & on-premises hosting.
  • Price: Starts from $45 for 3 users per month. A free version with limited functionalities for a single user is available.
  • Test Case: Strong Pass 😃 Shared Segments and multivariate support.
  • Features: Shared Segments. Remote configs, A/B testing, integration with popular analytics engines.
  • Architecture: Open source, provides hosted API for easier deployment during development cycles.
  • Notes:
  1. Flag targeting is split across multiple UI tabs, can make it difficult to get an overview of flag settings.
  2. Targeting individual users only available on higher plan tier.
  3. No streaming updates, polling only.


LaunchDarkly is a well known name in the feature flagging space. They have a robust feature set and a strong focus on enterprise customers. Let's see how they do with our test case.

Unsurprisingly, LaunchDarkly does a great job of handling our test case. We can set up a property match for the enterprise tier, a shared segment for the beta group and internal users, and a user attribute match for the email.

The UI is powerful, and you can see some of the more advanced enterprise features like workflows and prerequisites.


LaunchDarkly has all the features you'll need, but you're going to pay for it. The pricing is based on the number of users you have. Anecdotally this makes it quite challenging for larger orgs to rationalize the cost. A number of teams I've talked to end up sharing accounts, or building internal tools around the API in order to save money on seats, though of course this is a bad idea and negates the benefits of permissions and audit logging.

  • Best for: Price Insensitive Enterprise.
  • Test Case: 😀 Strong Pass.
  • Price: Starts from $10 per user per month, however this is a low-ball. Many features / kickers force enterprise adoption. I've heard quotes in the range of "25 users for 30k a bucket" which is roughly $100/user/month.
  • Features: All the basics plus: scheduling and workflows. AB testing & advanced permissions.
  • Architecture: Enables the dev team to wrap code with feature flags and deploy it safely, ability to segment user base based on various attributes
  • Notes: Flag editing view gets long, no read only overview to see flag rules at a glance.


ConfigCat is a feature flagging service that also supports remote configuration.

ConfigCat did a good job supporting our test case. We can set up shared segments for the beta group and internal users, and a user attribute for the email.

Config cat ui

One important note is that ConfigCat does not have a concept of "variants": each rule returns a simple string. This means you could mistype a variant from one rule to another, which is just something to be aware of.

ConfigCat Takeaways​

  • Best for: Developer focussed configuration.
  • Test Case: 😀 Strong Pass.
  • Price: Free for up to 10 flags. Tiers at $99 and then $299 for unlimited flags and > 3 segments.
  • Features: Shared segments, Webhooks, ZombieFlags report
  • Architecture: Polling, server side evaluation.
  • Notes:
  1. No concept of variants.
  2. I couldn't do email ends_with_one_of; I could only do match one of or contains.
  3. Updates via polling, not real-time streaming.


Devcycle is a feature flagging service focussed on developers. Let's see how it fares on our test case.

Overall, Devcycle did well and I was able to set up our test case. The main knock was the lack of shared segments, meaning I'll need to define the beta group in multiple flags and risk them getting out of sync.

Devcycle UI

The resulting UI does end up very long, making it a bit of a challenge to get an overview of the test.

The UI was generally straightforward; however, I found it annoying to have to specify a name for each of my rules. Devcycle UI

DevCycle Takeaways​

  • Best for: Developer focussed configuration.
  • Test Case: 😐 Pass, but no shared segments.
  • Price: Free for up to 1000 MAU. $500 for 50k MAU+. Pricing axis on client side MAU and Events.
  • Features: AB Testing with Metrics
  • Architecture: Streaming & Polling, server side evaluation.
  • Notes:
  1. No shared segments
  2. Redundancy in UI meant it was hard to get an overview of our test.
  3. Offers typed context


That's a wrap. We looked at 7 different flag providers and how they handle a common test case. We gave a "Strong Pass" to 4 of the tools, 2 of the tools got a "Pass" because they lacked shared segment support, and 1 got a "Fail" for not supporting multi-variate flags.

Competitor | Best For | Test | Cost | Pricing
Flipper Cloud | Rails Only | 🙁 | 💰💰 | $20 / Seat
Prefab | Functionality for Less at any Scale | 😃 | 💰 | $1 / Connection. Usage based.
Unleash | Privacy / EU Compliance | 😐 | 💰💰 | $15 / seat
Flagsmith | Flexible deployments & On-Prem Deployments | 😃 | 💰💰 | $20 / seat
LaunchDarkly | Price Insensitive Enterprise | 😃 | 💰💰💰💰 | $17 / seat, $70+ / seat for all features
ConfigCat | Developer focussed configuration | 😃 | 💰 | $99 for > 10 flags. Usage based.
Devcycle | Developer focussed with Metrics | 😐 | 💰💰 | $25 for 1000 MAU. $500 for 50k MAU. Usage based.

· 4 min read
Jeff Dwyer

Question: how do you change the log level of a running Rails application?

Answer: You don't.

Technically, the Rails docs inform us that we just need to do:

config.log_level = :warn # In any environment initializer, or
Rails.logger.level = 0 # at any time

However, that "at any time" is doing a lot of heavy lifting. You'd have to SSH into each server, edit the config file, restart the server, and then hope you didn't make a typo.

As you can imagine, I'm here to show you something better, but first let's think about why you might want to change the log level of a running application in the first place and "what problem are we trying to solve".

Why change log levels?​

Typically we want to change log levels because there's a bug that's hard to reproduce locally, and there's just no substitute for understanding exactly what is happening in our production or staging environment. Trying to use config.log_level = :debug to solve this is like trying to do surgery with a sledgehammer: you're going to end up with a lot of collateral damage (log aggregation expense) and a lot of wasted time.

This is because config.log_level = :debug is so non-specific. If you're trying to debug user 4234's billing issue, do you need to see the logging from every single template render for every user? Probably not.

If you had a magic wand, what you'd really love would be able to target the logging to:

  • The billing code
  • The billing Sidekiq job
  • User 4234
  • Just for the next hour while you're debugging

But can we really do that? Yes, we can, and you're 10 minutes away from trying it yourself.

So, how do we do that?​

First things first: we aren't going to change your log aggregator. You can still use Datadog or Logtail or whatever you like. We're just going to help you get much more value out of them.

Second, we aren't going to change your logging code. All of your logger.debug or Rails.logger.debug calls are perfect just the way they are.

Here's what it does. Pseudocode is worth 1000 words, so here's the pseudocode of what happens:

class PrefabLogger < ::Logger
  # path = app.models.billing.calculate_tax
  # level = :debug
  def log(message, path, level)
    # do the logging only if the dynamically configured
    # level for this path allows it
    do_the_logging(message) if level >= Prefab.get("log-level-#{path}")
  end
end

class Prefab
  def self.get(key)
    @dynamic_config_map.get(key, current_context)
  end
end

Rails.logger =

Now of course we've moved the heavy lifting to our @dynamic_config_map, but this is pretty simple to conceptualize. The map is a threadsafe map of config keys to values. It is populated from a CDN with the latest values, and it is updated in real time as the values change. The values in the map can have rules inside them so that we can filter the log levels based on the context of the request.

To get the current_context in the above pseudo code, we'll just set some properties in an around_action in our ApplicationController.
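Here's a sketch of the mechanism behind that around_action (Prefab's real API differs; this just illustrates the thread-local context idea):

```ruby
# Sketch: request properties stashed in a thread-local context that
# log-level rules can consult. (Illustrative, not Prefab's actual API.)
module LogContext
  def self.with(props)
    previous = Thread.current[:log_context]
    Thread.current[:log_context] = (previous || {}).merge(props)
    yield
  ensure
    # restore whatever was there before, even if the block raises
    Thread.current[:log_context] = previous
  end

  def self.current
    Thread.current[:log_context] || {}
  end
end

# In a Rails controller this would be wired up roughly like:
#   around_action { |_controller, action| LogContext.with(user_id: { } }
```

Because the context is per-thread and scoped to the block, a rule like "debug for user 4234" only fires on that user's requests, and the context never leaks across requests.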

Dynamic logging at its core is just a special case of dynamic configuration. It's a simple solution that gives us a ton of power.

The User Experience​

So, how do we actually use this? The Prefab UI has us covered. In the LogLevel UI, we'll see a list of all the log levels that are currently being used in our application. For any package, any class, and even any method, we can simply click and change the log level of any of these loggers.

Change Log Levels
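Under the hood, resolving a level for a logger like app.models.billing.calculate_tax can be a most-specific-path-wins lookup. A rough sketch (not Prefab's actual resolution code):

```ruby
# Sketch: walk a dotted logger path from most to least specific and take
# the first configured level, falling back to a root default.
def level_for(path, config, default: :warn)
  parts = path.split(".")
  until parts.empty?
    key = parts.join(".")
    return config[key] if config.key?(key)
  end
  default
end

config = { "app.models.billing" => :debug }
level_for("app.models.billing.calculate_tax", config) # => :debug
level_for("app.views.layouts", config)                # => :warn
```

Setting a level on "app.models.billing" in the UI therefore affects every class and method underneath it, while the rest of the app stays at the default.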

We can also target specific loggers by using a targeted logger. This has the same targeting power as the Prefab Feature Flag system, so you'll have no problem laser targeting the loggers that you want to change.

Change Log Levels

That's it! Truly, it's that simple. It's also very easy to give this a try: cut a branch, sign up, throw in your API key, set the logger, run your app, and start using dynamic logging in just a few minutes. Full documentation for the ruby-sdk.


I hope you enjoyed this quick tour of dynamic logging. It's a simple solution to a common problem, and once you get used to it, you'll wonder how you ever lived without it. Over time it will start to change how you think about logging. Without it, there's not much point putting in Rails.logger.debug statements, because you're never going to see them; with it, you can start to think about logging as an as-needed tool that you have in your pocket for when you need it most.

So, I encourage you to give dynamic logging a try, and experience the benefits of fine-tuning your log output. Happy debugging! 🚀