
20 posts tagged with "Product"


Release: Secret Management Across Languages

· 3 min read
Jeff Dwyer
Prefab Founder & Engineer

I'm really excited to unveil our latest product: a robust and easy-to-use Secret Management system.

Secrets are... a pain in the ass. Even at Prefab, where we are ostensibly experts on dynamic configuration of applications, secrets were the fly in our ointment. Our code and practices around secrets were secure, but they were a pain to manage. Every time we wanted to add another 3rd-party API key, it meant re-learning how to do things and opening PRs into an infrastructure repo that wasn't part of our normal flow.

I am so much happier with how we are handling secrets today and I'm excited to share.

CLI-Based Workflow for Enhanced Security

When we went to build secrets, we had one big guardrail: don't screw it up. With this in mind, we had a strong desire that Prefab should have zero knowledge of your secrets.

The best way to achieve this was with a CLI-based approach, because that ensures your secrets are always encrypted locally, with an encryption key that we never see.
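To make the zero-knowledge idea concrete, here's a minimal Python sketch of that flow using the cryptography library's Fernet (AES-based) encryption. This is an illustration of the concept, not the actual Prefab CLI: the point is simply that encryption and decryption both happen on your side, with a key the service never sees.

# Illustrative sketch only -- not the Prefab CLI.
# Encrypt locally; the config service only ever stores ciphertext.
from cryptography.fernet import Fernet

# 1. A shared encryption key, generated once and kept out of the service's hands
#    (e.g. in an env var or key file that your developers and deploys have).
shared_key = Fernet.generate_key()

# 2. "CLI" side: encrypt the secret locally before it leaves your machine.
ciphertext = Fernet(shared_key).encrypt(b"sk-my-3rd-party-api-key")

# 3. Only the ciphertext is stored as configuration.

# 4. Application side: decrypt at read time with the same shared key.
plaintext = Fernet(shared_key).decrypt(ciphertext)
assert plaintext == b"sk-my-3rd-party-api-key"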


Cost-Effective Solution for Shared Secrets

There are some other good secret management platforms out there, but to us paying per user for something simple like this just didn't feel right.

In our mind, secrets are just another piece of configuration, albeit one that needs to be decrypted before you use it.


Do you really need to pay per user so that a library can run AES.decrypt() on your configuration? We don't think so.

Say Goodbye to Insecure Practices

Secrets can feel like a shell game, and it can be frustrating to get a new developer the secrets they need to run the application. It's not uncommon for a developer to Slack a secret around to get someone unstuck quickly.

Having a single secret means that when new secrets are added, all of your developers need to do... nothing at all. With just their regular Prefab API Key and the single shared secret, your developers can focus on what they do best, free from the hassles of managing .env files.


Flexible and Cross-Language Secret Sharing

We like Rails a lot at Prefab, and our solution is heavily inspired by how Rails does Credentials. The problem with Rails credentials for us is simply that we aren't just a Rails monolith. We need secrets in all of our code, which meant that Rails credentials didn't work for us.

All we needed to do to build this was to make sure that all of our clients can consistently decrypt in the same way.


With Prefab secrets, whether you're working in Rails, Node, Java or Python, you now have a unified solution for all your applications.

Wrap

If you're interested in a simple improvement to the way your organization handles secrets, take a minute to check out the secret documentation or create a free account to try it out. We'd love to get your feedback on what we've built.

Dynamic Config for OpenAI & Python

· 4 min read
Jeff Dwyer
Prefab Founder & Engineer

OpenAI's GPT models are great. However, as with any powerful tool, there are many settings to tweak to get the desired output. Parameters like temperature, top_p, model, and frequency_penalty can significantly influence the results. Changing these settings often requires code changes and redeployment, which can be cumbersome in a production environment.

Feel like watching instead of reading? Check out the video instead.

Prefab is a dynamic configuration service that allows you to modify these settings on the fly without redeploying your application. By the end of the blog post we'll have a working example of an OpenAI storyteller that can be reconfigured without restarting.

How Can I Change OpenAI Parameters Instantly?

We won't cover the full Python documentation here, but you'll need to have the openai and prefab_cloud_python libraries installed. If you haven't already, you can install them using pip:

pip install openai prefab-cloud-python

Next, set up your OpenAI API key and initialize the Prefab client:

import openai
import os
from prefab_cloud_python import Options, Client

openai.api_key = os.getenv("OPENAI_API_KEY")
prefab = Client(Options())

Prefab will look for an API key in the env var PREFAB_API_KEY. Get one with a free signup. With the Prefab client initialized, you can now fetch dynamic configurations for your OpenAI API calls. Here's an example of how you can use Prefab to dynamically set the parameters for a chat completion:

Before:

def response(name):
    prompt = f"You are a story teller in the style of Charles Dickens, tell a story about {name}"
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": prompt
            }
        ],
        temperature=0.5,
        max_tokens=1024,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
    return response

After:

def response(name):
    prompt = f"You are a story teller in the style of {prefab.get('storyteller')}, tell a story about {name}"
    response = openai.ChatCompletion.create(
        model=prefab.get("openai.model"),
        messages=[
            {
                "role": "system",
                "content": prompt
            }
        ],
        temperature=prefab.get("openai.temperature"),
        max_tokens=prefab.get("openai.max-tokens"),
        top_p=prefab.get("openai.top-p"),
        frequency_penalty=prefab.get("openai.frequency-penalty"),
        presence_penalty=prefab.get("openai.presence-penalty")
    )
    return response

To make this work, we'll need to set up the following configs in the Prefab UI.

OpenAI configuration options

In the above code, instead of hardcoding the values for parameters like model, temperature, and top_p, we're fetching them dynamically from Prefab using the prefab.get() method.
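One practical note: you may want a guard so that a missing or misconfigured key can't break the request path. A tiny wrapper like the hypothetical cfg() below does the trick, assuming prefab.get() returns None for keys that aren't set (check the client docs for its exact behavior):

# Hypothetical helper, not part of the Prefab SDK: fall back to a hardcoded
# value if the lookup comes back empty. Assumes prefab.get() returns None
# for missing keys; adjust to match the client's actual behavior.
def cfg(key, default):
    value = prefab.get(key)
    return value if value is not None else default

# usage inside response():
#   temperature=cfg("openai.temperature", 0.5),
#   max_tokens=cfg("openai.max-tokens", 1024),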

Instant Updates

Changes we make in the UI will be reflected in our application instantly without redeploying. See here as we change the storyteller from Stephen King to Charles Dickens. Our ChatGPT prompt updates without any code changes or restarts.

Change the storyteller

Targeting

Prefab also supports powerful targeting capabilities like you might expect from feature flags. If you pass in context to the client you could choose a different storyteller based on a user's preference.

OpenAI configuration options
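As a rough sketch of what that could look like in code, imagine attaching the current user as context when reading the config. The call shape below is an assumption for illustration, not the documented prefab_cloud_python API, so check the client docs for how context is actually passed:

# Hypothetical sketch: pass the requesting user as context so targeting
# rules (e.g. a "preferred-author" attribute) can pick the storyteller.
# The context argument shown here is an assumption, not the documented API.
user_context = {"user": {"key": "user-123", "preferred-author": "Stephen King"}}

storyteller = prefab.get("storyteller", context=user_context)
prompt = f"You are a story teller in the style of {storyteller}, tell a story about Bob"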

Debugging

Prefab also provides a robust logging system that allows you to see exactly what's happening with your configurations. You can change log levels on the fly for any class or method in your application. This is especially useful when you're trying to debug a specific issue.

Add the following to your code to enable logging:

def response(name):
    prefab.logger.info(f"processing request for {name}")

    response = openai.ChatCompletion.create(
        ...
    )
    prefab.logger.debug(response)
    return response

Then you can change the log level for your application in the Prefab UI. Change it to debug to see both of the log statements we added.

OpenAI configuration options

The logging output will be displayed in the console or in your existing logging aggregator.

2023-09-26 13:41:30 [info     ] processing request for bob     location=pycharmprojects.openaiexample.main.response
2023-09-26 13:41:38 [debug    ] {
  "id": "chatcmpl-836LrWRRLnwVPGYR5440RnyGpoKWu",
  ...
  "usage": {
    "prompt_tokens": 27,
    "completion_tokens": 200,
    "total_tokens": 227
  }
} location=pycharmprojects.openaiexample.main.response

Conclusion

Dynamic configuration with Prefab offers a seamless way to tweak and optimize your OpenAI projects without the hassle of constant redeployments. Prefab provides you with:

  • Flexibility: Easily experiment with different settings to find the optimal configuration for your use case.
  • Quick Iteration: No need to redeploy your application every time you make a change.
  • Centralized Management: Manage all your configurations from a single place.

Whether you're running experiments or need the flexibility to adjust settings on the fly in a production environment, Prefab provides a robust solution. So, the next time you find yourself redeploying your app just to change a single parameter, consider giving Prefab a try!

Release: Structured Logging for Ruby and Rails

· 2 min read
Jeff Dwyer
Prefab Founder & Engineer

Structured logging is great. It works just like you'd expect it to. I got into a ton of detail about Tagged vs Structured Logging last week, but the short version is that structured logging is fabulous for searching and analyzing your logs.

I'm happy to say that prefab-cloud-ruby 1.1.0 supports structured logging for ruby or rails.

Here's what that looks like in our controller:

class CalculatorController < ApplicationController
  def index
    @results = logic(height, weight)

    # OLD
    logger.debug "😞😞😞 finished calc results height=#{height} weight=#{weight} results=#{@results.size} "

    # NEW
    logger.debug "🍏🍏🍏 finished calc results", height: height, weight: weight, results: @results.size
  end
end

Even with co-pilot assistance, this is so much nicer than the old way of string formatting log output.

Running the server locally, we get the following output:

DEBUG  2023-09-19 14:30:18 -0400: app.controllers.calculator_controller.index 🍏🍏🍏 finished calc results height=19.0 results=6 weight=0.0

If you're using log_formatter: Prefab::Options::JSON_LOG_FORMATTER then you'll get JSON output instead.

{
  "severity": "DEBUG",
  "datetime": "2023-09-19T14:42:51.723-04:00",
  "path": "app.controllers.calculator_controller.index",
  "message": "🍏🍏🍏 finished calc results",
  "height": 19.0,
  "weight": 0.0,
  "results": 6
}

Of course the real reason to do this is to make it easier to search and analyze your logs. So I'll deploy and then change the log level for our controller to debug to make sure we see our output.

Dogs in the right boxes

Now we can see how nicely these show up in Datadog:

Dogs in the right boxes

Structured logging is great, and with prefab-cloud-ruby you're just a few minutes away from having it in your app. Check out the docs or learn more about dynamic logging. Happy logging!

Top 8 Features from Prefab this Summer

· 3 min read
Jeff Dwyer
Prefab Founder & Engineer

Prefab Summer

As the sun blazed outside, our team was hard at work (mostly inside). We're excited to share the fruits of our labor with you. Here are the 8 big features we shipped this summer:

JS Dynamic Logging in the Browser

We've always had the ability to dynamically change your log levels. But now you can do it from the front end. This is a great way to debug a specific user or context and turns browser logging collection from an expensive firehose into a targeted laser.

Dynamic Javascript Logging

Timed Loggers

Ever turned logging to debug and forgotten to turn it back to warn? That can be an expensive mistake. Save yourself from unnecessary logging costs due to "oops" by setting a logger to be debug for just an hour. It will automatically revert when the hour is up.


Evaluation Charts

Are your feature flags really evaluating like you think they are? How often is this dynamic config being used? Is this flag a zombie? Our new charts report the evaluation telemetry so you can see a clear and concise view of how your configurations are running in the wild.


Multi-Context

Feature Flags aren't just for product teams anymore. Engineers, we've got you covered too! With our new multi-context feature, you can now tailor your FF experience to your specific role and needs. I wrote more about how feature flags can work better for engineers in Feature Flags for a Redis migration.

This is something that only the top-tier Feature Flag players support, so we're excited to level up and join them. Here's an example of targeting a feature flag to an availability zone:

multi context

Dynamic Config Rules

Our dynamic configuration has taken a leap forward as well. Detailed targeting rules aren't just for feature flags anymore. With our enhanced rule & criteria capabilities, you can now customize and target your dynamic configuration. Combine this with multi-context and dynamic config really gives developers a new level of power working with their deployed systems.

config rules

Context Search & Variant Assignment

Prefab now lets you search the contexts that have been evaluated. This means you can search for any user and find the associated contexts that they're evaluated with.

But there's more! From the context details page, you can now assign specific variants, giving you a convenient way to put people in a feature. Read more in The Joy of Variant Assignment with Prefab Feature Flags.

Context Search

Improved Code Samples

We understand the importance of context. That's why we've enriched our code samples to help you see which context attributes need to be present for your flag to evaluate according to the rules.


Shared Segments

Shared segments are a powerful feature that lets you re-use segments across your flags and configs. Our user interface now supports this, which allows you to DRY up your rules.

Shared segments

Next Steps

Next up? We asked ourselves what the best way to use Feature Flags would be and we came up with something we didn't know was possible. Turns out it's possible :)

Stay tuned for more exciting updates! Or Book a Demo to get a tour of the latest and greatest.

A Redis Cache Migration With Feature Flags

· 8 min read
Jeff Dwyer
Prefab Founder & Engineer

Introduction: Feature flags are phenomenally useful, but most examples are from a product perspective and focus on the user context. That's great, but developer use cases are equally powerful if we make the tools understand engineering-specific contexts. In this post, we'll explore what that looks like by walking through an example of a Redis cache migration. /imagine Canadian geese migrating with the power of rocketships.

feature flag migration

How to Manage a Redis Migration Safely Using Feature Flags

For this example, let's imagine that we're using a 3rd-party Redis provider and we've been unhappy with the reliability. We've spun up a helm chart to run Redis ourselves and we're ready to migrate, but of course we want to do this in a way that's safe and allows us to roll back if we encounter any issues.

We're using this Redis as a cache, so we don't need to worry about migrating data. It is under heavy load, however, and we don't trust our internal Redis just yet, so we'd like to move over slowly. Let's imagine we've set the new Redis up in a particular availability zone, but we aren't sure what that will mean for latency.

Here's the migration plan we've come up with: take 10% of the nodes in US East A and point them at the internal Redis. If that goes well, we'll scale up to all of the nodes in that availability zone. Then we'll move on to US East B and finally US West. The diagram here shows Step 1 of the rollout pictorially.

Why?

  1. 10% of Nodes in the same AZ will test basic functionality
  2. 100% of Nodes in the same AZ will start to test performance & load
  3. Adding US East B will test cross AZ latency
  4. Adding US West will test cross region latency

So, how do we do this?

Step 1: Code the Feature Flag

For this example let's use the connection string as the value of the feature flag. We can have the current value be redis://redis-11111.c1.us-central1-2.gce.cloud.redislabs.com:11111 and the new one be redis://internal-redis.example.com:6379. To use the value, we'll just modify the creation of the Redis client.

In simple terms, this will look something like:

String connStr = featureFlagClient.get("redis.connection-string");
return RedisClient.create(connStr).connect();

Implementing this is a bit more complex. The Redis connection is something you'll want to create as a Singleton, but if it's a singleton it will get created once and won't change until we restart the service. Since that's no fun, we'll need a strategy to have our code use a new connection when there is a change. We could do this by listening to Prefab change events, or we could use a Provider pattern that caches the connection until the connection string changes. Here's a Java example of the provider pattern:

Full Code of Provider Pattern

@Singleton
public Provider<StatefulRedisConnection<String, String>> getConfigControlledConnection(
  ConfigClient configClient
) {
  record ConnectionStringConnectionPair(
    String connectionString,
    StatefulRedisConnection<String, String> connection
  ) {}

  AtomicReference<ConnectionStringConnectionPair> currentConfigurationReference = new AtomicReference<>();

  return () ->
    currentConfigurationReference.updateAndGet(connPair -> {
      String currentConfigString = configClient
        .liveString("redis.connection-string")
        .get();
      if (connPair == null || !connPair.connectionString().equals(currentConfigString)) {
        return new ConnectionStringConnectionPair(
          currentConfigString,
          RedisClient.create(currentConfigString).connect()
        );
      }
      return connPair;
    })
    .connection();
}

Now, the bigger question. How do we:

  1. Configure our feature tool to randomize the 10% of nodes.
  2. Target the feature flag to just the nodes in US East A.

Step 2: Giving the Feature Flag Tool Infra Context

A brief detour into some context on "context". Most feature tools think of context as a simple map of strings. It usually looks something like this:

Typical Context for Product

{
  "key": "1454f868-9a41-4419-a242-d5a872ec5f04",
  "user_id": "123",
  "user_name": "John Doe",
  "team_id": "456",
  "team_name": "Foo Corp",
  "tier": "enterprise"
}

This starts out fine, but it can begin to feel icky as you add more and more things to the context. Is it ok to push the details of our deployment, like the host.id, in here? How much is too much? This is the "bag o' stuff" model, and it's a bit like a junk drawer.

The bigger issue, however, is whether the tool allows us to randomize by something other than the key. Our rollout plan is to test the new Redis on 10% of nodes, not 10% of users. If your feature flag tool isn't written with this use case in mind, this can be hard or impossible to do.

So yes, as developers, we need more information. Oftentimes we are operating with a user and team context, but we also need to know about things like the deployment, the cloud resource, and the device. A proper context or "multi-context" should look something like this:

Context For Developers

{
  "user": {
    "key": "1454f868-9a41-4419-a242-d5a872ec5f04", // a unique key for the user / anonymous user
    "name": "John Doe",
    "id": "123" // for non anonymous users
  },
  "team": {
    "key": 1,
    "name": "Team 1",
    "tier": "enterprise"
  },
  "device": {
    "ip": "19.122.43.123",
    "locale": "en_US",
    "appVersion": "1.0.1",
    "systemName": "Android",
    "systemVersion": "11.6"
  },
  "request": {
    "key": "c820567a-9f2d-4b3d-85e5-9ff4132d0e08", // a unique key for the request
    "url": "http://production.example.com/user/123",
    "path": "/user/123"
  },
  "deployment": {
    "key": "pod/user-service-bcddb8c8d-mxz6v",
    "namespace": "production",
    "instance-type": "m4.xlarge",
    "instance-id": "i-0a5b2c3d4e5f6g7h8",
    "SHA": "27bdd4f9-3530-46a6-8188-9d90467f086e"
  },
  "cloud": {
    "key": "i-07d3301208fe0a55a", // host-id
    "platform": "aws_ec2",
    "region": "us-east-1",
    "availability-zone": "us-east-1a",
    "host-type": "c4.large"
  }
}

This is a lot more information, but it's also a lot more useful and structured. The key piece here is that we are specifying cloud.key as the host id. This is the unique identifier for the host that we can use to randomize the rollout.

We can use this as our "sticky property" in the UI. This is the property that the client will use to randomize on. The host doesn't change, so each node will stay in the bucket it was originally assigned to.

feature flag by cloud host
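Under the hood, percentage rollouts like this generally work by hashing the sticky property into a bucket, which is what makes the assignment deterministic per host. Here's a rough Python sketch of that technique (the general idea, not Prefab's exact hashing scheme):

# Sketch of deterministic percentage bucketing on a sticky property.
# This is the general technique, not Prefab's exact algorithm.
import hashlib

def in_rollout(sticky_value, percentage, flag_key="redis.connection-string"):
    # Hash flag key + sticky value so different flags bucket independently.
    digest = hashlib.sha256(f"{flag_key}:{sticky_value}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000  # 0..9999
    return bucket < percentage * 10_000

# The same host id always lands in the same bucket, so a node that gets the
# new connection string at 10% keeps it when we later raise the percentage.
print(in_rollout("i-07d3301208fe0a55a", 0.10))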

Step 3: Targeting the Feature Flag to an Availability Zone

The next step is simply to add a targeting rule. We should have all of the context attributes available to select from in the UI. We can select cloud.availability-zone and set it to us-east-1a.

feature flag by cloud host

Altogether that looks like this:

feature flag by cloud host

The feature flag is now enabled for 10% of the nodes in us-east-1a. If the node is not in that availability zone, it will get the default value.

Step 4: Observe Our Results

It's always nice to observe that things are working as expected. What percent of all of the nodes are receiving the new Redis connection string?

As we go through the steps of our rollout plan and verify performance and latency along the way, we should see the number of evaluations increasing for our internal Redis install.

feature flag by cloud host

Each bump in the graph should be related to our incremental approach to changing the rollout. If there are changes that aren't related, we may need to look into our assignment of the cloud.key, since that is how we are randomizing. We should also expect that the percentages here do not exactly match the percentages we set. For example, if we split 80/20 across 50 hosts, each host has an 80% chance of being in the group, so while the expected count is 40, seeing 38-42 hosts in that group is normal random variation.

If we want to dig into the numbers we're seeing here, we can select "show rules" to see all configuration changes on the chart and determine which feature flag rules led to our output numbers.

feature flag by cloud host

Summary

This is a simple example, but it shows how we can use feature flags to manage a migration in a safe and incremental way. It also shows how we can use multi-context feature flags to make feature flags more powerful for developers and engineers.

Finally, we see how nice it is to have a feature flag tool that gives us insight into the breakdown of our evaluations and the ability to dig into the numbers to see what's going on.

The Joy of Variant Assignment with Prefab Feature Flags - Building a Feature We Love

· 4 min read
Jeff Dwyer
Prefab Founder & Engineer

Introduction

Feature flags have become an integral part of our software development process, providing us the flexibility to release and test new features efficiently. While many feature flag tools offer similar functionalities, we had a unique vision of what would make our experience truly delightful. In this blog post, we'll share our personal journey with Prefab and how the introduction of variant assignment transformed our day-to-day feature flag management, leaving us happier with our own product.

What is Variant Assignment

Variant assignment is a task we encounter frequently - as developers, product managers, or sales representatives. Setting up a feature flag is great, but we almost always want to test it out. Variant assignment is when we specify that person X should be in a particular bucket. When I develop with a flag I'll usually assign myself to one variant and then flip-flop a few times to verify functionality. Once it's released I'll usually do the same for internal and external users, so they can check out the new feature. Unfortunately this seemingly simple task can become tedious when dealing with tracking IDs and user data.

The Struggle with User Identification

The best practice for identifying a user is something that looks like:

prefab_context = {
  user: {
    key: "7971f1c7-30ba-456f-be00-ab798f03d3b8", // GUID assigned to cookie before user is created
    id: 4233, // User ID in our database
    name: "Jane Doe",
  },
};

This lets us keep a consistent view of someone from visitor to user, respects PII by not using email, and gives us a useful handle on the person by using their name.

But there's a problem.

When we want to put a user into a feature bucket we typically know their name, but not their tracking GUID. Using Prefab for ourselves, we found remembering or locating their unique tracking IDs often disrupted our workflow. The constant back-and-forth between tools to find user information became a pain point we wanted to address.

I need to know my GUID

Prefab's Empowering Solution: Our Own Creation

As creators of Prefab, we had the freedom to build the feature we craved - a seamless variant assignment process. What that should look like was pretty clear. I'd like to be able to search for a user by name, and then click to assign them to a variant.

Search for our user

There's a reason that most open-source and home-grown solutions don't have this feature though. A typical feature flag back end doesn't have any knowledge of the user context. Simple feature flag systems just store the flag rules and ship them out to the client SDKs. In order to build this feature, we need the client SDKs to start phoning home with the contexts that they've seen.

Building the Feature

Building the clients correctly was a good challenge. We wanted to make sure that we didn't add any unnecessary overhead to the client SDKs. To do that, the clients are built to only send new contexts that they haven't uploaded and to behave appropriately under heavy load.
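To give a feel for the core trick, here's a hedged Python sketch of "only send contexts we haven't seen": fingerprint each context and skip ones this process has already uploaded. The real SDKs also batch and rate-limit; this is just the dedup idea, not actual client code.

# Illustrative sketch of context dedup -- not the actual SDK implementation.
import hashlib
import json

_seen_fingerprints = set()

def maybe_upload_context(context, upload):
    # Canonicalize so the same context always hashes the same way.
    fingerprint = hashlib.sha256(
        json.dumps(context, sort_keys=True).encode()
    ).hexdigest()
    if fingerprint in _seen_fingerprints:
        return  # already phoned home about this context
    _seen_fingerprints.add(fingerprint)
    upload(context)

# usage: maybe_upload_context({"user": {"key": "7971...", "name": "Jane Doe"}}, send_to_prefab)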

By capturing the context server side, we were able to create profile pages for each user. This change allowed us to search for users effortlessly, using their names or any other information from the context. No more obscure tracking GUIDs - just straightforward assignment from the user's profile page.

Assign the variant to the user

Staying in the Flow with Prefab

The addition of variant assignment in Prefab was a transformative moment for us. The ability to swiftly assign variants without any context switching or hunting for IDs made a remarkable difference in our day-to-day experience. We found ourselves in a delightful flow, focused on core development work, testing, and refining features.

As you evaluate Feature Flag tools or consider building your own, I'd encourage you to strongly consider adding this capability to your decision process. You can live without it, but you deserve nice things. Luckily with Prefab there's no reason not to give yourself a present.

Giving ourself a present

7 Ruby Feature Flag Tools Compared

· 11 min read
Jeff Dwyer
Prefab Founder & Engineer

Let's put 7 top feature flag providers head to head.

Feature Flags are great, but there are so many tools to choose from. In this comparison, we will evaluate 7 tools against the same test case. We'll test: Flipper, Prefab, Unleash, Flagsmith, LaunchDarkly, ConfigCat & Devcycle. We'll try to perform the same test case in each tool and we'll share our results & screenshots. This should be a good way to quickly compare the UIs and features.

The Test Case

In order to put these tools through their paces, we'll use a straightforward test case. Here's our scenario:

As a developer on the checkout team I would like to test 2 new checkout flows against the existing experience:

  1. A new multi-page-version.
  2. A new single-page-version.
  3. A control of the existing checkout experience.

We have 4 targeting requirements:

  1. We don't test on our enterprise customers, so I want team.tier = enterprise to get the control.
  2. I want the existing beta-users and internal-users to try the multi-page-version. Beta users is a list of user ids. Ideally I can store this list in one place and reuse it. Internal users is anyone matching an email address ending in example.com.
  3. Two teams foo.com and bar.com complained about complexity so they should evaluate the single-page-version.
  4. Everyone else should get a 33/33/33 split of the 3 versions.

Okay, let's see how our contestants do!

Choose your feature flags

FlipperCloud

FlipperCloud is a feature flagging system born out of a popular open source ruby gem. It's particularly popular among Ruby on Rails developers due to its ergonomic design and tight integration with the Rails ecosystem. Flipper has both an open-source library and a cloud-based service.

Trying to set up our test case in Flipper.Cloud was a bit of a challenge. Flipper does not support multi-variate flags; each flag can only be a boolean. To fully nail our test case I would need to hack around this and set up 3 flags, one for each variant. Let's lower the bar a bit and change the test case to just have 2 variants, on and off.

Our next requirement was to avoid the enterprise tier. Flipper.Cloud does not support specifying off for an actor or group. So we'll probably need to do this in code:

if team.tier != :enterprise && flipper.enabled?(:experiment)
  # do experiment
end

For our beta customers and target customers, we are able to target groups. Flipper is interesting in that again the group definition happens in code. This is pretty different from the other tools we are looking at, but could be convenient for a Rails monolith with complex targeting logic.

Flipper.register(:beta_customers) do |actor|
  actor.is_in_beta_group?
end

Flipper.register(:target_customers) do |actor|
  actor.email.ends_with?("@foo.com") || actor.email.ends_with?("@bar.com")
end

The resulting UI looks like:

Flipper.Cloud

FlipperCloud Takeaways

  • Best for: Teams committed to being a Rails monolith and who don't need flags in JS.
  • Price: $20 / Seat
  • Test Case: 🙁 No support for multi-variate flags.
  • Features: Audit logs.
  • Architecture: Uses server-side evaluation, with adapters. Updates are polling.
  • Notes:
  1. No support for non boolean flags
  2. Can only use targeting to force into flag, not to exclude.
  3. Uses unusual "actors" and "groups" terminology.
  4. Group definitions are in code, not in UI.

Unleash

Unleash is an open-source feature flagging system with a strong focus on privacy. Let's see how it does with our test case.

Unleash has a pretty different approach to setting up our beta group and enterprise segments. My initial approach was to add these in as "strategies" like this.

Unleash Overrides

I was able to set up segments and the matching rules as you would expect, however this doesn't work! Strategies don't include a value. These fine-grained rules only determine whether we should return the whole variant set or not.

Instead, we are meant to set overrides on the variants themselves.

Unleash Overrides

This works for our Enterprise tier which was a simple property match.

But for our beta-group this functionality doesn't allow us to use our shared segments.

For the user.email, we aren't able to use an ends-with operator on this screen. We can only use an equality match.

So Unleash passes on the team-tier and fails on the other two.

The last note on UI here is that the overview page is, in my subjective opinion, confusing.

Unleash Overview

It's very hard to understand what's going on because the logic is split between the rules and variant pages. And if we dive into the variants page, we still can't see the overrides without going to the edit screen.

Unleash Variant

Unleash Takeaways

  • Best for: Privacy / EU Compliance.
  • Price: Starts at $80 per month for 5 users.
  • Test Case: 😐 Challenges with targeting UI.
  • Architecture: Interesting architecture supporting enhanced privacy because customer data stays on-premises or in cloud proxies you run.
  • Notes:
  1. Problematic targeting UI doesn't pass our test case.
  2. No streaming updates, polling only.
  3. Nice Demo instance you can play with.

Prefab

Prefab is a newer entrant into the FeatureFlag market. I'm biased, but I think it passed the test with flying colors.

  • We are able to define the 3 variants for our flag.
  • We can setup a property match for the enterprise tier.
  • We can use shared segments to target the beta and internal customers.
  • We can do a 33% rollout across the rest of our customers.

Prefab

Prefab has a flexible context system that allows you to set context at the beginning of the request so you don't need to specify the context for every flag evaluation.

Prefab explains how to do this in the UI with helpful code suggestions. You can see the context you'll need to evaluate the flag.

Prefab

Prefab Takeaways

  • Best for: Teams looking for real-time updates, robust resiliency, and competitive pricing.
  • Price: Super competitive pricing. $1 / pod charged minutely. $1 / 10k client MAU.
  • Test Case: 😀 Strong Pass.
  • Features: Robust audit logging, shared segments, real-time updates. Missing features: Full experimentation suite, reporting & advanced ACL / roles.
  • Architecture: Server-side evaluation. Real-time updates with SSE. CDN backed reliability story.
  • Notes:
  1. Clients in Ruby, Java, Node, Python, JS & React.
  2. Also provides other developer experience feature like dynamic log levels.
  3. Good story around local testing with default files.

Flagsmith

Our next comparison is with Flagsmith. Flagsmith is also open-source and touts itself as a good option for cloud or on-premises deployments.

Flagsmith has good support for multivariate flags, so that's a relief.

Flagsmith

The actual override setup is interesting. We specify rules and then specify the weights for each variant. This worked, but led to a very long page of rules.

I also found the UI unclear on how to create the beta group. If I want the beta group to be user.id 1 or 2, it wasn't clear to me whether to use = and comma-delimit or use a regex.

Flagsmith

  • Best for: Flexible deployments & on-premises hosting.
  • Price: Starts from $45 for 3 users per month. A free version with limited functionalities for a single user is available.
  • Test Case: Strong Pass 😃 Shared Segments and multivariate support.
  • Features: Shared Segments. Remote configs, A/B testing, integration with popular analytics engines.
  • Architecture: Open source, provides hosted API for easier deployment during development cycles.
  • Notes:
  1. Flag targeting is split across multiple UI tabs, which can make it difficult to get an overview of flag settings.
  2. Targeting individual users only available on higher plan tier.
  3. No streaming updates, polling only.

LaunchDarkly

LaunchDarkly is a well known name in the feature flagging space. They have a robust feature set and a strong focus on enterprise customers. Let's see how they do with our test case.

Unsurprisingly LaunchDarkly does a great job of handling our test case. We can setup a property match for the enterprise tier, a shared segment for the beta group and internal users, and a user attribute for the email.

The UI is powerful, and you can see some of the more advanced enterprise features like workflows and prerequisites.

LaunchDarkly

LaunchDarkly has all the features you'll need, but you're going to pay for it. The pricing is based on the number of users you have. Anecdotally this makes it quite challenging for larger orgs to rationalize the cost. A number of teams I've talked to end up sharing accounts, or building internal tools around the API in order to save money on seats, though of course this is a bad idea and negates the benefits of permissions and audit logging.

  • Best for: Price Insensitive Enterprise.
  • Test Case: 😀 Strong Pass.
  • Price: Starts from $10 per user per month, however this is a low-ball. Many features / kickers force enterprise adoption. I've heard quotes in the range of "25 users for 30k a bucket" which is roughly $100/user/month.
  • Features: All the basics plus: scheduling and workflows. AB testing & advanced permissions.
  • Architecture: Enables the dev team to wrap code with feature flags and deploy it safely, ability to segment user base based on various attributes
  • Notes: Flag editing view gets long, no read only overview to see flag rules at a glance.

ConfigCat

ConfigCat is a feature flagging service that also supports remote configuration.

ConfigCat did a good job supporting our test case. We can set up shared segments for the beta group and internal users, and a user attribute for the email.

Config cat ui

One important note is that it does not have a concept of "variants"; each rule returns a simple string. This means that you could mistype a variant from one rule to another, which is just something to be aware of.

ConfigCat Takeaways

  • Best for: Developer focussed configuration.
  • Test Case: 😀 Strong Pass.
  • Price: Free for up to 10 flags. Tiers at $99 and then $299 for unlimited flags and > 3 segments.
  • Features: Shared segments, Webhooks, ZombieFlags report
  • Architecture: Polling, server side evaluation.
  • Notes:
  1. No concept of variants.
  2. I couldn’t do email ends_with_one_of. I could do match one of or contains.
  3. Updates via polling, not real-time streaming.

Devcycle

Devcycle is a feature flagging service focussed on developers. Let's see how it fares on our test case.

Overall, Devcycle did well and I was able to set up our test case. The main knock was the lack of shared segments, meaning that I'll need to define the Beta group in multiple flags and risk them getting out of sync.

Devcycle UI

The resulting UI does end up very long, making it a bit of a challenge to get an overview of the test.

The UI was generally straightforward, however I found it annoying to have to specify a name for each of my rules.

Devcycle UI

DevCycle Takeaways

  • Best for: Developer focussed configuration.
  • Test Case: 😐 Pass, but no shared segments.
  • Price: Free for up to 1000 MAU. $500 for 50k MAU+. Pricing axis on client side MAU and Events.
  • Features: AB Testing with Metrics
  • Architecture: Streaming & Polling, server side evaluation.
  • Notes:
  1. No shared segments
  2. Redundancy in UI meant it was hard to get an overview of our test.
  3. Offers typed context

Summary

That's a wrap. We looked at 7 different flag providers and how they handle a common test case. We gave "Strong pass" to 4 of the tools. 2 of the tools got a "Pass" because they lacked segments and 1 got a "Fail" for not supporting multi-variate flags.

Competitor | Best For | Test | Cost | Pricing
Flipper Cloud | Rails Only | 🙁 | 💰💰 | $20 / Seat Link
Prefab | Functionality for Less at any Scale | 😃 | 💰 | $1 / Connection. Usage based. Link
Unleash | Privacy / EU Compliance | 😐 | 💰💰 | $15 / seat Link
Flagsmith | Flexible deployments & On-Prem Deployments | 😃 | 💰💰 | $20 / seat Link
LaunchDarkly | Price Insensitive Enterprise | 😃 | 💰💰💰💰 | $17 / seat, $70+ / seat for all features Link
ConfigCat | Developer focussed configuration | 😃 | 💰 | $99 for > 10 flags. Usage based. Link
Devcycle | Developer focussed with Metrics | 😐 | 💰💰 | $25 for 1000 MAU. $500 for 50k MAU. Usage based. Link

Changing Log Levels At Runtime - Rails

· 4 min read
Jeff Dwyer
Prefab Founder & Engineer

Question: how do you change the log level of a running Rails application?

Answer: You don't.

Technically the Rails docs inform us that we just need to do:

config.log_level = :warn # In any environment initializer, or
Rails.logger.level = 0 # at any time

However, that "at any time" is doing a lot of heavy lifting. You'd have to ssh into each server, edit the config file, restart the server, and then hope that you didn't make a typo.

As you can imagine, I'm here to show you something better, but first let's think about why you might want to change the log level of a running application in the first place and "what problem are we trying to solve".

Why change log levels?

Typically we want to change log levels because there's a bug that's hard to reproduce locally and there's just no substitute for understanding exactly what is happening in our production or staging environment. Trying to use config.log_level = :debug to solve this is like trying to do surgery with a sledgehammer. You're going to end up with a lot of collateral damage (log aggregation expense) and a lot of wasted time.

This is because config.log_level = :debug is so non-specific. If you're trying to debug user 4234's billing issue, do you need to see the logging from every single template render for every user? Probably not.

If you had a magic wand, what you'd really love would be able to target the logging to:

  • The billing code
  • The billing Sidekiq job
  • User 4234
  • Just for the next hour while you're debugging

But can we really do that? Yes, we can, and you're 10 minutes away from trying it yourself.

So, how do we do that?

First things first. We aren't going to change your log aggregator. You can still use Datadog or Logtail or whatever you like. We're just going to help you get much more value out of them.

Second, we aren't going to change your logging code. All of your Rails.logger.info or Rails.logger.debug are perfect just the way they are.

Here's what it does. Pseudocode is worth 1000 words, so here's the pseudocode of what happens:

class PrefabLogger < ::Logger
  # path = app.models.billing.calculate_tax
  # level = :debug
  def log(message, path, level)
    if level >= Prefab.get("log-level-#{path}")
      # ...do the logging
    end
  end
end

class Prefab
  def self.get(key)
    @dynamic_config_map.get(key, current_context)
  end
end

Rails.logger = PrefabLogger.new($stdout)

Now of course we've moved the heavy lifting to our @dynamic_config_map, but this is pretty simple to conceptualize. The map is a threadsafe Concurrent::Map.new. It will be populated from a CDN with the latest values, and it will be updated in real time as the values change. The values in the map can have rules inside them so that we can filter the log levels based on the context of the request.
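The lookup itself is just "most specific path wins." Here's a small sketch of that resolution, in Python for brevity (the Ruby client works on the same principle): walk from the full logger path up through its parents until a configured level is found.

# Conceptual sketch of most-specific-path-wins level resolution.
# Python for brevity; the keys and default below are hypothetical.
LEVELS = {"debug": 0, "info": 1, "warn": 2, "error": 3}

# What the dynamic config map might hold after syncing from the CDN.
config_map = {
    "log-level-app.models.billing": "debug",
    "log-level-app": "warn",
}

def level_for(path, default="warn"):
    parts = path.split(".")
    # Try app.models.billing.calculate_tax, then app.models.billing, then app.models, ...
    while parts:
        configured = config_map.get("log-level-" + ".".join(parts))
        if configured:
            return configured
        parts.pop()
    return default

def should_log(path, message_level):
    return LEVELS[message_level] >= LEVELS[level_for(path)]

print(should_log("app.models.billing.calculate_tax", "debug"))  # True
print(should_log("app.controllers.home", "debug"))              # False (app is at warn)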

To get the current_context in the above pseudo code, we'll just set some properties in an around_action in our ApplicationController.

Dynamic logging at its core is just a special case of dynamic configuration. It's a simple solution that gives us a ton of power.

The User Experience

So, how do we actually use this? The Prefab UI has us covered. In the LogLevel UI, we'll see a list of all the log levels that are currently being used in our application. For any package, any class, and even any method, we can simply click and change the log level of any of these loggers.

Change Log Levels

We can also target specific loggers by using a targeted logger. This has the same targeting power as the Prefab Feature Flag system, so you'll have no problem laser targeting the loggers that you want to change.

Change Log Levels

That's it! Truly it's that simple. It's also very, very easy to give this a try. Cut a branch, sign up, throw in your API key and set the logger, run your app, and start using dynamic logging in just a few minutes. Full documentation for the ruby-sdk.

Conclusion

I hope you enjoyed this quick tour of dynamic logging. It's a simple solution to a common problem and once you get used to it, you'll wonder how you ever lived without it. Over time it will start to change how you think about logging. Without it, there's not much point in putting in Rails.logger.debug statements, because you're never going to see them. With it, you can start to think about logging as an as-needed tool that you have in your pocket for when you need it most.

So, I encourage you to give dynamic logging a try, and experience the benefits of fine-tuning your log output. Happy debugging! 🚀