
11 posts tagged with "Product"


· 6 min read
Jeff Dwyer

Prefab is democratizing the core internal services that make product engineering teams go fast at big orgs, the kind that can afford 20 engineers on developer experience. Today we help organizations:

  • Improve MTTR by changing log levels instantly
  • Reduce Datadog bills by only logging / tracing when you need it, or by targeting a particular user or transaction
  • Save 90% vs LaunchDarkly for a robust feature flag solution
  • Improve local developer experience with shared secrets
  • Manage microservice configuration at scale with dynamic configuration

This quarter we shipped a ton of great stuff, much of it coming straight from customer requests.

Improved UI

Sometimes it's the little things that make the difference. We made a ton of small tweaks based on your feedback this quarter and the product is feeling tighter than ever.

Search for a user AND change their flags

Some users reported that while it was great that they could see all of the flag values for a given user, what they really wanted to be able to do was change those values. They were pleasantly surprised when we told them they could already do that! They just needed to click the pill that didn't look clickable. Talk about hiding our light under a bushel. As much as we like secret off-menu features, we decided to go ahead and make this look clickable.

Context Drop Downs

Improvements for dynamic LLM prompts

And for all of you using Prefab to instantly change your LLM prompts without shipping new code: suffer no longer with a text box sized for ants!


Well, it's all better now, with dynamically resizing text areas.

Better inputs when using prefab to dynamically change llm prompts

Search for loggers

Huge improvement for those of you with tons of loggers. Have 500+ loggers? (I'm looking at you, Java.) Find the one you want to turn on instantly with a quick little search.

Search for logger

Production Release of Python client

Our Python client got a huge overhaul this quarter and is ready to roll. Check out the full docs. The code is tighter, the telemetry is in, and the integration with standard Python logging frameworks is much improved.

prefab_cloud_python.get_client().get("max-jobs-per-second", default=10) # => 10
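If you want to see the whole flow in one place, here's a minimal sketch that strings together the pieces shown in this feed (the client reads its SDK key from the PREFAB_API_KEY env var, and get accepts a default); the config key itself is just an example:

import os
from prefab_cloud_python import Options, Client

# The client looks for your SDK key in the PREFAB_API_KEY env var
assert os.environ.get("PREFAB_API_KEY"), "set PREFAB_API_KEY before initializing"

prefab = Client(Options())
max_jobs = prefab.get("max-jobs-per-second", default=10)  # => 10 until the config exists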

Ruby Client Logging Overhaul to use SemanticLogger

In a similar vein, we did a big overhaul of the Ruby client as well, and we now work out of the box with the best Ruby logging library out there. We were able to reduce the surface area of the library and really focus in on what we can bring to your Ruby logging: being a smart filter that decides what logs to output.

Read more in Ruby Dynamic Logging With Semantic Logger

Support For Okta & Other SAML Providers

If you wanted to use Prefab, but really needed to use Okta or another SAML provider, we're so excited to have removed this blocker for you.

Support for SAML & Okta

SAML is live and we're welcoming new signups with open arms. What's particularly exciting to me about this is that we're running it ourselves, internally, because it seems to us that everyone should be able to support SAML, and this is kinda our thing: write the code one last time so that it all plays well together and we can all use it.

Durations

Ever written http.connect(timeout: Prefab.get("my.timeout")) and been worried that someone might use milliseconds or seconds or minutes? Naming all your time duration configs kafka.retry.timeout-in-seconds to try to be really explicit? Well, good news, because Durations are coming to Prefab.

Durations is a new type of config that acts like the java.time.Duration object or ActiveSupport::Duration in Ruby. You can specify it in whatever units you like and then retrieve it in whatever units you like. Under the covers it's stored as an ISO 8601 duration.

New duration config

And in your code you can just ask for the duration in whatever unit you need. Unit mismatch crises averted.

duration = Prefab.get("mysql.timeout") # PT80S # a Prefab duration object that quacks like ActiveSupport::Duration

mysql.connect(timeout: duration.in_milliseconds) # => 80000
mysql.connect(timeout: duration.in_seconds)      # => 80
mysql.connect(timeout: duration.in_minutes)      # => 1.33
mysql.connect(timeout: duration.in_days)         # => 0.00093

Updated Pricing & PricingCompared.com

We heard you loud and clear. Our pricing was... confusing. We've re-packaged it and we think it's a lot clearer now. Getting into pricing made us really wonder how we stacked up to other tools in the space. It turns out the answer is... complicated. We started building the spreadsheet to help us figure out how we really compared and then decided it should really be code and then decided we might as well just share it with everyone.

Thus Feature Flag Pricing Compared was born. Check it out!

Send To Frontend

Why should backend libraries like Python, Java and Ruby have all the fun? What if you want to make dynamic configuration updates to content on your website? We agreed, so we added an opt-in flag to configuration that will send config to frontend libraries.

sent configuration to frontend clients

Drunk/Lazy CEO mode for the CLI

What's that command line argument again? I could never remember the exact CLI command, and now that we support setting secrets via the CLI, I was getting annoyed. The solution! Something we affectionately call "drunk CEO" mode. Wouldn't it be great if all CLIs had a mode where you could just type in what you wanted to do and it would find the right combination of flags for you and prompt for missing information? We thought so too, and Jeffrey made it happen.


Secret Management

Early in the quarter we shipped a big secret management release and we've been enjoying using it ever since.

Truly it's tough to remember that updating secrets used to involve some hand-rolled kubernetes yaml and a call to base64 --encode. Shudder.

Secret Management CLI

Improved Support for Jamstack hosting providers

Last but not least, we made some nice improvements to the libraries and the documentation around Feature Flags for our Jamstack customers. Whether you're on Cloudflare, Vercel or Netlify, you should be able to get started quickly with Prefab for Feature Flags, Dynamic Logging or Configuration.

Feature Flags for Netlify Functions and Change Log Levels instantly in Netlify

Wrap

Q1 was awesome, and we are hard at work on a whole host of improvements for Q2. Our focus will be squarely on our existing customers, giving you all the tools you need to feel confident that Prefab respects the core role you've entrusted us with in your stack.

· 8 min read
Jeff Dwyer

Your billing system knows user.plan == pro, but your React code needs to know if it should let the user see the "advanced reporting" tab or calculate how many "active lists" they're allowed. How do we bridge this gap? How do we go from the SKU/plan to the actual product features that should be on or off? How can we handle product entitlements with feature flags?

The naive version of these systems is pretty easy to build. We could of course put if user.plan == :pro checks everywhere; however, we are going to feel pain from this almost immediately, because there are common entitlement curve-balls that will lead to spaghetti code and hard-to-understand systems once they are faced with reality. For today, let's see how to build a more resilient system by:

  1. Taking a real pricing page and modeling it.
  2. Throwing some common curve-balls at it.

Some organizations are going to want a whole separate entitlements system, but let's look at how to do this with the feature flag system you already know and love. We'll implement using Prefab, but these concepts apply to any feature flag tool.

Our Example

Let's implement a portion of the HubSpot Sales Hub pricing page as our guinea pig here. It has a mix of on/off features as well as limits.

Pricing Page example

The first thing we need to do is model how we think about a customer's plan. Our feature flag tool is not the system of record for what SKU / billing plan a user or team is on, so we should pass that in as context to the tool. But what should that context look like?

Design your pricing and tools so you can adapt them later is a great read that gives us a good place to start. I'm going to use the SKU definition from it as a starting point for us, but it's worth a read in its entirety as well.

A suggested product SKU

While it is helpful to have a single canonical SKU as in the image above, it will be easier to work with if we also split it up into the component parts. (I didn't use an integer for the product part because that's a bit more confusing for our example today with a product that isn't yours, but it's a fine idea).

Here's a good straw-person context we can start with:

{
  user: {
    key: "123",
    name: "Bob Customer",
    email: "bob@example.com"
  },
  team: {
    key: "456",
    name: "Example Corp"
  },
  plan: {
    key: "us-starter-myproduct-01-v0",
    country: "us",
    tier: "starter",
    product: "myproduct",
    payment: 1,
    version: 0
  }
}

Modelling Out The Smart Send Times Feature

Ok, we've got a feature called "Smart send times". Initially, it's a no-brainer: a boolean flag that flips on or off based on the user's plan.

Our first pass at modeling out smart send times is really easy. It's a boolean flag, and it varies by plan tier. We can simply model it like this:

A basic rule for a feature

In code, we'll just call

const context = { /* context from above */ };

// Initialize client
const options = { apiKey: 'YOUR_CLIENT_API_KEY', context: new Context(context) };
await prefab.init(options);
prefab.isEnabled('features.smart-send-times');

This will work great for a while, but let's throw in our first curve-ball.

Demos & Instant Overrides

For this curve-ball, let's figure out what to do when, inevitably, sales gets in touch and says that they need to be able to demo this feature to users in their account. The user is still going to be in the starter plan, but we want to temporarily give them the same access (for this feature) as if they were in pro.

The easiest way to do this is to make a one-off override for the customer that sales wants to demo to.

To do this, we can use the context search screen, which lets us type in the name of the customer. From the resulting page we can see how every flag evaluates for them. We can then click the value and change it to set the flag variant manually for them.

Setting a value from context

That creates a new rule for us, specifying that this particular user should have access to the feature. Rules are evaluated in order, so the top rule will win here.

Rule after context set
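Under the hood, "rules are evaluated in order" just means first match wins. Here's a tiny illustrative sketch of that idea in Python (not Prefab's actual evaluator, just the shape of the logic):

def evaluate(rules, context):
    # Walk the rules top to bottom; the first condition that matches decides the value
    for condition, value in rules:
        if condition(context):
            return value
    return False  # fall-through default

rules = [
    (lambda ctx: ctx["user"]["key"] == "123", True),               # one-off override for our demo user
    (lambda ctx: ctx["plan"]["tier"] in ("pro", "enterprise"), True),
    (lambda ctx: True, False),                                     # everyone else
]

print(evaluate(rules, {"user": {"key": "123"}, "plan": {"tier": "starter"}}))  # => True

Because the override sits above the tier rule, our starter-plan demo user gets the feature even though their plan alone wouldn't qualify.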

Improving One-Offs with Shared Segments

What we did there works fine, but let's throw another curve-ball. It's pretty common to have a situation where a salesperson always wants to demo 10+ different features. We sure don't want them to have to edit 10 different flags and then remember to undo 10 different flags in order to make this happen.

A good way to make this better would be to create a segment called currently-demoing, which we can then reference in the rules.

A segment definition

When a sales person needs to demo, they can quickly add the team or user key and instantly that team will be upgraded for all flags that use this segment.

We can use this segment as a top-priority rule in the features we want to demo.

A rule using a segment

When using segments, it can be a bit more challenging to know exactly why a user is getting a feature. To help you with that, you can use the context view again, but this time hover over the ? next to each flag. This will show you all of the rules for that flag and highlight which rule is matching. Here we can see that we are matching true because of this currently-demoing segment.

Context shows why the rule matched

Segments are powerful and can help you bring order to your models. They will be a good aid in solving challenges around one-off contracts as well. Remember, just because you offer 3 tiers on the website doesn't mean there isn't going to be a 4th, 5th and 6th tier once sales starts negotiating. Good naming is key to making this work, so game out a number of scenarios before you commit to ensure that you have the best chance of success.

Modeling Pricing Plan Changes

The next entitlement we'll model is the number of "Calling Minutes". Enterprise should have 12,000, Pro 3,000 and Starter 500.

We can use a feature flag with integer values here.

Rules for calling limit

Let's throw in a new curve-ball this time. These limits seem like something we'll want to experiment with. Let's say we want to try reducing the number of minutes on the pro plan, but we don't want to change anything for existing users. This is where our good SKU modeling is going to help.

We'll be changing the SKU from:

us-pro-myproduct-01-v0 #before
us-pro-myproduct-01-v1 #after

We can no longer just use 'plan.tier' to determine the number of calling minutes, so let's use the plan.version attribute to lock down the behavior for the Pro plan version 0.

Testing a new calling limit

We have our legacy pricing locked in and won't be changing existing customers. If we wanted, we could even start experimenting. To model a 50/50 split of calling minutes, we can do a percentage rollout. Note that we should be careful to select the sticky property. If we want all members on a team to have a consistent experience, we should choose team.key instead of user.key.

Testing a new calling limit

Summary

Dealing with product entitlements and feature flags is part science, part art, and a whole lot of not shooting yourself in the foot. It’s about laying down a flexible foundation, so you’re not boxed in when you need to evolve. And remember, it’s always smarter to hash out your game plan with someone else before you commit. Two heads are better than one, especially when one is yours :)

Make sure to consider the curve-balls we covered here:

  1. How will you do demos?
  2. How will you support one-off contracts?
  3. How will you keep legacy plans consistent while you iterate on new pricing?

If you’re neck-deep in figuring out how to model your features and entitlements, don’t go it alone. Ping me, and let’s nerd out over it. Because at the end of the day, we’re all just trying to make software that doesn’t suck.

· 2 min read
Jeff Dwyer

Our CLI is more awesome now and I think you'll love it too.

CLIs for Busy People

I like CLIs, but I also find them frustrating. What are the options? Which options go together? I always reach for a CLI when I want something to happen RIGHT NOW, but then it's a process of reading the help page and trying to piece together an argument string.

I get it working and then promptly forget before the next time I need it.

With this in mind, we've retooled the CLI to meet you where you are. Know just what you need? Use the arguments. Can't be bothered? Just type prefab, and we'll work through things together.

Here's what it looks like to change a feature flag now.

What would you like to do today?

We've pulled together the common operations that you might want to do so you can just search through them with autocomplete. Is it "set" or "update"? Type either, and you'll still find it.

What would you like to do with the CLI

What flag are we using?

Can't remember that flag name? Autocomplete to the rescue again.

Which flag are we changing?

And what environment are we in?

Stop typing out --env=staging all the time. Type "sta", hit enter, and you're on your way.

What would you like to do with the CLI

And what would you like to set the value to?

Now, just pick the new value, and you're off to the races.

What would you like to do with the CLI

Wrap

That's it.

Edited Feature Flag With CLI

We have similar helpers to create a new secret with our secret management. This video has a lot of good examples of the CLI in action for secrets.

Install with npm install -g @prefab-cloud/prefab to get going, or read more about the CLI or secret management.

· 3 min read
Jeff Dwyer

I'm really excited to unveil our latest product: a robust and easy-to-use Secret Management system.

Secrets are... a pain in the ass. Even at Prefab, where we are ostensibly experts on dynamic configuration of applications, secrets were the fly in our ointment. Our code and practices around secrets were secure, but they were a pain to manage. Every time we wanted to add another 3rd party API key, it meant re-learning how to do things and PRs into an infrastructure repo that wasn't part of our normal flow.

I am so much happier with how we are handling secrets today and I'm excited to share.

CLI-Based Workflow for Enhanced Security

When we went to build secrets, we had one big guardrail: don't screw it up. With this in mind, we had a strong desire that Prefab should have zero knowledge of your secrets.

The best way to achieve this was with a CLI-based approach, because that ensures your secrets are always encrypted locally, with an encryption key that we never see.
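To make the idea concrete, here's a generic Python sketch of client-side encryption using the cryptography package. This is only an illustration of the zero-knowledge concept, not Prefab's actual scheme or key format:

from cryptography.fernet import Fernet

# The encryption key stays with your team; the config service only ever stores ciphertext.
local_key = Fernet.generate_key()                  # shared once with your developers, never uploaded
ciphertext = Fernet(local_key).encrypt(b"my-3rd-party-api-key")

# ...store `ciphertext` as a config value; decrypt locally at runtime with the same key
plaintext = Fernet(local_key).decrypt(ciphertext)  # => b"my-3rd-party-api-key"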


Cost-Effective Solution for Shared Secrets

There are some other good secret management platforms out there, but to us paying per user for something simple like this just didn't feel right.

In our mind, secrets are just another piece of configuration, albeit one that needs to be decrypted before you use it.


Do you really need to pay per user so that a library can run AES.decrypt() on your configuration? We don't think so.

Say Goodbye to Insecure Practices

Secrets can feel like a shell game, and it can be frustrating getting a new developer the secrets they need to run the application. It's not uncommon for a developer to Slack a secret around to get someone unstuck quickly.

Having a single secret means that when new secrets are added, all of your developers need to do... nothing at all. With just their regular Prefab API Key and the single shared secret, your developers can focus on what they do best, free from the hassles of managing .env files.


Flexible and Cross-Language Secret Sharing

We like Rails a lot at Prefab and our solution is heavily inspired by how Rails does Credentials. The problem with Rails credentials for us is simply that we aren't just a Rails monolith. We need secrets in all our code which meant that Rails credentials didn't work for us.

All we needed to do to build this was to make sure that all of our clients can consistently decrypt in the same way.


With Prefab secrets, whether you're working in Rails, Node, Java or Python, you now have a unified solution for all your applications.

Wrap

If you're interested in a simple improvement to the way your organization handles secrets, take a minute to check out the secret documentation or create a free account to try it out. We'd love to get your feedback on what we've built.

· 4 min read
Jeff Dwyer

OpenAI's GPT models are great. However, as with any powerful tool, there are many settings to tweak to get the desired output. Parameters like temperature, top_p, model, and frequency_penalty can significantly influence the results. Changing these settings often requires code changes and redeployment, which can be cumbersome in a production environment.

Feel like watching instead of reading? Check out the video instead.

Prefab is a dynamic configuration service that allows you to modify these settings on the fly without redeploying your application. By the end of the blog post we'll have a working example of an OpenAI storyteller that can be reconfigured without restarting.

How Can I Change OpenAI Parameters Instantly?

We won't cover the full python documentation here, but you'll need to have the openai and prefab_cloud_python libraries installed. If you haven't already, you can install them using pip:

pip install openai prefab-cloud-python

Next, set up your OpenAI API key and initialize the Prefab client:

import openai
import os
from prefab_cloud_python import Options, Client

openai.api_key = os.getenv("OPENAI_API_KEY")
prefab = Client(Options())

Prefab will look for an API key in the env var PREFAB_API_KEY. Get one with a free signup. With the Prefab client initialized, you can now fetch dynamic configurations for your OpenAI API calls. Here's an example of how you can use Prefab to dynamically set the parameters for a chat completion:

Before:

def response(name):
    prompt = f"You are a story teller in the style of Charles Dickens, tell a story about {name}"
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": prompt
            }
        ],
        temperature=0.5,
        max_tokens=1024,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
    return response

After:

def response(name):
    prompt = f"You are a story teller in the style of {prefab.get('storyteller')}, tell a story about {name}"
    response = openai.ChatCompletion.create(
        model=prefab.get("openai.model"),
        messages=[
            {
                "role": "system",
                "content": prompt
            }
        ],
        temperature=prefab.get("openai.temperature"),
        max_tokens=prefab.get("openai.max-tokens"),
        top_p=prefab.get("openai.top-p"),
        frequency_penalty=prefab.get("openai.frequency-penalty"),
        presence_penalty=prefab.get("openai.presence-penalty")
    )
    return response

To make this work, we'll need to set up the following configs in the Prefab UI.

OpenAI configuration options

In the above code, instead of hardcoding the values for parameters like model, temperature, and top_p, we're fetching them dynamically from Prefab using the prefab.get() method.
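One optional safety net: get also accepts a default (the same keyword shown in the Python client example elsewhere in this feed), so you can fall back to hard-coded values while the configs are still being created in the UI. A small sketch:

# Optional: hard-coded fallbacks until the configs exist
model = prefab.get("openai.model", default="gpt-4")
temperature = prefab.get("openai.temperature", default=0.5)
max_tokens = prefab.get("openai.max-tokens", default=1024)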

Instant Updates

Changes we make in the UI will be reflected in our application instantly, without redeploying. See here as we change the storyteller from Stephen King to Charles Dickens. Our ChatGPT prompt updates without any code changes or restarts.

Change the storyteller

Targeting

Prefab also supports powerful targeting capabilities like you might expect from feature flags. If you pass in context to the client you could choose a different storyteller based on a user's preference.

OpenAI configuration options

Debugging

Prefab also provides a robust logging system that allows you to see exactly what's happening with your configurations. You can change log levels on the fly for any class or method in your application. This is especially useful when you're trying to debug a specific issue.

Add the following to your code to enable logging:

def response(name):
    prefab.logger.info(f"processing request for {name}")

    response = openai.ChatCompletion.create(
        ...
    )
    prefab.logger.debug(response)
    return response

Then you can change the log level for your application in the Prefab UI. Change it to debug to see both of the log statements we added.

OpenAI configuration options

The logging output will be displayed in the console or your existing logging aggregator.

2023-09-26 13:41:30 [info     ] processing request for bob     location=pycharmprojects.openaiexample.main.response
2023-09-26 13:41:38 [debug ] {
  "id": "chatcmpl-836LrWRRLnwVPGYR5440RnyGpoKWu",
  ...
  "usage": {
    "prompt_tokens": 27,
    "completion_tokens": 200,
    "total_tokens": 227
  }
} location=pycharmprojects.openaiexample.main.response

Conclusion

Dynamic configuration with Prefab offers a seamless way to tweak and optimize your OpenAI projects without the hassle of constant redeployments. Prefab provides you with:

  • Flexibility: Easily experiment with different settings to find the optimal configuration for your use case.
  • Quick Iteration: No need to redeploy your application every time you make a change.
  • Centralized Management: Manage all your configurations from a single place.

Whether you're running experiments or need the flexibility to adjust settings on the fly in a production environment, Prefab provides a robust solution. So, the next time you find yourself redeploying your app just to change a single parameter, consider giving Prefab a try!

· 2 min read
Jeff Dwyer

Structured logging is great. It works just like you'd expect it to. I went into a ton of detail about Tagged vs Structured Logging last week, but the short version is that structured logging is fabulous for searching and analyzing your logs.

I'm happy to say that prefab-cloud-ruby 1.1.0 supports structured logging for Ruby or Rails.

Here's what that looks like; in our controller we can write:

class CalculatorController < ApplicationController
  def index
    @results = logic(height, weight)

    # OLD
    logger.debug "😞😞😞 finished calc results height=#{height} weight=#{weight} results=#{@results.size} "

    # NEW
    logger.debug "🍏🍏🍏 finished calc results", height: height, weight: weight, results: @results.size
  end
end

Even with co-pilot assistance, this is so much nicer than the old way of string formatting log output.

Running the server locally, we get the following output:

DEBUG  2023-09-19 14:30:18 -0400: app.controllers.calculator_controller.index 🍏🍏🍏 finished calc results height=19.0 results=6 weight=0.0

If you're using log_formatter: Prefab::Options::JSON_LOG_FORMATTER then you'll get JSON output instead.

{
  "severity": "DEBUG",
  "datetime": "2023-09-19T14:42:51.723-04:00",
  "path": "app.controllers.calculator_controller.index",
  "message": "🍏🍏🍏 finished calc results",
  "height": 19.0,
  "weight": 0.0,
  "results": 6
}

Of course the real reason to do this is to make it easier to search and analyze your logs. So I'll deploy and then change the log level for our controller to debug to make sure we see our output.

Dogs in the right boxes

Now we can see how nicely these show up in Datadog:

Dogs in the right boxes

Structured logging is great, and with prefab-cloud-ruby you're just a few minutes away from having it in your app. Check out the docs or learn more about dynamic logging. Happy logging!

· 3 min read
Jeff Dwyer

Prefab Summer

As the sun blazed outside, our team was hard at work (mostly inside). We're excited to share the fruits of our labor with you. Here's 7 Big Features we shipped this summer:

JS Dynamic Logging in the Browser

We've always had the ability to dynamically change your log levels. But now you can do it from the front end. This is a great way to debug a specific user or context and turns browser logging collection from an expensive firehose into a targeted laser.

Dynamic Javascript Logging

Timed Loggers

Ever turned logging to debug and forgotten to turn it back to warn? That can be an expensive mistake. Save yourself from unnecessary logging costs due to "oops" by setting a logger to be debug for just an hour. It will automatically revert when the hour is up.


Evaluation Charts

Are your feature flags really evaluating like you think they are? How often is this dynamic config being used? Is this flag a zombie? Our new charts report the evaluation telemetry so you can see a clear and concise view of how your configurations are running in the wild.


Multi-Context

Feature Flags aren't just for product teams anymore. Engineers, we've got you covered too! With our new multi-context feature, you can now tailor your FF experience to your specific role and needs. I wrote more about how feature flags can be better for engineers in Feature Flags for a Redis migration.

This is something that only the top-tier Feature Flag players support, so we're excited to level up and join them. Here's an example of targeting a feature flag to an availability zone:

multi context

Dynamic Config Rules

Our dynamic configuration has taken a leap forward as well. Detailed targeting rules aren't just for feature flags anymore. With our enhanced rule & criteria capabilities, you can now customize and target your dynamic configuration. Combine this with multi-context, and dynamic config really gives developers a new level of power when working with their deployed systems.

config rules

Context Search & Variant Assignment

Prefab now lets you search the contexts that have been evaluated. This means you can search for any user and find the associated contexts that they're evaluated with.

But there's more! From the context details page, you can now assign specific variants, giving you a convenient way to put people in a feature. Read more in The Joy of Variant Assignment with Prefab Feature Flags

Context Search

Improved Code Samples

We understand the importance of context. That's why we've enriched our code samples to help you see which context attributes need to be present for your flag to evaluate according to the rules.


Shared Segments

Shared segments is a power feature that lets you re-use segments across your flags and configs. Our user interface now supports this, which allows you to DRY up your rules.

Shared segments

Next Steps

Next up? We asked ourselves what the best way to use Feature Flags would be and we came up with something we didn't know was possible. Turns out it's possible :)

Stay tuned for more exciting updates! Or Book a Demo to get a tour of the latest and greatest.

· 8 min read
Jeff Dwyer

Introduction: Feature flags are phenomenally useful, but most examples are from a product perspective and focus on the user context. That's great, but developer use cases are equally powerful if we make the tools understand engineering-specific contexts. In this post, we'll explore what that looks like by walking through an example of a Redis cache migration. /imagine canadian geese migrating with the power of rocketships.

feature flag migration

How to Manage a Redis Migration Safely Using Feature Flags

For this example, let's imagine that we're using a 3rd party Redis provider and we've been unhappy with the reliability. We've spun up a helm chart to run Redis ourselves and we're ready to migrate, but of course we want to do this in a way that's safe and allows us to roll back if we encounter any issues.

We're using this Redis as a cache, so we don't need to worry about migrating data. It is under heavy load, however, and we don't trust our internal Redis just yet, so we'd like to move over slowly. Let's imagine we've set the new Redis up in a particular availability zone, but we aren't sure what that will mean for latency.

Here's the migration plan we've come up with: we'd like to take 10% of the nodes in US East A and point them at the internal Redis. If that goes well, we'll scale up to the rest of the nodes in that availability zone. Then we'll move on to US East B and finally US West. The diagram here shows Step 1 of the rollout pictorially.

Why?

  1. 10% of Nodes in the same AZ will test basic functionality
  2. 100% of Nodes in the same AZ will start to test performance & load
  3. Adding US East B will test cross AZ latency
  4. Adding US West will test cross region latency

So, how do we do this?

Step 1: Code the Feature Flag

For this example let's use the connection string as the value of the feature flag. We can have the current value be redis://redis-11111.c1.us-central1-2.gce.cloud.redislabs.com:11111 and the new one be redis://internal-redis.example.com:6379. To use the value, we'll just modify the creation of the Redis client.

In simple terms, this will look something like:

String connStr = featureFlagClient.get("redis.connection-string");
return RedisClient.create(connStr).connect();

Implementing this is a bit more complex. The Redis connection is something you'll want to create as a Singleton, but if it's a singleton it will get created once and won't change until we restart the service. Since that's no fun, we'll need a strategy to have our code use a new connection when there is a change. We could do this by listening to Prefab change events, or we could use a Provider pattern that caches the connection until the connection string changes. Here's a Java example of the provider pattern:

Full Code of Provider Pattern

@Singleton
public Provider<StatefulRedisConnection<String, String>> getConfigControlledConnection(
  ConfigClient configClient
) {
  record ConnectionStringConnectionPair(
    String connectionString,
    StatefulRedisConnection<String, String> connection
  ) {}

  AtomicReference<ConnectionStringConnectionPair> currentConfigurationReference = new AtomicReference<>();

  return () ->
    currentConfigurationReference.updateAndGet(connPair -> {
      String currentConfigString = configClient
        .liveString("redis.connection-string")
        .get();
      if (connPair == null || !connPair.connectionString.equals(currentConfigString)) {
        return new ConnectionStringConnectionPair(
          currentConfigString,
          RedisClient.create(currentConfigString).connect()
        );
      }
      return connPair;
    })
    .connection;
}

Now, the bigger question. How do we:

  1. Configure our feature tool to randomize the 10% of nodes.
  2. Target the feature flag to just the nodes in US East A.

Step 2: Giving the Feature Flag Tool Infra Context

A brief detour into some context on "context". Most feature tools think of context as a simple map of strings. It usually looks something like this:

Typical Context for Product

{
  "key": "1454f868-9a41-4419-a242-d5a872ec5f04",
  "user_id": "123",
  "user_name": "John Doe",
  "team_id": "456",
  "team_name": "Foo Corp",
  "tier": "enterprise"
}

This starts out fine, but it can begin to feel icky as you add more and more things to the context. Is it ok to push the details of our deployment like the host.id in here? How much is too much? This is the "bag o' stuff" model and it's a bit like a junk drawer.

The bigger issue, however, is whether the tool allows us to randomize by something other than key. Our rollout plan is to test the new Redis on 10% of nodes, not 10% of users. If your feature flag tool isn't written with this use case in mind, this can be hard or impossible to do.

So yes, as developers, we need more information. Oftentimes we are operating with a user and team context, but we also need to know about things like the deployment, the cloud resource, and the device. A proper context or "multi-context" should look something like this:

Context For Developers

{
  "user": {
    "key": "1454f868-9a41-4419-a242-d5a872ec5f04", // a unique key for the user / anonymous user
    "name": "John Doe",
    "id": "123" // for non anonymous users
  },
  "team": {
    "key": 1,
    "name": "Team 1",
    "tier": "enterprise"
  },
  "device": {
    "ip": "19.122.43.123",
    "locale": "en_US",
    "appVersion": "1.0.1",
    "systemName": "Android",
    "systemVersion": "11.6"
  },
  "request": {
    "key": "c820567a-9f2d-4b3d-85e5-9ff4132d0e08", // a unique key for the request
    "url": "http://production.example.com/user/123",
    "path": "/user/123"
  },
  "deployment": {
    "key": "pod/user-service-bcddb8c8d-mxz6v",
    "namespace": "production",
    "instance-type": "m4.xlarge",
    "instance-id": "i-0a5b2c3d4e5f6g7h8",
    "SHA": "27bdd4f9-3530-46a6-8188-9d90467f086e"
  },
  "cloud": {
    "key": "i-07d3301208fe0a55a", // host-id
    "platform": "aws_ec2",
    "region": "us-east-1",
    "availability-zone": "us-east-1a",
    "host-type": "c4.large"
  }
}

This is a lot more information, but it's also a lot more useful and structured. The key piece here is that we are specifying cloud.key as the host id. This is the unique identifier for the host that we can use to randomize the rollout.

We can use this as our "sticky property" in the UI. This is the property that the client will use to randomize on. The host doesn't change so each node will stay in the bucket that it is originally assigned.

feature flag by cloud host
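If you're curious why a stable sticky property gives stable assignment: percentage rollouts are typically implemented by hashing the sticky key into a bucket. Here's an illustrative Python sketch of that idea (not Prefab's actual algorithm):

import hashlib

def in_rollout(sticky_key: str, percentage: float) -> bool:
    # Hash the sticky property (here, cloud.key) to a stable number in [0, 1]
    digest = hashlib.sha256(sticky_key.encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF
    return fraction < percentage

# The same host always hashes to the same bucket, so it stays in (or out of) the 10% group
print(in_rollout("i-07d3301208fe0a55a", 0.10))

Because the bucket depends only on cloud.key, restarting a pod or re-evaluating the flag doesn't shuffle hosts between groups.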

Step 3: Targeting the Feature Flag to an Availability Zone

The next step is simply to add a targeting rule. We should have all of the context attributes available to select from in the UI. We can select cloud.availability-zone and set it to us-east-1a.

feature flag by cloud host

Altogether that looks like this:

feature flag by cloud host

The feature flag is now enabled for 10% of the nodes in us-east-1a. If the node is not in that availability zone, it will get the default value.

Step 4: Observe Our Results

It's always nice to observe that things are working as expected. What percent of all of the nodes are receiving the new Redis connection string?

As we go through the steps of our rollout plan and verify performance and latency along the way, we should see the number of evaluations increasing for our internal Redis install.

feature flag by cloud host

Each bump in the graph should be related to our incremental approach to changing the rollout. If there are changes that aren't related, we may need to look into our assignment of the cloud.key, since that is how we are randomizing. We should also expect that the percentages here do not exactly match the percentages we set. For example, if we split 80/20 across 50 hosts, each host has an 80% chance of being in the group, so we'd expect about 40 hosts in that group, but seeing anywhere from 38 to 42 is perfectly normal due to random distribution.
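To put a number on that intuition, the group size follows a binomial distribution; a quick back-of-the-envelope check (plain math, nothing Prefab-specific):

from math import comb

# 50 hosts, each independently assigned to the 80% group: Binomial(n=50, p=0.8)
n, p = 50, 0.8
mean = n * p                                    # 40 hosts expected
std = (n * p * (1 - p)) ** 0.5                  # ~2.8 hosts of spread
p_38_to_42 = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(38, 43))
print(mean, round(std, 1), round(p_38_to_42, 2))  # 40.0, 2.8, ~0.6

So landing anywhere between 38 and 42 hosts is the common case, and hitting exactly 40 is far from guaranteed.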

If we want to dig into the numbers we're seeing here, we can select "show rules" to see all configuration changes on the chart and determine which feature flag rules lead to our output numbers.

feature flag by cloud host

Summary

This is a simple example, but it shows how we can use feature flags to manage a migration in a safe and incremental way. It also shows how we can use multi-context feature flags to make feature flags more powerful for developers and engineers.

Finally, we see how nice it is to have a feature flag tool that gives us insight into the breakdown of our evaluations and the ability to dig into the numbers to see what's going on.

· 4 min read
Jeff Dwyer

Introduction

Feature flags have become an integral part of our software development process, providing us the flexibility to release and test new features efficiently. While many feature flag tools offer similar functionalities, we had a unique vision of what would make our experience truly delightful. In this blog post, we'll share our personal journey with Prefab and how the introduction of variant assignment transformed our day-to-day feature flag management, leaving us happier with our own product.

What is Variant Assignment

Variant assignment is a task we encounter frequently - as developers, product managers, or sales representatives. Setting up a feature flag is great, but we almost always want to test it out. Variant assignment is when we specify that person X should be in a particular bucket. When I develop with a flag, I'll usually assign myself to one variant and then flip-flop a few times to verify functionality. Once it's released, I'll usually do the same for internal and external users, so they can check out the new feature. Unfortunately, this seemingly simple task can become tedious when dealing with tracking IDs and user data.

The Struggle with User Identification

The best practice for identifying a user is something that looks like:

prefab_context = {
  user: {
    key: "7971f1c7-30ba-456f-be00-ab798f03d3b8", // GUID assigned to cookie before user is created
    id: 4233, // User ID in our database
    name: "Jane Doe",
  },
};

This lets us keep a consistent view of someone from visitor to user, respects PII by not using email, and gives us a useful handle on the person by using their name.

But there's a problem.

When we want to put a user into a feature bucket, we typically know their name, but not their tracking GUID. Using Prefab for ourselves, we found that remembering or locating their unique tracking IDs often disrupted our workflow. The constant back-and-forth between tools to find user information became a pain point we wanted to address.

I need to know my GUID

Prefab's Empowering Solution: Our Own Creation

As creators of Prefab, we had the freedom to build the feature we craved - a seamless variant assignment process. What that should look like was pretty clear. I'd like to be able to search for a user by name, and then click to assign them to a variant.

Search for our user

There's a reason that most open-source and home-grown solutions don't have this feature though. A typical feature flag back end doesn't have any knowledge of the user context. Simple feature flag systems just store the flag rules and ship them out to the client SDKs. In order to build this feature, we need the client SDKs to start phoning home with the contexts that they've seen.

Building the Feature

Building the clients correctly was a good challenge. We wanted to make sure that we didn't add any unnecessary overhead to the client SDKs. To do that, the clients are built to only send new contexts that they haven't uploaded and to behave appropriately under heavy load.
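The dedup idea itself is conceptually simple; here's a rough Python sketch of it (illustrative only, not the actual SDK internals):

import hashlib
import json

_already_reported = set()

def should_report(context: dict) -> bool:
    # Fingerprint the context and only phone home the ones we haven't uploaded yet
    fingerprint = hashlib.sha256(json.dumps(context, sort_keys=True).encode()).hexdigest()
    if fingerprint in _already_reported:
        return False
    _already_reported.add(fingerprint)
    return True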

By capturing the context server side, we were able to create profile pages for each user. This change allowed us to search for users effortlessly, using their names or any other information from the context. No more obscure tracking GUIDs - just straightforward assignment from the user's profile page.

Assign the variant to the user

Staying in the Flow with Prefab

The addition of variant assignment in Prefab was a transformative moment for us. The ability to swiftly assign variants without any context switching or hunting for IDs made a remarkable difference in our day-to-day experience. We found ourselves in a delightful flow, focused on core development work, testing, and refining features.

As you evaluate Feature Flag tools or consider building your own, I'd encourage you to strongly consider adding this capability to your decision process. You can live without it, but you deserve nice things. Luckily with Prefab there's no reason not to give yourself a present.

Giving ourself a present

· 11 min read
Jeff Dwyer

Let's look at 7 top feature flag providers and see how they compare.

Feature Flags are great, but there are so many tools to choose from. In this comparison, we will evaluate 7 tools against the same test case. We'll test: Flipper, Prefab, Unleash, Flagsmith, LaunchDarkly, ConfigCat & Devcycle. We'll try to perform the same test case in each tool and share our results & screenshots. This should be a good way to quickly compare the UIs and features.

The Test Case

In order to put these tools through their paces, we'll use a straightforward test case. Here's our scenario:

As a developer on the checkout team I would like to test 2 new checkout flows:

  1. A new multi-page-version.
  2. A new single-page-version.
  3. A control of the existing checkout experience.

We have 4 targeting requirements:

  1. We don't test on our enterprise customers, so I want team.tier = enterprise to get the control.
  2. I want the existing beta-users and internal-users to try the multi-page-version. Beta users is a list of user ids. Ideally I can store this list in one place and reuse it. Internal users is anyone matching an email address ending in example.com.
  3. Two teams foo.com and bar.com complained about complexity so they should evaluate the single-page-version.
  4. Everyone else should get a 33/33/33 split of the 3 versions.

Okay, let's see how our contestants do! Choose your feature flags

FlipperCloud

FlipperCloud is a feature flagging system born out of a popular open source ruby gem. It's particularly popular among Ruby on Rails developers due to its ergonomic design and tight integration with the Rails ecosystem. Flipper has both an open-source library and a cloud-based service.

Trying to set up our test case in Flipper.Cloud was a bit of a challenge. Flipper does not support multi-variate flags; each flag can only be a boolean. To fully nail our test case I would need to hack around this and set up 3 flags, one for each variant. Let's lower the bar a bit and change the test case to just have 2 variants, on and off.

Our next requirement was to avoid the enterprise tier. Flipper.Cloud does not support specifying off for an actor or group. So we'll probably need to do this in code:

if team.tier != :enterprise && flipper.enabled?(:experiment)
# do experiment
end

For our beta customers and target customers, we are able to target groups. Flipper is interesting in that again the group definition happens in code. This is pretty different from the other tools we are looking at, but could be convenient for a Rails monolith with complex targeting logic.

Flipper.register(:beta_customers) do |actor|
  actor.is_in_beta_group?
end
Flipper.register(:target_customers) do |actor|
  actor.email.ends_with?("@foo.com") || actor.email.ends_with?("@example.com")
end

The resulting UI looks like: Flipper.Cloud

FlipperCloud Takeaways

  • Best for: Teams committed to being a Rails monolith and who don't need flags in JS.
  • Price: $20 / Seat
  • Test Case: 🙁 No support for multi-variate flags.
  • Features: Audit logs.
  • Architecture: Uses server-side evaluation, with adapters. Updates are polling.
  • Notes:
  1. No support for non boolean flags
  2. Can only use targeting to force into flag, not to exclude.
  3. Uses unusual "actors" and "groups" terminology.
  4. Group definitions are in code, not in UI.

Unleash

Unleash is an open-source feature flagging system with a strong focus on privacy. Let's see how it does with our test case.

Unleash has a pretty different approach to setting up our beta group and enterprise segments. My initial approach was to add these in as "strategies" like this.

Unleash Overrides

I was able to set up segments and the matching rules as you would expect; however, this doesn't work! Strategies don't include a value. These fine-grained rules only determine whether we should return the whole variant set or not.

Instead, we are meant to set overrides on the variants themselves. Unleash Overrides

This works for our Enterprise tier which was a simple property match.

But for our beta-group this functionality doesn't allow us to use our shared segments.

For the user.email, we aren't able to use an ends-with operator on this screen. We can only use an equality match.

So Unleash passes on the team-tier and fails on the other two.

The last note on UI here is that the overview page is, in my subjective opinion, confusing. Unleash Overview It's very hard to understand what's going on because the logic is split between the rules and variant pages. And if we dive into the variants page, we still can't see the overrides without going to the edit screen. Unleash Variant

Unleash Takeaways

  • Best for: Privacy / EU Compliance.
  • Price: Starts at $80 per month for 5 users.
  • Test Case: 😐 Challenges with targeting UI.
  • Architecture: Interesting architecture supporting enhanced privacy because customer data stays on-premises or in cloud proxies you run.
  • Notes:
  1. Problematic targeting UI doesn't pass our test case.
  2. No streaming updates, polling only.
  3. Nice Demo instance you can play with.

Prefab

Prefab is a newer entrant into the FeatureFlag market. I'm biased, but I think it passed the test with flying colors.

  • We are able to define the 3 variants for our flag.
  • We can setup a property match for the enterprise tier.
  • We can use shared segments to target the beta and internal customers.
  • We can do a 33% rollout across the rest of our customers.

Prefab

Prefab has a flexible context system that allows you to set context at the beginning of the request so you don't need to specify the context for every flag evaluation.

Prefab explains how to do this in the UI with helpful code suggestions. You can see the context you'll need to evaluate the flag. Prefab

Prefab Takeaways

  • Best for: Teams looking for real-time updates, robust resiliency, and competitive pricing.
  • Price: Super competitive pricing. $1 / pod charged minutely. $1 / 10k client MAU.
  • Test Case: 😀 Strong Pass.
  • Features: Robust audit logging, shared segments, real-time updates. Missing features: Full experimentation suite, reporting & advanced ACL / roles.
  • Architecture: Server-side evaluation. Real-time updates with SSE. CDN backed reliability story.
  • Notes:
  1. Clients in Ruby, Java, Node, Python, JS & React.
  2. Also provides other developer experience feature like dynamic log levels.
  3. Good story around local testing with default files.

Flagsmith

Our next comparison is with Flagsmith. Flagsmith is also open-source and touts itself as a good option for cloud or on-premises deployments.

Flagsmith has good support for multivariate flags, so that's a relief.

Flagsmith

The overrides UI is interesting. We specify rules and then specify the weights for each variant. This worked, but led to a very long page of rules.

I also found the UI unclear for how to create the beta group. If I want the beta group to be user.id 1 or 2, it wasn't clear to me whether to use = and comma-delimit or use a regex. Flagsmith

  • Best for: Flexible deployments & on-premises hosting.
  • Price: Starts from $45 for 3 users per month. A free version with limited functionalities for a single user is available.
  • Test Case: Strong Pass 😃 Shared Segments and multivariate support.
  • Features: Shared Segments. Remote configs, A/B testing, integration with popular analytics engines.
  • Architecture: Open source, provides hosted API for easier deployment during development cycles.
  • Notes:
  1. Flag targeting is split across multiple UI tabs, can make it difficult to get an overview of flag settings.
  2. Targeting individual users only available on higher plan tier.
  3. No streaming updates, polling only.

LaunchDarkly

LaunchDarkly is a well known name in the feature flagging space. They have a robust feature set and a strong focus on enterprise customers. Let's see how they do with our test case.

Unsurprisingly LaunchDarkly does a great job of handling our test case. We can setup a property match for the enterprise tier, a shared segment for the beta group and internal users, and a user attribute for the email.

The UI is powerful, and you can see some of the more advanced enterprise features like workflows and prerequisites.

LaunchDarkly

LaunchDarkly has all the features you'll need, but you're going to pay for it. The pricing is based on the number of users you have. Anecdotally this makes it quite challenging for larger orgs to rationalize the cost. A number of teams I've talked to end up sharing accounts, or building internal tools around the API in order to save money on seats, though of course this is a bad idea and negates the benefits of permissions and audit logging.

  • Best for: Price Insensitive Enterprise.
  • Test Case: 😀 Strong Pass.
  • Price: Starts from $10 per user per month, however this is a low-ball. Many features / kickers force enterprise adoption. I've heard quotes in the range of "25 users for 30k a bucket" which is roughly $100/user/month.
  • Features: All the basics plus: scheduling and workflows. AB testing & advanced permissions.
  • Architecture: Enables the dev team to wrap code with feature flags and deploy it safely, ability to segment user base based on various attributes
  • Notes: Flag editing view gets long, no read only overview to see flag rules at a glance.

ConfigCat

ConfigCat is a feature flagging service that also supports remote configuration.

ConfigCat did a good job supporting our test case. We can set up shared segments for the beta group and internal users, and a user attribute for the email. Config cat ui One important note is that it does not have a concept of "variants"; each rule returns a simple string. This means that you could mistype a variant from one rule to another, which is just something to be aware of.

ConfigCat Takeaways

  • Best for: Developer focussed configuration.
  • Test Case: 😀 Strong Pass.
  • Price: Free for up to 10 flags. Tiers at $99 and then $299 for unlimited flags and > 3 segments.
  • Features: Shared segments, Webhooks, ZombieFlags report
  • Architecture: Polling, server side evaluation.
  • Notes:
  1. No concept of variants.
  2. I couldn’t do email ends_with_one_of. I could do match one of or contains.
  3. Updates via polling, not real-time streaming.

Devcycle

Devcycle is a feature flagging service focussed on developers. Let's see how it fares on our test case.

Overall, Devcycle did well and I was able to set up our test case. The main knock was the lack of shared segments, meaning that I'll need to define the Beta group in multiple flags and risk them getting out of sync. Devcycle UI The resulting UI does end up very long, making it a bit of a challenge to get an overview of the test.

The UI was generally straightforward; however, I found it annoying to have to specify a name for each of my rules. Devcycle UI

DevCycle Takeaways

  • Best for: Developer focussed configuration.
  • Test Case: 😐 Pass, but no shared segments.
  • Price: Free for up to 1000 MAU. $500 for 50k MAU+. Pricing axis on client side MAU and Events.
  • Features: AB Testing with Metrics
  • Architecture: Streaming & Polling, server side evaluation.
  • Notes:
  1. No shared segments
  2. Redundancy in UI meant it was hard to get an overview of our test.
  3. Offers typed context

Summary

That's a wrap. We looked at 7 different flag providers and how they handle a common test case. We gave "Strong pass" to 4 of the tools. 2 of the tools got a "Pass" because they lacked segments and 1 got a "Fail" for not supporting multi-variate flags.

| Competitor | Best For | Test | Cost | Pricing |
| --- | --- | --- | --- | --- |
| Flipper Cloud | Rails Only | 🙁 | 💰💰 | $20 / Seat Link |
| Prefab | Functionality for Less at any Scale | 😃 | 💰 | $1 / Connection. Usage based. Link |
| Unleash | Privacy / EU Compliance | 😐 | 💰💰 | $15 / seat Link |
| Flagsmith | Flexible deployments & On-Prem Deployments | 😃 | 💰💰 | $20 / seat Link |
| LaunchDarkly | Price Insensitive Enterprise | 😃 | 💰💰💰💰 | $17 / seat, $70+ / seat for all features Link |
| ConfigCat | Developer focussed configuration | 😃 | 💰 | $99 for > 10 flags. Usage based. Link |
| Devcycle | Developer focussed with Metrics | 😐 | 💰💰 | $25 for 1000 MAU. $500 for 50k MAU. Usage based. Link |

· 4 min read
Jeff Dwyer

Question: how do you change the log level of a running Rails application?

Answer: You don't.

Technically, the Rails docs inform us that we just need to do:

config.log_level = :warn # In any environment initializer, or
Rails.logger.level = 0 # at any time

However, that "at any time" is doing a lot of heavy lifting. You'd have to ssh into each server, edit the config file, restart the server, and then hope that you didn't make a typo.

As you can imagine, I'm here to show you something better, but first let's think about why you might want to change the log level of a running application in the first place and "what problem are we trying to solve".

Why change log levels?

Typically we want to change log levels because there's a bug that's hard to reproduce locally and there's just no substitute for understanding exactly what is happening in our production or staging environment. Trying to use config.log_level = :debug to solve this is like trying to do surgery with a sledgehammer. You're going to end up with a lot of collateral damage (log aggregation expense) and a lot of wasted time.

This is because config.log_level = :debug is so non-specific. If you're trying to debug user 4234's billing issue, do you need to see the logging from every single template render for every user? Probably not.

If you had a magic wand, what you'd really love would be able to target the logging to:

  • The billing code
  • The billing Sidekiq job
  • User 4234
  • Just for the next hour while you're debugging

But can we really do that? Yes, we can, and you're 10 minutes away from trying it yourself.

So, how do we do that?

First things first: we aren't going to change your log aggregator. You can still use Datadog or Logtail or whatever you like. We're just going to help you get much more value out of them.

Second, we aren't going to change your logging code. All of your Rails.logger.info or Rails.logger.debug are perfect just the way they are.

Here's what it does. Pseudocode is worth 1000 words, so here's the pseudocode of what happens:

class PrefabLogger < ::Logger
  # path = app.models.billing.calculate_tax
  # level = :debug
  def log(message, path, level)
    # only emit the message if it clears the dynamically-configured level for this path
    if level >= Prefab.get("log-level-#{path}")
      # ...do the logging
    end
  end
end

class Prefab
  def get(key)
    @dynamic_config_map.get(key, current_context)
  end
end

Rails.logger = PrefabLogger

Now of course we've moved the heavy lifting to our @dynamic_config_map, but this is pretty simple to conceptualize. The map is a threadsafe Concurrent::Map.new. It will be populated from a CDN with the latest values, and it will be updated in real time as the values change. The values in the map can have rules inside them so that we can filter the log levels based on the context of the request.

To get the current_context in the above pseudocode, we'll just set some properties in an around_action in our ApplicationController.

Dynamic logging at its core is just a special case of dynamic configuration. It's a simple solution that gives us a ton of power.

The User Experience

So, how do we actually use this? The Prefab UI has us covered. In the LogLevel UI, we'll see a list of all the log levels that are currently being used in our application. For any package, any class, and even any method, we can simply click and change the log level of any of these loggers.

Change Log Levels

We can also target specific loggers by using a targeted logger. This has the same targeting power as the Prefab Feature Flag system, so you'll have no problem laser targeting the loggers that you want to change.

Change Log Levels

That's it! Truly, it's that simple. It's also very, very easy to give this a try. Cut a branch, sign up, throw in your API key, set the logger, run your app, and start using dynamic logging in just a few minutes. Full documentation for the ruby-sdk.

Conclusion

I hope you enjoyed this quick tour of dynamic logging. It's a simple solution to a common problem, and once you get used to it, you'll wonder how you ever lived without it. Over time it will start to change how you think about logging. Without it, there's not a ton of point in putting in Rails.logger.debug statements, because you're never going to see them; but with it, you can start to think about logging as an as-needed tool that you have in your pocket for when you need it most.

So, I encourage you to give dynamic logging a try, and experience the benefits of fine-tuning your log output. Happy debugging! 🚀