The World's Fastest Client-Side Feature Flags
Using a feature flag provider in your client-side code can be slow, forcing you into loading spinners or janky content pop-in.
Your users deserve 0ms client-side feature flags.
Most client-side feature flags are built around a simple HTTPS request.
Your app makes a request to a server, including the context of the current user (like their plan SKU, their location, or their role). The server responds with a JSON object containing the true/false values of each flag for that user.
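Concretely, that exchange might look like the following sketch. The endpoint, query parameter, and `X-Api-Key` header are hypothetical, for illustration only; every provider's API differs:

```javascript
// Hypothetical flag-evaluation request; the endpoint and parameter
// names are illustrative, not any specific provider's API.
function buildEvalUrl(user) {
  const context = { plan: user.plan, location: user.location, role: user.role };
  const params = new URLSearchParams({ context: JSON.stringify(context) });
  return `https://flags.example.com/api/eval?${params}`;
}

// The server evaluates every flag rule against that context and
// responds with plain booleans, e.g. { "new-checkout": true }.
async function fetchFlags(user, apiKey) {
  // A custom header like this is what forces the browser to send a
  // CORS preflight (OPTIONS) request before the actual GET.
  const res = await fetch(buildEvalUrl(user), {
    headers: { "X-Api-Key": apiKey },
  });
  return res.json();
}
```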
You get the flags you need, but what does it cost? ⏳ Every HTTP request required to load your content is a tax your users pay to use your site. They're on your site for a reason, but first they have to wait.
Let's keep a list of the price we pay for client-side feature flags. We'll treat it like a diff — new items will be green and removed items will be red.
- TCP handshake / SSL negotiation
- OPTIONS request for preflight check before actual request
- OPTIONS latency caused by physical distance to API server
- Time for API server to generate OPTIONS response
- User's network speed to download the OPTIONS response (negligible)
- GET request for JSON
- GET latency caused by physical distance to API server
- Time for API server to generate JSON response
- User's network speed to download the JSON response
Ugh. If you want to use these feature flags to render your UI, you have to wait for the evaluated-flag JSON response to come back. This is a render-blocking request, so you end up with spinners. If you choose not to block rendering, you end up with jank (where content pops in or out). Either way, we'll add Degraded UX to the list.
No wonder developers are reluctant to add feature flags to their client-side code.
(It is actually worse, but we'll let the DNS lookup slide since hopefully it has a reasonable TTL and is cached. I'm not even going to get into queued requests due to too many concurrent HTTP requests in the browser.)
Some providers might stop here.
We can do better.
Let's add a CDN to the Feature Flag provider
Let's assume the CDN has global distribution and can cache the pre-flight/CORS OPTIONS request as well.
We'll use some hash of the user context as the cache key and assume the Flag provider will purge the cache when content changes.
Now the cached experience is better, even though the uncached experience is unchanged.
**CDN Cached**

- TCP handshake / SSL negotiation
- OPTIONS request for preflight check before actual request
- OPTIONS latency caused by physical distance to ~~API~~ CDN server
- ~~Time for API server to generate OPTIONS response~~
- User's network speed to download the OPTIONS response (negligible)
- GET request for JSON
- GET latency caused by physical distance to ~~API~~ CDN server
- ~~Time for API server to generate JSON response~~
- User's network speed to download the JSON response
- Degraded UX

**Uncached**

- TCP handshake / SSL negotiation
- OPTIONS request for preflight check before actual request
- OPTIONS latency caused by physical distance to API server
- Time for API server to generate OPTIONS response
- User's network speed to download the OPTIONS response (negligible)
- GET request for JSON
- GET latency caused by physical distance to API server
- Time for API server to generate JSON response
- User's network speed to download the JSON response
- Degraded UX
For the cached experience, we've removed the time spent generating the responses. The jank/render-blocking duration should also be shorter thanks to the CDN being globally distributed.
We can do better.
Let's give the OPTIONS request a long Max-Age
We decide our OPTIONS response should always be a `204 No Content` (if we need to do any header negotiation we can do it on the actual JSON request). We can set the `Access-Control-Max-Age` to effectively forever. We'll do this on the API and the CDN.
We'll consider this removed entirely even though it'll have to happen once per user (per browser).
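As a framework-agnostic sketch, the response we want the API and CDN to serve for every preflight looks like this (the allowed origin and headers are placeholders; note that browsers cap `Access-Control-Max-Age`, Chromium at 2 hours and Firefox at 24, so "forever" really means "rarely"):

```javascript
// Sketch of the preflight response: a 204 with a long
// Access-Control-Max-Age so the browser can skip future OPTIONS
// requests to this endpoint. The allowed origin/headers are
// placeholders for whatever your API actually needs.
function preflightResponse() {
  return {
    status: 204, // No Content: nothing to generate, nothing to download
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, OPTIONS",
      "Access-Control-Allow-Headers": "X-Api-Key",
      "Access-Control-Max-Age": "86400", // 24h, the practical maximum
    },
  };
}
```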
**CDN Cached**

- TCP handshake / SSL negotiation
- ~~OPTIONS request for preflight check before actual request~~
- ~~OPTIONS latency caused by physical distance to CDN server~~
- ~~User's network speed to download the OPTIONS response (negligible)~~
- GET request for JSON
- GET latency caused by physical distance to CDN server
- User's network speed to download the JSON response
- Degraded UX

**Uncached**

- TCP handshake / SSL negotiation
- ~~OPTIONS request for preflight check before actual request~~
- ~~OPTIONS latency caused by CDN's physical distance to the API server~~
- ~~Time for API server to generate OPTIONS response~~
- ~~User's network speed to download the OPTIONS response (negligible)~~
- GET request for JSON
- GET latency caused by CDN's physical distance to the API server
- Time for API server to generate JSON response
- User's network speed to download the JSON response
- Degraded UX
This helps us get to the content faster.
We can do better.
Let's introduce Edge Nodes to minimize the impact of geographic distance
We'll use something like Fly.io to run our API server on the edge. This doesn't help the cached response but the uncached response will be faster.
**CDN Cached**

- TCP handshake / SSL negotiation
- GET request for JSON
- GET latency caused by physical distance to CDN server
- User's network speed to download the JSON response
- Degraded UX

**Uncached**

- TCP handshake / SSL negotiation
- GET request for JSON
- GET latency caused by CDN's physical distance to the ~~API~~ Edge server
- Time for ~~API~~ Edge server to generate JSON response
- User's network speed to download the JSON response
- Degraded UX
Now the first (uncached) experience is dramatically faster.
Unfortunately, we can't get to 0ms because we're still stuck with an HTTP request. 😢
If you're OK with temporarily stale data and potential content pop-in, you can introduce a localStorage cache. Your page can be rendered with the cached data while you fetch the latest. But, again, you might have some jank where values change.
This is about as good as it gets for purely client-side feature flags.
But Prefab isn't just client-side feature flags.
Got a hybrid app with a server in the mix?
We can do better!
Let's introduce Server-side Bootstrapping
If you're running a Prefab SDK on your server, you already have all the rulesets for your flags and configs in-memory (and kept up-to-date via SSE). We can bootstrap the page with this data to avoid the HTTP request altogether.
We'll render a script tag to the document using `Prefab.generate_js_stub(context)`. This gives us a JavaScript stub so you can call `prefab.isEnabled("my.feature.flag")` or `prefab.get("my.config")` just as you would with the full JavaScript library.
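To make that concrete, here's a rough sketch of the kind of stub that could end up inlined in the page. The actual output of `Prefab.generate_js_stub` may look different, and the flag values shown are examples:

```javascript
// Roughly what a server-rendered stub could look like once inlined in
// a <script> tag. The flag values were already evaluated server-side,
// so reading them costs no HTTP request at all.
const prefab = (function (data) {
  return {
    isEnabled: (key) => data[key] === true,
    get: (key) => data[key],
  };
})({
  "my.feature.flag": true, // example values, evaluated for this user
  "my.config": "compact",
});
```

Client code then reads flags synchronously, with no network in sight.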
**Bootstrapped**

- Minimal CPU time on server
- ~~TCP handshake / SSL negotiation~~
- ~~GET request for JSON~~
- ~~GET latency caused by physical distance to CDN server~~
- ~~User's network speed to download the JSON response~~
- ~~Degraded UX~~
We've almost made it to 0ms client-side feature flags (besides the minimal time spent building the JS data on your server).
This assumes you have the full context of the user on the backend. If your context is enriched by client-side data, you'll need to make a request to the server to get the flags BUT you can still use the bootstrapped data to avoid the majority of jank. More on this later.
We can do better.
Let's introduce an LRU cache for the bootstrapping
If we've seen a context before and flag data hasn't changed since then, we can fetch it from Memcached (or similar) and avoid any work building the JavaScript data.
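A sketch of that lookup, using an in-process `Map` as a stand-in for Memcached. Keying on the context hash plus the SDK's highwater mark means any flag change naturally misses the cache; all names here are illustrative:

```javascript
// Stand-in for Memcached; a real deployment would use a shared cache
// with an eviction policy (LRU or otherwise).
const stubCache = new Map();

// buildStub is the expensive step: evaluating flags and generating the
// JavaScript stub for this context.
function cachedBootstrap(contextHash, highwaterMark, buildStub) {
  const key = `${contextHash}:${highwaterMark}`;
  if (!stubCache.has(key)) {
    stubCache.set(key, buildStub()); // pay the build cost only once
  }
  return stubCache.get(key);
}
```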
**Bootstrapped Cached**

- ~~Minimal CPU load on server~~

**Bootstrapped**

- Minimal CPU load on server
🎉 There you go. 0ms client-side feature flags. We did it!
If you don't need client-side context for evaluating the flags, you're done. Your users never wait on anything and you can avoid loading spinners and jank.
But what if you really need the flags to consider the user's client-side-enriched context? Do you need to make a request on the client side every time?
No worries. We can do better.
Let's add localStorage as another caching layer
I know, I know: we mentioned localStorage and its downsides from a client-side-only feature-flag perspective above. But let's look at how it can work together with our server-side SDK.
We can store the current flag payload in localStorage and check it against both the enriched context and our server-side SDK's highwater mark for the last time data was updated.
If the context is the same and the data hasn't changed, we can use the localStorage data and avoid the HTTP request.
If the data has changed, we can fetch the latest (using all the improvements we've made above — CDN, edge, etc.) and then update the localStorage data.
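That check could look something like this sketch. The storage key, payload shape, and the idea that the server bootstraps its current highwater mark into the page are assumptions for illustration:

```javascript
// serverHighwaterMark is assumed to be bootstrapped into the page by
// the server-side SDK; fetchLatest is the request path we already
// optimized (CDN, edge, etc.). `storage` is injectable so this can
// run outside a browser too.
function loadFlags(contextHash, serverHighwaterMark, fetchLatest, storage) {
  const raw = storage.getItem("flag-cache");
  if (raw) {
    const cached = JSON.parse(raw);
    // Same context, nothing changed since we cached: use it, skip HTTP.
    if (
      cached.contextHash === contextHash &&
      cached.highwaterMark >= serverHighwaterMark
    ) {
      return Promise.resolve(cached.flags);
    }
  }
  // Otherwise fetch the latest and refresh the cache for next time.
  return fetchLatest().then((flags) => {
    storage.setItem(
      "flag-cache",
      JSON.stringify({ contextHash, highwaterMark: serverHighwaterMark, flags })
    );
    return flags;
  });
}
```

In the browser you'd pass `window.localStorage` as `storage`.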
**Bootstrapped Cached & localStorage Cache-hit**

- This item intentionally left blank. We already hit 0ms here.

**Bootstrapped & localStorage Cache-miss but CDN Cached**

- Minimal CPU load on server
- TCP handshake / SSL negotiation
- GET request for JSON
- GET latency caused by physical distance to CDN server
- User's network speed to download the JSON response
- ~~Degraded UX~~ Minor Jank where values change

**Bootstrapped & localStorage Cache-miss and CDN Cache-miss**

- Minimal CPU load on server
- TCP handshake / SSL negotiation
- GET request for JSON
- GET latency caused by CDN's physical distance to the ~~API~~ Edge server
- Time for ~~API~~ Edge server to generate JSON response
- User's network speed to download the JSON response
- ~~Degraded UX~~ Minor Jank where values change
This is the best of all possible worlds. Your page is bootstrapped with enough flag data to render almost entirely correctly. If the flag data that comes back from the HTTP request has changed, you'll have minor jank in the places on the page where the relevant values changed. If the data hasn't changed, you'll have no jank at all, and you can still be confident you evaluated the flags with the most-enriched context possible.