
26 posts tagged with "Engineering"


· 3 min read
Jeffrey Chupp

If you're just getting started with the Language Server Protocol (LSP), you might wonder which language to build your Language Server (LS) in. This article will help you pick the right language. You can choose anything (seriously, I built a toy Language Server in Bash). There's no universally correct answer, but there is a correct one for you.


Consideration 1: Audience

Your audience is the most important consideration. If you're writing a language server for a new Python web framework (Language Servers aren't just for languages, people), then implementing the language server in Java might raise a few eyebrows.

The audience for a Python framework is less likely to contribute to a language server written in a language they're less familiar with. There's nothing wrong with Java (IMHO), but the biases associated with languages could hurt adoption.

If your language server is for a specific language or tooling tied to a specific language, you should probably use the same language to build the server.

Consideration 2: Personal Preference

If you're building your language server as a hobby, the first user is yourself. Optimize for your own enjoyment.

You’re less likely to have fun building if you pick a popular language with unfamiliar (or poor) ergonomics. If you're not having fun, you're less likely to get very far with the project, and your language server won't ever matter to anyone else anyway.

This doesn't mean you should limit yourself to languages you're already an expert in -- building a language server is a great way to learn how a new language handles

  • stdin/stdout and other communication channels
  • concurrency and parallelism
  • error handling
  • testing, debugging, profiling
  • etc.

Consider picking a language you'll enjoy using.

Non-consideration: Performance

Unless you're building a language server to replace one that is demonstrably slow, you should probably avoid optimizing your decision for performance. Measure first before you start hand-coding assembly code.

You're a developer; I get it. You want to think performance matters. Suppose computationally intensive behaviors are required to calculate diagnostics/code actions/etc. In that case, you can always shell out to something tuned for performance and still keep the Language Server itself implemented at a higher level.

Don't worry about performance. It isn't important at first, and you have options later.
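If you do eventually hit that wall, shelling out is straightforward. Here's a hypothetical Node sketch; the fast-linter binary and its flags are invented for illustration:

import { execFile } from "node:child_process";

const filePath = "src/example.ts";

// Hypothetical: delegate an expensive computation to a fast native binary,
// keeping the Language Server itself at a higher level.
execFile("fast-linter", ["--json", filePath], (err, stdout) => {
  if (err) return; // a real server would report this via window/logMessage
  const diagnostics = JSON.parse(stdout);
  // ...map these into LSP Diagnostic objects and publish them to the client
  console.error(`got ${diagnostics.length} diagnostics`);
});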

Non-consideration: Ecosystem and Libraries

Many languages already have libraries that provide abstractions to help you write language servers. These can jump-start your development but aren't going to make or break your project.

You have all the building blocks you need if you can read and write over stdin/stdout and encode and decode JSON.

Learn more and build alongside me in my LSP From Scratch series.

You can build a language server without third-party libraries.
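To make that concrete, here's a rough TypeScript sketch of the receiving half: parsing Content-Length-framed JSON-RPC messages from stdin. This is a minimal illustration (it assumes UTF-8 input and skips error handling), not production code:

let buffer = "";

process.stdin.setEncoding("utf8");
process.stdin.on("data", (chunk) => {
  buffer += chunk;

  // Messages are framed as: "Content-Length: N\r\n\r\n" plus N bytes of JSON.
  while (true) {
    const headerEnd = buffer.indexOf("\r\n\r\n");
    if (headerEnd === -1) return; // wait for a complete header

    const match = /Content-Length: (\d+)/i.exec(buffer.slice(0, headerEnd));
    if (!match) return;

    const length = parseInt(match[1], 10);
    const start = headerEnd + 4;
    if (buffer.length < start + length) return; // wait for the full body

    const message = JSON.parse(buffer.slice(start, start + length));
    buffer = buffer.slice(start + length);

    // Dispatch on message.method and write a framed JSON-RPC response to stdout.
    console.error(`received: ${message.method}`);
  }
});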

What If There's No Clear Winner?

If the considerations above haven't helped you pick a clear winner, choose TypeScript (or, if you must, JavaScript).

The first-party libraries (e.g., vscode-languageserver-node) are written in TypeScript, and the community and ecosystem are excellent. A discussion on the vscode-languageserver-node project often leads to an update to the spec itself.

As a bonus, servers written in TypeScript (and JavaScript) can be bundled inside a VS Code extension and be available in the VS Code Marketplace as a single download. I've put up a Minimum Viable VS Code Language Server Extension repo where you can see how this all fits together.

All things being equal, choose TypeScript.

· 6 min read
Andrew Yip

It's common to use static site generators like Jekyll or Docusaurus for marketing or documentation websites. However, it's not always easy to run A/B tests when using these tools.

Prefab makes it simple. In this post we'll show how to set up an A/B test on a statically-generated Docusaurus website. We'll also show you how to send your experiment exposures to an analytics tool. We'll be using Posthog, but the process should be very similar for any analytics tool that has a JS client.

Installing Prefab

This step is the same as for adding Prefab to any other React project.

npm install @prefab-cloud/prefab-cloud-react

Initializing Prefab in Docusaurus

We recommend using the PrefabProvider component from our React library. In a normal React application, you would insert this component somewhere near the top level of your app. For a Docusaurus site, the easiest place to add it is in the Root component. That way Prefab will be available for experimentation on any page of your site.

tip

If you haven't already swizzled the Root component, here's a link to the Docusaurus docs for how to do it: https://docusaurus.io/docs/swizzling#wrapper-your-site-with-root

Everything that we're going to do here needs to run client side, so we'll start by adding the Docusaurus useIsBrowser hook to our Root component.

import React from "react";
import useIsBrowser from "@docusaurus/useIsBrowser";

export default function Root({ children }) {
  const isBrowser = useIsBrowser();

  if (isBrowser) {
    // do client stuff
  }

  return <>{children}</>;
}

This is the basic initialization for the Prefab client.

import React from "react";
import useIsBrowser from "@docusaurus/useIsBrowser";
import { PrefabProvider } from "@prefab-cloud/prefab-cloud-react";

export default function Root({ children }) {
  const isBrowser = useIsBrowser();

  if (isBrowser) {
    const onError = (error) => {
      console.log(error);
    };

    return (
      <PrefabProvider apiKey={"YOUR_CLIENT_API_KEY"} onError={onError}>
        {children}
      </PrefabProvider>
    );
  }

  return <>{children}</>;
}

Adding Context for Consistent Bucketing

Often A/B tests are bucketed based on users. To do that, we need some consistent way to identify the user, even if they're not logged in...which is usually the case for a static site. Luckily you can probably get an identifier from whatever analytics tool you have installed, or you can generate one yourself.

const uniqueIdentifier = window.posthog?.get_distinct_id();
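If you don't have an analytics client to lean on, here's a minimal sketch of generating and persisting an identifier yourself (the localStorage key name is arbitrary):

// Hypothetical fallback: mint and persist an anonymous id ourselves.
let uniqueIdentifier = window.posthog?.get_distinct_id();

if (!uniqueIdentifier) {
  uniqueIdentifier = localStorage.getItem("anon-id") ?? crypto.randomUUID();
  localStorage.setItem("anon-id", uniqueIdentifier);
}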

Once you have the identifier, you can pass it to the Prefab client as context.

const contextAttributes = {
  user: { key: uniqueIdentifier },
};

<PrefabProvider
  ...
  contextAttributes={contextAttributes}
  ...
>
  {children}
</PrefabProvider>

tip

We have some opinions about why you might want to generate your own unique tracking ID.

Tracking Experiment Exposures

Your experiment is only going to be useful if you have data to analyze. Prefab is designed to work with whatever analysis tool you already have, so you don't have a competing source of truth. To do this we make it easy to forward exposure events to your tool of choice.

Typically you will have initialized your tracking library as part of the Docusaurus config. You can then provide an afterEvaluationCallback wrapper function to the Prefab client. This will be called after each use of isEnabled or get to record the flag evaluation and resulting value. In this example we're using the Posthog analytics platform.

<PrefabProvider
  ...
  afterEvaluationCallback={(key, value) => {
    window.posthog?.capture("Feature Flag Evaluation", {
      key, // this is the feature flag name, e.g. "my-experiment"
      value, // this is the active flag variant, e.g. true, "control", etc.
    });
  }}
  ...
>
  {children}
</PrefabProvider>

Here's an example chart from Posthog showing an experiment funnel going from experiment exposure to viewing any other page.

Prefab experiment analysis
tip

Prefab also provides evaluation charts for each feature flag, which you can find under the Evaluations tab on the flag detail page. This telemetry is opt-in, so you need to pass collectEvaluationSummaries={true} to PrefabProvider if you want the data collected. While these are lossy and not a substitute for analysis in your analytics tool of choice, they can be useful for troubleshooting experiment setup. Below is an example of an experiment with a 30/70 split.

Prefab experiment analysis

Setting up Your Experiment Code

Congrats, now you're ready to use Prefab from any Docusaurus JSX page or component. Import the usePrefab hook and use it to get a value for your experiment.

import React from "react";
import Layout from "@theme/Layout";
import { usePrefab } from "@prefab-cloud/prefab-cloud-react";

export default function Hello() {
  const { isEnabled } = usePrefab();

  return (
    <Layout title="Hello" description="Hello React Page">
      {isEnabled("my-experiment") && (
        <div>
          <p>"Some experimental copy..."</p>
        </div>
      )}
    </Layout>
  );
}

tip

The usePrefab hook also provides a get function for accessing non-boolean feature flags.

Is it Fast?

The Prefab client loads feature flag data via our CDN to ensure minimal impact on your page load speed. It also caches flag data after the initial load. You can read more about the Prefab client architecture in our docs.

Will it Flicker?

There's a catch here, which is not specific to using Prefab. Since Docusaurus is a static site generator, it does not execute any server-side logic when pages are requested. There are more details in the Docusaurus static site generation docs.

This means that the page will first render the static version, with no access to cookies or to the Prefab flag data. Once your React code runs client-side, it will render again with the correct feature flag values from Prefab.

So in the example above, the page will initially load without your experiment content. Then it will pop in on the re-render. You'll have to make a judgement call on whether this negatively impacts the user experience, depending on where the experiment is on the page and how it affects the layout of other page elements.

The alternative is to render a loading state on the initial render, then display the actual content once the Prefab client has loaded.

const MyComponent = () => {
  const { get, loading } = usePrefab();

  if (loading) {
    return <MySpinnerComponent />;
  }

  switch (get("my-experiment")) {
    case "experiment-on":
      return <div>Render the experiment UI...</div>;
    case "control":
    default:
      return <div>Render the control UI...</div>;
  }
};

You can read a more in-depth discussion of handling loading states in the Prefab React client docs.

Configuring the Experiment in the Prefab Dashboard

I wrote a detailed walkthrough of creating flags in the Prefab UI in a previous blog post.

For a simple experiment with only a control and an experiment treatment, you'll want to create a boolean feature flag. The important part for making it an experiment is defining a rollout rule for targeting. Notice that we are setting user.key as the "sticky property". This means that Prefab will use the unique identifier we passed in for segmenting users into the two experiment variants.

Prefab experiment settings

· 5 min read
Jeffrey Chupp

So you've got a misbehaving function in your Node app, and you need to debug it. How can you get more logging? It would be great if you could add log lines to your function, but only output them when you need them so you don't create a bunch of noise & expense. You can do this with Prefab dynamic logging for Node.

Let's see how to enable logging for:

  • A single function
  • A particular user
  • For just 1 hour

The Code We Want To Debug

Here's a really basic skeleton of an Express app. It has a simple route that takes a user id from the url and returns some data from the database. Let's pretend it's misbehaving and we need to debug it.

We've added two console.log statements, but this probably isn't shippable as is because, at high throughput, we're going to print out way too much logging.

app.get("/users/:id", (req, res) => {
  const userId = req.params.id;

  var sql = "SELECT * FROM users WHERE id = $1";
  console.log(`running the following SQL ${sql}`, { userId: userId });

  db.run(sql, [userId], (err, rows) => {
    if (err) {
      // ...
    }

    console.log("query returned", { rows: rows });
    res.send(`200 Okey-dokey`);
  });
});

Add & Initialize Prefab

The first thing we're going to do is add Prefab. We'll use the standard NodeJS server side client. This gives us an SSE connection to Prefab's API out-of-the-box so we'll get instant updates when we change our log levels.

const { Prefab } = require("@prefab-cloud/prefab-cloud-node");

const prefab = new Prefab({
  apiKey: process.env.PREFAB_API_KEY,
  defaultLogLevel: "warn",
});

// ... later in our file
await prefab.init();

Swap Logging to Prefab

Rather than use a console.log, we will create a Prefab logger with the name express.example.app.users-path and the default level of warn so we don't get too much output.

We can replace our console.log with some logger.debug and logger.info and now it's safe to deploy. They won't emit logs until we turn them on.

const logger = prefab.logger("express.example.app.users-path", "warn");

// simple info logging
logger.info(`getting results for ${userId}`);

var sql = "SELECT * FROM table WHERE user_id = $1";

// more detailed debug logging
logger.debug(`running the following SQL ${sql} for ${userId}`);

db.run(sql, [userId], function (err, rows) {
  logger.debug("query returned", { rows: rows });
  res.send(`200 Okey-dokey`);
});

Listen for changes and Turn On Debugging in the UI

We can now toggle logging in the Prefab UI! Just choose express.example.app.users-path, change it to debug and a minute later you'll see the debug output in your logs.

Change the log level for this express route

Adding Per User Targeting

To add per user targeting, we need to set some context for Prefab so it can evaluate the rules. We should move the logger creation inside this context so that the logger knows about the user id.

// take the context from our url /users/123 and give it to prefab as context
const prefabContext = { user: { key: userId } };

// wrap our code in this context
prefab.inContext(prefabContext, (prefab) => {
  const logger = prefab.logger("express.example.app.users-path", "warn");

  logger.info(`getting results for ${userId}`);

  var sql = "SELECT * FROM table WHERE user_id = $1";

  // more detailed debug logging
  logger.debug(`running the following SQL ${sql} for ${userId}`);

  db.run(sql, [userId], function (err, rows) {
    logger.debug("query returned", { rows: rows });
    return res.send(`200 Okey-dokey`);
  });
});

We can now create the rules in the Prefab UI for just 1 hour and just user 1234. This will let us see the debug output for just that user and automatically stop debug logging after the hour is up.

Target express route logging to just a single user

That's It!

If we load the pages /users/1000, /users/1001 and /users/1234 we'll see the following output in our logs. We have INFO level logging for the first two, but DEBUG level logging for the last one because it matches our user.key rule.

INFO  express.example.app.users-path: getting results for 1000
INFO  express.example.app.users-path: getting results for 1001
INFO  express.example.app.users-path: getting results for 1234
DEBUG express.example.app.users-path: running the following SQL SELECT * FROM table WHERE user_id = $1 for 1234
DEBUG express.example.app.users-path: query returned { rows: [ { id: 1, user_id: 1234, account: active, balance: 340 } ] }

Full Code Example

const express = require("express");
const { Prefab } = require("@prefab-cloud/prefab-cloud-node");

const prefab = new Prefab({
  apiKey: process.env.PREFAB_API_KEY,
  defaultLogLevel: "warn",
});

const app = express();
const port = 3000;

// Mock database for the purposes of this example
const db = {
  run: (sql, params, callback) => {
    callback(null, []);
  },
};

const main = async () => {
  app.get("/users/:id", (req, res) => {
    const userId = req.params.id;
    // take the context from our url /users/123 and give it to prefab as context
    const prefabContext = { user: { key: userId } };

    // wrap our code in this context
    prefab.inContext(prefabContext, (prefab) => {
      const logger = prefab.logger("express.example.app.users-path", "warn");

      logger.info(`getting results for ${userId}`);

      var sql = "SELECT * FROM table WHERE user_id = $1";

      // more detailed debug logging
      logger.debug(`running the following SQL ${sql} for ${userId}`);

      db.run(sql, [userId], function (err, rows) {
        logger.debug("query returned", { rows: rows });
        return res.send(`200 Okey-dokey`);
      });
    });
  });

  await prefab.init();

  app.listen(port, () => {
    console.log(`Example app listening on port ${port}`);
  });
};

main();

To learn more about Prefab dynamic logging, check out the dynamic logging docs, or explore the other things you can do with Prefab in Node, like feature flags.

· 6 min read
Jeff Dwyer

So you've got a misbehaving Netlify function and you need to debug it. How can you get more logging? It would be great if we could add log lines to our function, but only output them when we need them so we don't create a bunch of noise & expense. We can do this with Prefab dynamic logging for Netlify.

In this post, we'll add dynamic logging to our Netlify function that will let us turn on debug logging for:

  • A single function
  • A particular user
  • For just 1 hour

The Code We Want To Debug

Here's a really basic skeleton of a Netlify function. It's a simple function that takes a user id from the url and returns some data from the database. Let's pretend it's misbehaving and we need to debug it.

We've added two console.log statements, but this probably isn't shippable as is because, at high throughput, we're going to print out way too much logging.


export default async (req, context) => {
  const { userId } = context.params;

  var sql = "SELECT * FROM table WHERE user_id = $1";
  console.log(`running the following SQL ${sql}`, { userId: userId });

  db.run(sql, [userId], function (err, rows) {
    console.log("query returned", { rows: rows });
    return new Response("200 Okey-dokey");
  });
};

export const config = {
  path: "/users/:userId"
};

Add & Initialize Prefab

The first thing we're going to do is add Prefab. We'll use the standard NodeJS server-side client, but we'll turn off the background processes. Since we're running on a lambda, we don't want any background processes in our function.

import { Prefab } from "@prefab-cloud/prefab-cloud-node";

var prefab = new Prefab({
  apiKey: process.env.PREFAB_API_KEY,
  enableSSE: false, // we don't want any background process in our function
  enablePolling: false, // we'll handle updates ourselves
  defaultLogLevel: "warn",
  collectLoggerCounts: false, // turn off background telemetry
  contextUploadMode: "none", // turn off background telemetry
  collectEvaluationSummaries: false, // turn off background telemetry
});
await prefab.init();

Swap Logging to Prefab

Rather than use a console.log, we will create a Prefab logger with the name netlify.functions.hello and the default level of warn so we don't get too much output.

We can replace our console.log with some logger.debug and logger.info, and now it's safe to deploy. They won't emit logs until we turn them on.

const logger = prefab.logger("netlify.functions.hello", "warn");

// simple info logging
logger.info(`getting results for ${userId}`);

var sql = "SELECT * FROM table WHERE user_id = $1";

// more detailed debug logging
logger.debug(`running the following SQL ${sql} for ${userId}`);
db.run(sql, [userId], function (err, rows) {
  logger.debug("query returned", { rows: rows });
  return new Response("200 Okey-dokey");
});

This logging will not show up in your Netlify logs yet, because the logger is warn but the logging here is info and debug. That means it's safe to go ahead and deploy.

Listen for changes and Turn On Debugging in the UI

Since we turned off background polling, we'll want to update Prefab inline. We can do this by calling updateIfStalerThan with our desired polling frequency. This is a quick check to a CDN (at most once every minute here), taking around 40ms.

prefab.updateIfStalerThan(60 * 1000); // check for new updates every minute

We can now toggle logging in the Prefab UI! Just choose the function, change it to debug, and a minute later, you'll see the debug output in your logs.

Change the log level for this netlify function

This is pretty cool and you can stop here if this solves your needs. With this pattern you'll be able to instantly turn logging on and off for any function in your app.

Adding Per User Targeting

Now we'll go deeper and add per-user targeting. This will let us laser-focus on a particular problem.

To add per user targeting, we need to tell Prefab who the current user is. We do this by setting some context for Prefab so it can evaluate the rules. We should also move the logger creation inside this context so that the logger has this context available to it.

// take the context from our url /users/123 and give it to prefab as context
const { userId } = context.params;
const prefabContext = { user: { key: userId } };

// wrap our code in this context
prefab.inContext(prefabContext, (prefab) => {
  // logger goes inside the context block
  const logger = prefab.logger("netlify.functions.hello", "warn");

  logger.info(`getting results for ${userId}`);

  var sql = "SELECT * FROM table WHERE user_id = $1";

  logger.debug(`running the following SQL ${sql} for ${userId}`);
  db.run(sql, [userId], function (err, rows) {
    logger.debug("query returned", { rows: rows });
    return new Response("200 Okey-dokey");
  });
});

We can now create the rules in the Prefab UI for just 1 hour and just user 1234. This will let us see the debug output for just that user and automatically stop debug logging after the hour is up.

Target netlify function logging to just a single user

That's It!

If we load the pages /users/1000, /users/1001, and /users/1234, we'll see the following output in our logs. We have INFO level logging for the first two, but DEBUG level logging for the last one because it matches our user.key rule.

INFO  netlify.functions.hello: getting results for 1000
INFO  netlify.functions.hello: getting results for 1001
INFO  netlify.functions.hello: getting results for 1234
DEBUG netlify.functions.hello: running the following SQL SELECT * FROM table WHERE user_id = $1 for 1234
DEBUG netlify.functions.hello: query returned { rows: [ { id: 1, user_id: 1234, account: active, balance: 340 } ] }

Full Code Example

import { Prefab } from "@prefab-cloud/prefab-cloud-node";

var prefab = new Prefab({
  apiKey: process.env.PREFAB_API_KEY,
  enableSSE: false, // we don't want any background process in our function
  enablePolling: false, // we'll handle updates ourselves
  defaultLogLevel: "warn",
  collectLoggerCounts: false, // turn off background telemetry
  contextUploadMode: "none", // turn off background telemetry
  collectEvaluationSummaries: false, // turn off background telemetry
});

export default async (req, context) => {
  prefab.updateIfStalerThan(60 * 1000); // check for new updates every minute

  // take the context from our url /users/123 and give it to prefab as context
  const { userId } = context.params;
  const prefabContext = { user: { key: userId } };

  prefab.inContext(prefabContext, (prefab) => {
    const logger = prefab.logger("netlify.functions.hello", "warn");

    logger.info(`getting results for ${userId}`);

    var sql = "SELECT * FROM table WHERE user_id = $1";

    logger.debug(`running the following SQL ${sql} for ${userId}`);
    db.run(sql, [userId], function (err, rows) {
      logger.debug("query returned", { rows: rows });
      return new Response("200 Okey-dokey");
    });
  });
};

export const config = {
  path: "/users/:userId"
};

To learn more about Prefab dynamic logging, check out the dynamic logging docs, or explore the other things you can do with Prefab in Netlify, like feature flags.

We set this up to target a particular user, but you can easily target anything else you provide in the context. Team ID, transaction ID, device ID, device type are all common examples.

Happy dynamic logging!

· 4 min read
Jeff Dwyer

Introduction

How should we integrate feature flags into Netlify functions? We'll explore why it's a bit tricky with lambdas, and I'll guide you through the best approaches to make it work efficiently.

The Lambda Challenge

Lambdas, like those in Netlify functions, are transient and don't run indefinitely. They're frozen after execution. This behavior poses a unique challenge for feature flags, which need to be swift and efficient and typically achieve this by using a background process to update the flag definitions.

Understanding Feature Flag Paradigms

Feature flags generally operate in two ways:

  1. Server-Side Flags: Here, your server connects to the flag server, downloads the necessary data, and performs local flag evaluations. This setup ensures no network calls during flag evaluations. Plus, we can manage telemetry asynchronously to avoid slowing down requests.

  2. Client-Side Flags: Common in web browsers, this approach involves making a network call to fetch flag values. For example, sending user data to an evaluation endpoint on page load, which returns the flag states. These endpoints need to be optimized for low latency, because they get called on every request.

Netlify Functions: A Middle Ground

Netlify functions are neither purely server-side nor client-side. They can't run background processes traditionally, but they are more persistent than a web browser so it would be nice to avoid network calls on every request. So, what's the best approach?

Feature Flags in Netlify: The Browser-Like Approach

A practical solution is to treat Netlify functions similar to a browser. Prefab's Javascript client, for instance, caches flag evaluations per user in a CDN. Here's a sample code snippet for this approach:

import { prefab, Context } from "@prefab-cloud/prefab-cloud-js";

export default async (req, context) => {
  const clientOptions = {
    apiKey: process.env.PREFAB_API_KEY,
    context: new Context({ user: { key: 1234 } }),
  };

  await prefab.init(clientOptions);

  if (prefab.get("my-flag")) {
    // Your code here
  }

  return new Response("ok");
};

In my testing from a Netlify function, I see around 50ms of latency initially and then around 10ms for each subsequent request with the same context. That may be too slow for some applications, but it's a good starting point and very easy to set up.

The nice thing about this solution is that you get instant updates when you change a flag. The next request will have up-to-date data.

The Server-Side Alternative

Alternatively, you can implement a server-side strategy using the Prefab NodeJS client. The key will be configuring our client to disable background updates and background telemetry, then performing an update on our own timeline.

Here's a sample code snippet for this approach:

import { Prefab } from "@prefab-cloud/prefab-cloud-node";

var prefab = new Prefab({
  apiKey: process.env.PREFAB_API_KEY,
  enableSSE: false, // we don't want any background process in our function
  enablePolling: false, // we'll handle updates ourselves
  collectLoggerCounts: false, // turn off background telemetry
  contextUploadMode: "none", // turn off background telemetry
  collectEvaluationSummaries: false, // turn off background telemetry
});

// initialize once on cold start
await prefab.init();

export default async (req, context) => {
  const { userId } = context.params;
  const prefabContext = { user: { key: userId } };

  return prefab.inContext(prefabContext, (prefab) => {
    if (prefab.get("my-flag")) {
      // Your code here
    }

    // every 60 seconds, check for updates in-process
    prefab.updateIfStalerThan(60 * 1000);
    return new Response("ok");
  });
};

export const config = { path: "/users/:userId" };

With this approach, most of our requests will be fast, but we'll have a periodic update that will take a bit longer. This is about 50ms in my testing from a Netlify function. We're entirely in control of the frequency here, so it's a judgment call on how real-time you want your feature flag updates. You could even disable the updates altogether if tail latency is of utmost concern and you didn't mind redeploying to update your flags.

Is there a better way?

The best way to solve this problem would be to use a Lambda Extension which could run a sidecar process to update the flags, then serve the flag data over localhost to your function. Unfortunately, Netlify doesn't support Lambda Extensions yet, but this is an exciting avenue to explore for other serverless platforms.

Conclusion

Deciding between a browser-like or server-side approach depends on your specific use case in Netlify functions. Both methods have their merits. The browser-like method offers simplicity and instant updates to feature flags, whereas the server-side approach gives a much better average response time at the cost of some tail latency and a configurable delay seeing flag changes. Choose what fits best for your application's architecture and performance requirements. Happy coding!

· 7 min read
Jeff Dwyer

We build configuration tooling here at Prefab, so it was a little embarrassing that our own local development configuration was a mess. We fixed it, we feel a lot better about it and we think you might dig it.

So What Was Wrong?

We used our own dynamic configuration for much of our config and that worked well, but when we needed environment variables everything started to fall apart. The pain points were:

Defaults In Multiple Places

Environment variables sound nice: "I'll just have my app be configurable from the outside." But in practice it can get messy. What are the default values? Do I need to specify defaults for everything? How do I share those defaults? When do I have fallback values, and when do I blow up?

We had ended up with defaults in:

Ruby code:

# puma.rb
ENV.fetch('WEB_CONCURRENCY') { 2 }

A .env.example file:

GCS_BUCKET_NAME=app-development-billing-csv-uploads

Other yaml configs like config/application.yml:

STRIPE_SECRET_KEY: sk_test_1234566

production:
  STRIPE_PRODUCT_BUSINESS_MONTHLY_2022_PRICE_ID: price_1234556

And in Terraform in another repo:

resource "kubernetes_config_map" "configs" {
  metadata {
    name = "configs"
  }

  data = {
    "redis.uri" = "${local.redis_base_uri}/1"
  }
}

Per Env Configuration All Over the Place

Beyond defaults, where do I put the environment-specific overrides? Are they all in my devops CD pipeline? That's kind of a pain. Where are the production overrides? Could be anywhere! We had them in each of:

  1. config/production.rb
  2. database.yml production: section
  3. config/application.yml production: section

Duplicated Defaults Across Repos

Because we have multiple services, we also had some of the defaults in ruby .env.example also showing up in our Java app in a src/main/resources/application-development.yml.

No Easy Way to Share Secrets / API Keys

As if all of ^ wasn't enough of a mess. Secrets had to have an entirely different flow. We were good about not committing anything to source control, but it was a pain to get the secrets to the right place and easy to forget how to do it.

Summary

We were surviving, but it wasn't fun. The understanding and context fell out of our heads quickly, meaning that whenever we needed to change something, we had to reload how everything worked into working memory, and it took longer than it needed to. For a longer rant on environment variables, check out 9 Things I Hate About Environment Variables.

What Would Be Better?

So, what would be better? We wanted:

  • A single place to look to see all of our configuration
  • Developers have a single API key to manage, no local env var mysteries
  • Defaults that are easy to override for local dev, but weren't footguns leading to Works On My Machine issues
  • Easy to share configuration between projects
  • Interoperability with our Terraform / IaaS / Kubernetes variables
  • A system that supports secrets as well as configuration
  • Telemetry on what values are actually being used in production for our IaaS / Terraform provided values

We had a ton of the infrastructure in place to support this from our dynamic configuration work, but when it came to environment variables we were still in the stone age.

Our Dream

Our dream looked like this: just a single API key and a callsite, like:

#.env
# One API Key per developer
PREFAB_API_KEY=123-Development-P456-E2-SDK-c12c561b-22c3-4a52-9c38-a8f24355c102

#database.yaml
default: &default
  database: <%= Prefab.get("postgres.db.name") %>

We wanted to be able to see all of our configuration in one place:

The Prefab Config UI for a config using normal strings as well as provided strings.

Prefab UI for database name

It's clear what the value is in every environment, and I can see which environments are getting the value from a Terraform-provided env var.

What We Did to Enable This

There were 3 big things we needed to support to make this happen: Environment variables, Datafiles & Secrets.

Provided ENV Vars as a Config Type

First we needed to allow a config value to be "provided by" an environment variable. You can now do that within the Prefab UI or CLI.

Set config to be provided by an ENV VAR in some environments

If you check the box for "Provide by ENV VAR" you can then specify the ENV VAR name for any environments that it should be provided in.

Datafile Support

Datafile support allows the Prefab client to start up using a single local file instead of reaching out to the Prefab API. This is useful for CI environments where you may want perfect reproducibility and no external network calls. You can generate a datafile for your local environment and then commit it to source control. This allows you to have a single source of truth for your configuration and secrets.

In our goal of having a "Single Source of Truth" for our configuration, the original system of default files like .prefab.default.config.yaml ended up being more of a hindrance than a help. There's a big difference between a UI that is all-knowing and a system that has partial knowledge that could be overridden by other files, re-introducing complexity into the system.

Making the API all-knowing is lovely, but if everything is in the API, what do we do for CI / Testing?

Our solution is to have 2 modes:

  1. Live mode.
  2. Datafile mode. Offline, load a single datafile.

The datafiles are easy to generate. You simply run prefab download --env test and it will download a datafile for the test environment. You can then commit that datafile to source control and use it in CI.

In CI environments you can then run PREFAB_DATAFILE=prefab.test.123.config.json and it will use that datafile instead of reaching out to the API.
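Putting those pieces together, the whole flow might look like this (the filename and test command are illustrative):

# locally: generate a datafile for the test environment and commit it
prefab download --env test
git add prefab.test.123.config.json

# in CI: run offline against the committed datafile
PREFAB_DATAFILE=prefab.test.123.config.json bundle exec rails test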

Secrets

The last big piece of this work was supporting secrets. If we were going to clean this all up once and for all, it just didn't work to still be on our own for secrets. I'll cover that in a future blog post, but if you're interested in our Secrets Management Beta please let us know. It's a zero-trust, CLI based solution that we think hits the nail on the head of being dead simple and easy to use.

Prefab Secret Management

What's Next?

We're really happy with how this turned out. Everything just feels... right. Configuration is important. Configuration goes in one place. It sounds like that should be easy, but in my experience, up until now, that's not the world many of us have been living in.

If you've been living in a monolith world deploying to Heroku, you've long been enjoying the simple pleasure of heroku config:set GITHUB_USERNAME=joesmith. But if you have more than one application, apps in different languages, or weren't deploying to something simple like Heroku, the story has been much worse.

What we've built has been a big improvement for us and we think it will be for you too. We're going to be rolling this out to all of our SDKs over the next few weeks. We'd love to hear what you think.

· 3 min read
Jeff Dwyer

We're thrilled to introduce our new Editor Tools for React developers!

As React developers, we cherish our time in the editor. However, dealing with Feature Flags typically meant stepping out of that space. We pondered, "What if we could manage everything directly from the editor?" The result is something we're really proud of.

Feature Flag Autocompletion

First off, we've integrated an autocomplete feature for feature flags. A wrongly typed feature flag can be a nuisance, especially since they default to false, leading to tricky debugging. Let your editor assist you. Enjoy autocomplete for flag and configuration names, and the ability to auto-create simple flags if they don't exist yet.

Feature Flag autocomplete

Feature Flag Evaluations Data on Hover

Implementing a feature flag is often straightforward. The real challenge is monitoring its status. Is it active? Can it be removed? What's its production value?

We envisioned how amazing it would be to integrate evaluation data directly into the editor. The result is indeed amazing! Now, you can get all the answers with a simple hover, without ever leaving your editor.

Feature Flag autocomplete

This lets you see if a flag is set to true in your staging or demo environment, or doing a percentage rollout in production.

Toggle Feature Flags

Don't leave your editor to toggle a feature flag. Simply click on the flag and set it to true.

Feature Flag autocomplete

Personal Overrides

Ever accidentally committed if true || flags.enabled?("myflag")? I've done it. It happens when I want to see a flag's effect but don't want to change it globally. So, I temporarily tweak it to true and then sometimes forget to revert it.

Feature Flag autocomplete

Wouldn't it be better to simply click on the flag and set it to true just for your local environment? This personal overrides feature, linked to your developer account, lets you do just that. Test your feature without disrupting others, all within your coding flow.

Summary

We're absolutely digging these tools internally and we're excited to expand upon them. We think the idea of being able to detect zombie or stale flags right in our editor would be very useful. We feel like we've taken a big step forward with the inline evaluation data on hover, but we're excited to keep pushing forward. We'd love to hear some of your ideas for how we can make these tools even better.

· 8 min read
Jeff Dwyer

In the world of software development, environment variables are how we configure our applications. The Twelve-Factor app methodology made this canonical and was a significant improvement over the terrible things we'd done before. However, I think we can do better.

Looking at the big picture, we've essentially created a system of global, untyped variables with no declarations and no defaults – a scenario that would be unacceptable in regular code. Yet, here we are, using this approach for one of the most critical aspects of our applications.

Specific Challenges with Environment Variables

1. Environment Variable Whack-a-Mole

How often have you cloned an app only to be greeted with a slew of errors due to missing environment variables? Start the app, it explodes, hunt down the value for the env var, start the app, explode on another env var, etc. I asked a friend how big a problem this was on a scale of 1-10. I think he spoke for us all when he said: "Mostly a 1 or 2. Yesterday, it was an 11."

  • Examples:
    • Api.get(key: ENV["THE_KEY"]) will lead us to frustrating mysterious 401 errors when it isn't defined
    • Api.get(key: ENV.fetch("THE_KEY")) will raise the error, but now we're mole-whacking.
  • Doesn't Dotenv Fix It?: Sometimes. Dotenv has been a huge improvement, but over time each developer's local .env starts straying from the common .env.example and we get a lot of "it works on my machine" issues. Oh... and it's got nothing for secrets.

2. Scattered Defaults

Env vars are big global variables, and there isn't even a clear answer to where we put the values. Most codebases end up with a mix of defaults in the ENV invocation, some in .env files or maybe a .env.production file. Possibly a config/staging.yaml. Maybe something from our continuous deployment. Some things in a kubernetes configmap. It's a mess.

  • Examples:
    • .env.production using dotenv for deployed envs.
    • config/default.yaml or config/production.yaml YAML configs.
    • config.x.swarm_count = ENV.fetch('SWARM_COUNT', 3) in-line defaults.
    • config.x.configure_sys = !Rails.env.test? this looks like a config value but isn't actually updateable.
  • Issue: Defaults are inconsistently spread throughout the codebase, creating a chaotic and confusing setup.

3. No Types & Unsafe Interpolation

Speaking of a chaotic mess, how much fun is it debugging an issue when your env var is a string but you're expecting a boolean? Or when you're expecting an array delimited on commas, but somebody left a space in it, and the env var isn't quoted someplace. Good times.

  • Examples:
    • config.x.use_seeds = ENV.fetch('USE_SEEDS', 'false') == 'true' (Potential boolean misinterpretation)
    • config.x.cors_origins = ENV.fetch('CORS_ORIGINS', '').split(',') (Complications with array parsing)
    • config.x.timeout_millis = ENV.fetch('TIMEOUT', '1') * 1000 (Potential for unit mismatches abound)
  • Issue: The lack of inherent type safety necessitates extra coding for handling data types, increasing the risk of errors.

4. What Value is it in Production?

How many times have you had to SSH into a production server to check the value of an environment variable? Or had to ask an ops person to do it for you? It's a pain, it's gross, it's a security risk. Environment variables: the really important configuration variables that you can't actually see or audit.

Partly, this is from scattered defaults, but mostly, this is from the complexity of the systems we've built to inject these variables and the lack of telemetry on their usage.

  • Issue: Assessing the environment variable values in production is cumbersome, requiring system access and specific commands.
  • Impact: This adds complexity to troubleshooting and configuration verification in live environments.

Why can't I use a CLI to see this? Why can't I just hover in my editor and see the configuration in each environment and the actual runtime values?

~/app (main)  $ prefab info
? Which item would you like to see? postgres.db.ip

- Default: 127.0.0.1
- Development: [inherit]
- Production: `POSTGRES_DB_IP` via ENV
- Staging: `POSTGRES_DB_IP` via ENV
- Test: [inherit]

Evaluations over the last 24 hours:

Production: 5
- 100% - 10.1.1.1

Staging: 2
- 100% - 10.11.12.13

Development: 25
- 100% - 127.0.0.1

No more ssh and printenv; I should just be able to do this from the comfort of home.

5. Refactoring Environment Variables is Terrible

Want to change an environment variable name? Good luck. Enjoy slacking everyone that they need to update their .env file in every repo.

Want to spin up a new application? Copy pasta the old .env around and let the duplication party begin.

Want to update the default across all your apps? Good luck.

  • Issue: Each .env is a massive duplication of our configuration, and this makes refactoring hard.
  • Impact: We get crufty code.

6. Cross Language Incompatibility

In truth, Rails has a decent story around all of this for a monolith. And various languages and frameworks have good approaches. But, what's that you say? You have a node app and a rails app? A Java app, too? And you'd like to... gasp... share a configuration value across them all? Sorry, mate, you're on your own.

  • Issue: Custom configuration libraries for each language create a lack of consistency and interoperability.
  • Impact: Lack of interoperability means cut-overs to the new redis.uri need to happen on a per-language basis and require understanding the configuration system (or systems) for each repository.

7. Question of Scale: How Many is Too Many?

How many environment variables is the right number? Ten or twenty is certainly fine. 100 sure feels like a lot and makes things ugly. 1000? More? No, thank you.

But... how many aspects of my system would I like to be configurable? Well, if you take off the shackles of having to jam everything into an env var, I suppose I'd actually like to configure lots of things. Should my http timeout be the same for every single request? Actually, I'd like to tune that at a fine-grained level. But I sure as heck am not going to do that if there is one env var per config. TIMEOUT_AUTH_SERVICE_FROM_BILLING_SERVICE=5000 is madness (see the sketch after this list).

  • Issue: The way environment variables work fundamentally encourages a small number of variables, which is at odds with the desire to have a highly configurable system.
  • Impact: We build systems and libraries without as many knobs and levers as we'd like, and this limits our options for real-time adjustments to production issues.
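As a sketch of the alternative, a dynamic configuration client lets a fine-grained knob exist without minting a new env var. The key name and default below are hypothetical:

// Hypothetical fine-grained key in a dynamic config system;
// no TIMEOUT_AUTH_SERVICE_FROM_BILLING_SERVICE env var required.
const timeoutMillis =
  prefab.get("billing-service.auth-service.http.timeout-millis") ?? 5000;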

8. Updates: Slow and Forgettable

How long does an update take? At most places, I'd expect an hour or two. Yes, that's crazy, but yes, that's the reality. Usually this is a ticket into your devops team, and then they have to go update the value in a configmap or something. (I will admit that if you're on Heroku this probably takes 1 minute. This is how it should be!)

Changing a variable should be instant, but we have these variables locked into a system that, for most of us, is slow to update.

  • Issue: Updating environment variables can be time-consuming, particularly in larger and more complex systems.
  • Impact: Slow MTTR when issues could be fixed by configuration changes.

9. Secrets Management Requires a Different System

Secrets are just configuration too, or they should be, albeit with more permissions and confidentiality. However, our code needs to know the values just like it would any other variable. Instead, almost all of us have to operate two totally separate tools/processes for managing secrets and configuration.

  • Issue: Managing sensitive data often requires a separate system from standard environment variables, adding to the complexity of configuration management.

Secrets and config should live next to each other

I should be able to see all my configuration in one place, secrets, too. Sure, secrets are confidential and should be encrypted, but that doesn't mean I shouldn't be able to understand that my applications are using them.

Conclusion

Environment variables have got us a long way, but we can do better, and indeed, lots of organizations have built sophisticated dynamic configuration systems that address all of these issues. The future just isn't evenly distributed. Or... hasn't been until now.

The key elements of a better system are:

  1. A single view of all of my configuration
  2. Typed values like: string, bool, duration, arrays, etc.
  3. Defaults that are easy to override for local dev
  4. Easy to share configuration between projects
  5. Telemetry on what values are actually being used in production
  6. Interoperability with Terraform / IaaS / Kubernetes / Existing Secrets Management
  7. A system that supports secrets as well as configuration

As I said, to my knowledge, the best examples of systems that support all this typically come from internal tools at large companies. HubSpot talks briefly about theirs in How we deploy 300 times a day. Amplitude covers the architecture decisions of theirs in Using DynamoDB for Dynamic Configuration, and Netflix's open-source Archaius has a lot of the underpinning pieces, though no help on the UI. And, of course, we have Prefab, which is our attempt to bring this to the world.

What's Next?

I think we're a fair way along this journey here at Prefab, and we're excited to share what we've learned and what we've built. I'd love you to check out our dynamic configuration and let me know what you think.

To a world of better config for all 🚀

· 16 min read
Jeff Dwyer

Lograge is one of the all-time great Rails gems and is one of those 3-4 gems like devise, timecop or rails_admin that I typically install on every new Rails project.

So, lograge is great, but are there better alternatives in 2024? I think so. We'll examine two options: an excellent free choice called Rails Semantic Logger and one I've built for the Prefab platform.

What Problem Do These Gems Solve?

First, let's take a quick look at what problem we're trying to solve. Here's the logging you get from Rails out of the box at the info level.

Started GET "/posts" for 127.0.0.1 at 2023-09-06 10:08:20 -0400
Processing by PostsController#index as HTML
Rendered posts/index.html.erb within layouts/application (Duration: 0.8ms | Allocations: 813)
Rendered layout layouts/application.html.erb (Duration: 19.3ms | Allocations: 34468)
Completed 200 OK in 30ms (Views: 21.1ms | ActiveRecord: 0.5ms | Allocations: 45960)

There are a number of annoyances with the Rails logging defaults:

  1. 5 lines of logging for a single request.
  2. "Completed 200 OK" doesn't tell us which request it was that completed.
  3. No timestamps on the log lines after the first one.

In contrast, lograge is clear, concise & grep-able.
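For reference, lograge's default key=value output condenses the request above into a single line, something like this (values illustrative):

method=GET path=/posts format=html controller=PostsController action=index status=200 duration=30.0 view=21.1 db=0.5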

(You can, of course, output in JSON as well; I'm not showing that because it's not as pleasant to look at.)

So this is the problem that lograge has been solving all these years, and it's a great fix.

More Than Just a Single Line

But, a single-line request summary is not the only interesting aspect of logging. What about debug logging? What about tagged and struct logging? Can any of these libraries help us fix our problems faster?

To exercise the other aspects of logging, here's the sample code we will run. It logs at all four levels and uses tagged and structured logging to show how to add context to your logs.

class PostsController < ApplicationController
  def index
    @posts = Post.all
    if can_use_struct_log?
      logger.debug "🍏🍏🍏detailed information", posts: @posts.size # struct logging
      logger.info "🍋🍋🍋informational logging", posts: @posts.size
    else
      logger.debug "🍏🍏🍏detailed information #{@posts.size}" # old-school
      logger.info "🍋🍋🍋informational logging #{@posts.size}"
    end
    @posts.first.process_post
  end
end

class Post < ApplicationRecord
  def process_post
    logger.tagged "process post" do
      logger.tagged "post-#{id}" do # nested tagged logging
        logger.debug "🍏🍏🍏details of the post"
        logger.info "🍋🍋🍋 info about the post"
      end
    end
  end
end

Comparing At INFO Level

Let's compare the output at the info level for each of the gems. It is a bit funky to see it on the web and not a wide terminal, but we'll do our best. Each of the gems provides a way to supply a custom formatter to tweak things like date, time, etc., but this is what you get out of the box. Also, you'll probably want to use JSON output in production. All the gems support that; see the JSON output comparison in the appendix, but it's pretty straightforward.

Started GET "/posts" for 127.0.0.1 at 2023-09-06 10:08:20 -0400
Processing by PostsController#index as HTML
🍋🍋🍋informational logging 1
[process post] [post-2] 🍋🍋🍋 info about the post
Rendered posts/index.html.erb within layouts/application (Duration: 0.8ms | Allocations: 813)
Rendered layout layouts/application.html.erb (Duration: 19.3ms | Allocations: 34468)
Completed 200 OK in 30ms (Views: 21.1ms | ActiveRecord: 0.5ms | Allocations: 45960)

We see that Rails, by default, has no support for structured logging, though it does support tagged logging. We're outputting logging 1 instead of logging posts=1 and in JSON we won't have a nice {"posts": 1} to index.

It's also not clear what class or file informational logging 1 is coming from. This is annoying because it makes us need to type more detail into the error message just to help us locate / grep.

We do see helpful logging about what templates were rendered and how long they took.

Comparing At DEBUG Level

For completeness, let's compare the output at the debug level for each of the gems.

Started GET "/posts" for 127.0.0.1 at 2023-09-06 10:01:55 -0400
Processing by PostsController#index as HTML
Post Count (0.0ms) SELECT COUNT(*) FROM "posts"
↳ app/controllers/posts_controller.rb:8:in `index'
🍏🍏🍏detailed information 1
CACHE Post Count (0.0ms) SELECT COUNT(*) FROM "posts"
↳ app/controllers/posts_controller.rb:9:in `index'
🍋🍋🍋informational logging 1
CACHE Post Count (0.0ms) SELECT COUNT(*) FROM "posts"
↳ app/controllers/posts_controller.rb:10:in `index'
CACHE Post Count (0.0ms) SELECT COUNT(*) FROM "posts"
↳ app/controllers/posts_controller.rb:11:in `index'
Post Exists? (0.1ms) SELECT 1 AS one FROM "posts" LIMIT ? [["LIMIT", 1]]
↳ app/controllers/posts_controller.rb:12:in `index'
Post Load (0.0ms) SELECT "posts".* FROM "posts" ORDER BY "posts"."id" ASC LIMIT ? [["LIMIT", 1]]
↳ app/controllers/posts_controller.rb:13:in `index'
[process post] [post-2] 🍏🍏🍏details of the post
[process post] [post-2] 🍋🍋🍋 info about the post
Rendering layout layouts/application.html.erb
Rendering posts/index.html.erb within layouts/application
Post Load (0.1ms) SELECT "posts".* FROM "posts"
↳ app/views/posts/index.html.erb:4
Rendered posts/index.html.erb within layouts/application (Duration: 0.6ms | Allocations: 736)
Rendered layout layouts/application.html.erb (Duration: 7.9ms | Allocations: 14135)
Completed 200 OK in 15ms (Views: 8.4ms | ActiveRecord: 0.6ms | Allocations: 20962)

Rails debug logging for local dev gets pretty chatty with a lot more detail. In addition to the templates and layouts we got at the info level, we can now see the executed SQL.

We also now get class and line number output for some of the logging, though not all of it. Our custom logging does not include any information about where it came from.

It's also a lot of output, and it's not particularly easy to grep for the important bits. As your app grows, this logging can start to get a bit overwhelming.

It may go without saying, but turning this on in production would be a very bad idea.

JSON Formatting

The JSON formatting of each of these libraries is pretty much what you would expect, but for completeness:

{
  "method": "GET",
  "path": "/posts",
  "format": "html",
  "controller": "PostsController",
  "action": "index",
  "status": 200,
  "allocations": 45961,
  "duration": 37.91,
  "view": 29.63,
  "db": 0.42
}

Summary

The state of Rails logging gems in 2024 is good, and you've got great options. If you've been defaulting to just standard Rails logging or lograge for years, it may be time to take a look at what else is out there. Being able to quickly trace each log line back to its source is valuable, and structured logging feels much better than string interpolation and can be very powerful in conjunction with your log aggregation tool.

Rails Default Logger

Good:

  • Rails gives us a lot of great information out of the box.

Bad:

  • 5 lines of logging for a single request is too much in deployed environments.
  • "Completed 200 OK" doesn't tell us what request completed.
  • No timestamps on the log lines after the first one.
  • Difficult to fine-tune debug logging. Very "all-or-nothing".

Lograge

Lograge has been the default choice for years. If you don't use much logging and just want the basic request log, it's a good choice.

For more information, check out the GitHub for lograge

Good

  • Better 1-line Req Log
  • Supports tagged logging

Bad

  • No support for struct log
  • Unclear display of level by default formatter
  • Difficult to fine-tune debug logging
  • Doesn’t format your other logs, like Rails.logger.info "foo"
  • No file names/origin of logging

rails_semantic_logger

Rails Semantic Logger is an impressive piece of software that has been around for a long time. It's got a lot of features and is very configurable. It has everything you could want in a logger, save the ability to update it dynamically.

For more information, check out the docs for rails_semantic_logger

Good:

  • Adds Class Name
  • Optionally add file and line number
  • Better 1-line Req Log
  • Very configurable
  • Structlog & Tagged Logging
  • logger.measure_trace("Low level trace information such as data sent over a socket") do ... end is cool
  • logger.error("Oops external call failed", exception) is cool

Bad:

  • Can't change log levels on the fly

prefab-cloud-ruby

Prefab is a SaaS that provides dynamic configuration and uses that to support feature flags and dynamic log levels. Prefab provides the same log cleanup and core features as Rails Semantic Logger, with the additional benefit of being able to quickly debug production by temporarily enabling debug logging for a specific user or job.

For more information, check out: Dynamic Log Levels

Good

  • Consistent display of class and method
  • Change log level instantly
  • Turn debugging logging on for just a single user or team
  • Better 1-line Req Log
  • Structlog & Tagged Logging

Bad

  • Not as comprehensive as rails_semantic_logger
  • Fewer integrations than rails_semantic_logger or lograge
  • Part of a paid product

Thanks for checking this out. If you have any questions, please reach out.

· 3 min read
Jeff Dwyer

We're super excited about our new Editor Tools! When Jeffrey first started hacking around with the LSP, we each had one of those whoa moments where you feel like you're seeing things in a whole new light.

I love being in my editor, but everything about Feature Flags has always required me to leave. We spent the past month asking "what would it be like to do it all from the editor?" and I love where we ended up.

Feature Flag Autocompletion

First up, the autocomplete feature for feature flags. A mistyped feature flag is a terrible thing and pretty annoying to debug, since flags default to false. Let your editor help with that: autocomplete for flag and config names, and auto-create simple flags if the flag doesn't exist yet.

Feature Flag autocomplete

Evaluations on Hover

Writing the feature flag is often the easy part. The real question comes later. Is this thing on? Can I delete it? What value is set in production?

We wondered how excellent it would be to bring our evaluation data right into the editor and our answer is... very excellent! No more leaving your editor to answer a simple question. A simple hover, and you’ve got all the info you need.

Feature Flag in Editor Evaluation Summaries

In this picture, you can see that the flag is true in staging and test, false in dev, and on a 66% rollout to true in production. And it looks like it's working, too: data over the past 24 hours shows that 67% of users are seeing the feature.

Personal Overrides

Grimace if you've ever committed if true || flags.enabled? "myflag" to version control. It's easy to do. You want to see what happens when a flag is enabled, but setting the flag is annoying or will change it for everyone, so you cheat and put a raw true in front of your flag and then forget to take it out.

What would be a better way? Could I just click on the flag and set it to true? Of course, it should only be true for me on my personal machine, so I don't screw anyone else up. That sounds nice, right?

Feature Flag in Editor Evaluation Summaries

This personal overrides feature is tied to your developer account, so no more global toggles causing chaos. Set your value, test your feature, all without leaving your coding groove.

Learning More About LSPs

We've been learning a ton about Language Server Protocols (LSPs) lately. Jeffrey's been on a roll, sharing his insights on creating LSPs with tutorials like writing an lsp in bash and lsp zero to completion.

We're not just building these tools for you; we're building them for us, too. We're genuinely jazzed about these new features and the difference they're making in our coding lives.

Give these new tools a spin and let us know what you think. We'd love to hear about what else you think would make the in-editor experience for feature flags brilliant. Don't use VSCode? Don't worry, we're working on the other editors too. Go here to get notified about new editor releases.

· 7 min read
Jeffrey Chupp

Implementing a language server is so easy that we're going to do it in Bash.

You really shouldn't write your language server in Bash, but we'll show that you could.

The minimum viable language server needs to

  1. receive JSON-RPC requests from the client
  2. respond with JSON-RPC to client requests

Like a good bash program, we'll talk over stdin and stdout.

Mad scientist holding a bash language server

JSON-RPC

JSON-RPC is simply a protocol for communication over JSON.

  • A request message includes an id, params, and the method to be invoked.
  • A response message includes the id from the request and either a result or error payload.
  • The LSP adds on the additional requirement of a header specifying the Content-Length of the message.

An example request might look like

Content-Length: 226\r\n
\r\n
{"jsonrpc":"2.0","method":"initialize","id":1,"params":{"trace":"off","processId":2729,"capabilities":[],"workspaceFolders":null,"rootUri":null,"rootPath":null,"clientInfo":{"version":"0.10.0-dev+Homebrew","name":"Neovim"}}}

An example response might look like

Content-Length: 114\r\n
\r\n
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "capabilities": {
      "completionProvider": {}
    }
  }
}

For our language server in bash, we'll write the following function:

respond() {
  local body="$1"
  local length=${#body}
  local response="Content-Length: $length\r\n\r\n$body"

  echo -e "$response"
}

This will take a JSON string as an argument and echo it out. The -e ensures our line breaks work as intended.
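
For example, calling it with a small payload produces the framed message the client expects:

respond '{"jsonrpc":"2.0","id":1,"result":{}}'
# prints:
# Content-Length: 36
#
# {"jsonrpc":"2.0","id":1,"result":{}}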

Listening for messages and parsing them

Our language server will listen for messages on stdin and write messages on stdout.

Let's name the bash script /tmp/bash-ls and chmod +x it.

I'll connect it to my editor, Neovim, using

vim.lsp.start {
  name = "Bash LS",
  cmd = { "/tmp/bash-ls" },
  capabilities = vim.lsp.protocol.make_client_capabilities(),
}

Now, we'll work on our read/print loop.

We'll start with the Bash classic

while IFS= read -r line; do

This gives us a value for $line that looks like Content-Length: 3386

The content length will vary based on the capabilities of your editor, but the gist here is that we need to read 3386 characters to get the entire JSON payload.

Let's extract the content length number

while IFS= read -r line; do
  # Capture the content-length header value
  [[ "$line" =~ ^Content-Length:\ ([0-9]+) ]]
  length="${BASH_REMATCH[1]}"

We need to add 2 to the number to account for the \r\n on the blank line that separates the headers from the JSON payload. So we'll length=$((length + 2))

Now we're ready to read the JSON payload:

while IFS= read -r line; do
  # Capture the content-length header value
  [[ "$line" =~ ^Content-Length:\ ([0-9]+) ]]
  length="${BASH_REMATCH[1]}"

  # account for the \r\n separating headers from the payload
  length=$((length + 2))

  # Read the message based on the Content-Length value
  json_payload=$(head -c "$length")

Remember that JSON-RPC requires us to include the id of the request message in our response. We could write some convoluted JSON parsing in bash to extract the id, but we'll lean on jq instead.

while IFS= read -r line; do
  # Capture the content-length header value
  [[ "$line" =~ ^Content-Length:\ ([0-9]+) ]]
  length="${BASH_REMATCH[1]}"

  # account for the \r\n separating headers from the payload
  length=$((length + 2))

  # Read the message based on the Content-Length value
  json_payload=$(head -c "$length")

  # We need -E here because jq fails on newline chars -- https://github.com/jqlang/jq/issues/1049
  id=$(echo -E "$json_payload" | jq -r '.id')

Now, we have everything we need to read and reply to our first message.

The initialize method

The first message sent by the client is the initialize method. It describes the client's capabilities to the server.

You can think of this message as saying, "Hey, language server, here are all the features I support!"

The server replies with, "Oh, hi, client. Given the things you support, here are the things I know how to handle."

Well, that's how it should work, anyway. For our MVP here, we'll provide a canned response with an empty capabilities section.

while IFS= read -r line; do
  # Capture the content-length header value
  [[ "$line" =~ ^Content-Length:\ ([0-9]+) ]]
  length="${BASH_REMATCH[1]}"

  # account for the \r\n separating headers from the payload
  length=$((length + 2))

  # Read the message based on the Content-Length value
  json_payload=$(head -c "$length")

  # We need -E here because jq fails on newline chars -- https://github.com/jqlang/jq/issues/1049
  id=$(echo -E "$json_payload" | jq -r '.id')
  method=$(echo -E "$json_payload" | jq -r '.method')

  case "$method" in
  'initialize')
    respond '{
      "jsonrpc": "2.0",
      "id": '"$id"',
      "result": {
        "capabilities": {}
      }
    }'
    ;;

  *) ;;
  esac
done

We pluck out the request's method and use a case statement to reply to the correct method. If we don't support the method, we don't respond to the client.

If we didn't use a case statement here and always replied with our canned message, we'd make it past initialization, but then the client would get confused when we respond to (e.g.) its request for text completions with an initialize result.

That's all you need for a minimum viable language server built in bash. It doesn't do anything besides the initialization handshake, but it works.
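
Before adding features, you can smoke-test the handshake without an editor by piping in a framed message by hand (this minimal payload is just for illustration):

printf 'Content-Length: 58\r\n\r\n{"jsonrpc":"2.0","method":"initialize","id":1,"params":{}}' | /tmp/bash-ls

If everything is wired up, the script answers with the canned initialize response, framed with its own Content-Length header.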

Adding completion

A language server that doesn't do anything is no fun, so let's teach it how to respond to textDocument/completion to offer text completions.

First, we'll need to modify our capabilities in our initialize response to indicate that we support completion:

          "result": {
"capabilities": {
"completionProvider": {}
}
}

We'll start with hardcoded results to verify things work. This is as easy as adding a new condition to our case statement.

'textDocument/completion')
  respond '{
    "jsonrpc": "2.0",
    "id": '"$id"',
    "result": {
      "isIncomplete": false,
      "items": [
        { "label": "a sample completion item" },
        { "label": "another sample completion item" }
      ]
    }
  }'
  ;;
Hardcoded completions

That works as we hoped. Let's jazz it up a little by completing the first 1000 words from the dict file on macOS (your path may differ).
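
That completion list gets pre-computed once at startup with jq. Here's the pipeline on its own so you can see what it produces (on Linux, the word list may live at a different path):

# Turn the first 1000 dictionary words into LSP CompletionItem objects
head -n 1000 /usr/share/dict/words |
  jq --raw-input --slurp 'split("\n")[:-1] | map({ label: . })'
# => [ { "label": "A" }, { "label": "a" }, { "label": "aa" }, ... ]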

Here's the final version of the script:

#!/bin/bash

respond() {
  local body="$1"
  local length=${#body}
  local response="Content-Length: $length\r\n\r\n$body"

  echo "$response" >>/tmp/out.log

  echo -e "$response"
}

# Pre-compute completion items from the first 1000 dictionary words
completions=$(head </usr/share/dict/words -n 1000 | jq --raw-input --slurp 'split("\n")[:-1] | map({ label: . })')

while IFS= read -r line; do
  # Capture the content-length header value
  [[ "$line" =~ ^Content-Length:\ ([0-9]+) ]]
  length="${BASH_REMATCH[1]}"

  # account for the \r\n separating headers from the payload
  length=$((length + 2))

  # Read the message based on the Content-Length value
  json_payload=$(head -c "$length")

  # We need -E here because jq fails on newline chars -- https://github.com/jqlang/jq/issues/1049
  id=$(echo -E "$json_payload" | jq -r '.id')
  method=$(echo -E "$json_payload" | jq -r '.method')

  case "$method" in
  'initialize')
    respond '{
      "jsonrpc": "2.0",
      "id": '"$id"',
      "result": {
        "capabilities": {
          "completionProvider": {}
        }
      }
    }'
    ;;

  'textDocument/completion')
    respond '{
      "jsonrpc": "2.0",
      "id": '"$id"',
      "result": {
        "isIncomplete": false,
        "items": '"$completions"'
      }
    }'
    ;;

  *) ;;
  esac
done
Dictionary completions

Lovely.

Closing

In 56 lines of bash, we've implemented a usable (if boring) language server.

I wouldn't advocate writing a serious language server in bash. Hopefully, this has illustrated how easy it is to get started with language servers and has made the LSP and JSON-RPC a little less magical.

What language would I recommend for writing a language server? That's probably a whole article in itself, but the short answer is

  1. All things being equal, choose TypeScript. The first-party libraries (e.g., vscode-languageserver-node) are written in TypeScript, and the community and ecosystem are excellent.
  2. If you don't want to use TypeScript, use whatever language you're most productive in. There's probably already a library for writing a language server in your preferred language, but if there isn't, you now know how easy it is to write it yourself.

If you'd like to be notified when I publish more LSP content, sign up for my newsletter.

· 5 min read
Andrew Yip

This post will walk you through setting up Prefab feature flags in your React app, and creating a flag for a gradual rollout of a new feature.

Creating a feature flag in the Prefab dashboard

If you don't have one already, start by creating a free Prefab account. Once you sign in, this is what you'll see

Prefab dashboard

To create your first feature flag, click on Flags in the left nav and you'll be brought to the Flags index page.

Prefab new feature flag page

Click Add Flag to create your first flag. We'll name it flag.my-new-feature and leave the type as bool, then click Save.

Prefab feature flag settings

Once you Save the flag name, you'll see the flag Variants and flag Rules. Since this is a boolean flag, the variants aren't editable, but for other flag types such as string you can name as many variants as you want.

Prefab feature flag rules

Now let's add a rule under the Development environment:

  1. Click Add Rule
  2. Change the variant of the new rule to true
  3. Enter user.email in the property field
  4. Change the logic operator to ends with one of
  5. Enter @prefab.cloud in the values field
  6. Click Save to save your new rule
info

Prefab creates the Development, Staging, and Production environments automatically, but you can edit, add, or delete environments to meet your needs.

Prefab feature flag rule

There's one more step. You'll need to click Publish Changes to make your new rule live.

Prefab feature flag publish

That's it! Your new flag is ready to use, but there's one more step before we can write some code.

Creating an API key

You need to create an API key before you can access Prefab from your code. API keys belong to environments, so click on Environments in the left nav.

Prefab environments page

Since we're using React, you'll want to create a Client key. You can learn more about client vs. server keys in our docs. Click Add Client API Key under the Development environment and make sure you write down the key. It will only be displayed once.

Setting up Prefab in your React app

Install the latest version

Use your favorite package manager to install @prefab-cloud/prefab-cloud-react npm | github

npm install @prefab-cloud/prefab-cloud-react

Initialize the client

Wrap your component tree in PrefabProvider. This will enable you to use our hooks when you want to evaluate a feature flag.

import { PrefabProvider } from "@prefab-cloud/prefab-cloud-react";

const WrappedApp = () => {
  const onError = (reason) => {
    console.error(reason);
  };

  return (
    <PrefabProvider apiKey={"YOUR_CLIENT_API_KEY"} onError={onError}>
      <MyApp />
    </PrefabProvider>
  );
};

Using hooks to evaluate flags

Now use the usePrefab hook to fetch flags. isEnabled is a convenience method for boolean flags.

import { usePrefab } from "@prefab-cloud/prefab-cloud-react";

const NewFeatureButton = () => {
  const { isEnabled } = usePrefab();

  if (isEnabled("flag.my-new-feature")) {
    return <button>Do the thing!</button>;
  }

  return null;
};
info

Prefab gives you a single source of truth for feature flags, so your backend and frontend code are always in sync about which variant to assign.

See the result

If we render the NewFeatureButton component, we see nothing. Uh oh, is our flag working? Well, if you remember the rules we set up, the default value of the flag is false. It will only be true if the user's email ends with @prefab.cloud, but so far we haven't told Prefab anything about the email address. This is where Context comes in.

Add context to match the Prefab rule

Context is a powerful mechanism that lets you supply metadata about users, teams, devices, or any other entity that's important. You can then use this data for rule targeting. Let's add some context. Usually you'll want to define context once when you set up PrefabProvider.

import { PrefabProvider } from "@prefab-cloud/prefab-cloud-react";

const WrappedApp = () => {
  const contextAttributes = {
    user: { email: "me@prefab.cloud" },
  };

  const onError = (reason) => {
    console.error(reason);
  };

  return (
    <PrefabProvider
      apiKey={"YOUR_CLIENT_API_KEY"}
      contextAttributes={contextAttributes}
      onError={onError}
    >
      <MyApp />
    </PrefabProvider>
  );
};

Now try rendering NewFeatureButton again. You should see our button!

Making a rollout for Production

Let's try one more thing. Suppose you want to gradually release this feature to your users. Prefab supports rollouts that will allocate users in a percentage split (also great for A/B testing!).

Go back to our feature flag in the Prefab dashboard, and click Edit Rules.

Prefab saved rule

Click on the Production tab, then click on the current variant (false) and change it to Rollout.

Prefab create rollout

Here you can set the percentage of users that should receive each variant. You also need to choose a sticky property: a context property that will be used to consistently assign a given user to a variant. In this case, we can enter user.email since it's unique and we're already providing it as context. However, if you have a user tracking ID, that is often the best choice.

Testing

What about unit testing? Prefab supplies a PrefabTestProvider component that you can use to force specific variants when setting up tests. You can learn more about testing in our docs.
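
Here's a rough sketch of a test using React Testing Library (the config prop shape and the testing-library setup are illustrative, and it assumes @testing-library/jest-dom matchers are set up; check the docs for the exact API):

import { render, screen } from "@testing-library/react";
import { PrefabTestProvider } from "@prefab-cloud/prefab-cloud-react";
import { NewFeatureButton } from "./NewFeatureButton";

test("shows the new feature button when the flag is on", () => {
  render(
    // Force the flag on for this test; no network requests are made
    <PrefabTestProvider config={{ "flag.my-new-feature": true }}>
      <NewFeatureButton />
    </PrefabTestProvider>,
  );

  expect(screen.getByRole("button")).toBeInTheDocument();
});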