Structured Config & AI Configs
Typesafe superpowers for your AI prompts


Typesafe Config
Prefab's Configuration is now typesafe.
Download your types:
prefab download --types --typescript (or --python)
and you can use them in your code.
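For example, here is a minimal sketch of what the generated types enable (the import path and config key are illustrative, not Prefab's actual output):
// Hypothetical import of the generated typesafe client
import { prefabTypesafe } from "./generated/prefab";

// The key and its return type come from your own config definitions
const timeout: number = prefabTypesafe.get("http.timeout");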

New Configuration Type: Template Strings
You can now create a config of type TemplateString.
For example, create a basic.mustache config with the value:
make up weather forecast in {{city}}
Download the types and you'll see that the basic.mustache template requires you to pass a city parameter to its compile method.
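In code, that looks something like this (a sketch, assuming the generated typesafe client shown above):
const template = prefabTypesafe.get("basic.mustache");

// The generated type requires city here; omitting it is a compile-time error.
const prompt = template.compile({ city: "Boston" });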

Use Advanced Mustache Logic
Use advanced mustache logic to iterate over arrays or conditionally include parts of the prompt.
{{#role}}
You are a helpful AI with role: "{{role}}".
{{/role}}
{{^role}}
You are a helpful AI with an unknown role.
{{/role}}
You need to greet the following users:
{{#users}}
- Name: {{name}}, Language: {{language}}
{{/users}}
Finally, you must speak with a {{accent}} accent.
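Compiling the template above stays typesafe. Here's a sketch with illustrative values (the config key greeting.mustache is hypothetical):
const greeting = prefabTypesafe.get("greeting.mustache");

const compiled = greeting.compile({
  role: "travel guide", // drives the {{#role}} / {{^role}} sections
  users: [
    { name: "Ada", language: "English" },
    { name: "Luis", language: "Spanish" },
  ],
  accent: "Irish",
});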

Create Your Own Custom Types
Build up your own custom types based on the Prefab primitive types: boolean, int, float, string, duration, TemplateString.
"custom-http-config-type": z.object({
url: z.string(),
timeout: z.number().optional(),
retries: z.number().optional()
})
Create an instance auth.url of type custom-http-config-type, and the UI for editing your custom type will let you edit each part of the object separately and validate the object as a whole.
Download the types, and Prefab.get("auth.url") will return an object of type custom-http-config-type.
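In TypeScript, the generated typing might look like this in use (a sketch):
const httpConfig = prefabTypesafe.get("auth.url");

httpConfig.url;     // string
httpConfig.timeout; // number | undefined
httpConfig.retries; // number | undefined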

Combine All These Features for Powerful, Composable AI Configs
If we create a custom config type called llm-config with:
- A model field of type z.enum(["gpt-3.5-turbo", "gpt-4o", "gpt-4o-mini"])
- A systemMessage field of type TemplateString
- A temperature field of type number
we can create an instance of it (say, llm-prompt) and use it in an LLM call.
import OpenAI from "openai";

const openai = new OpenAI();
const prompt = prefabTypesafe.get("llm-prompt");

await openai.chat.completions.create({
  model: prompt.model,
  temperature: prompt.temperature,
  messages: [
    {
      role: "system",
      content: prompt.systemMessage.compile({ city: "Boston" }),
    },
  ],
});
Or make it your own, with an array of messages that each include a role and a content field, as sketched below.
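Here is what such a messages-based schema could look like (the key llm-chat-prompt and its shape are illustrative):
"llm-chat-prompt": z.object({
  model: z.enum(["gpt-3.5-turbo", "gpt-4o", "gpt-4o-mini"]),
  temperature: z.number().optional(),
  messages: z.array(z.object({
    role: z.enum(["system", "user", "assistant"]),
    content: MustacheString(z.object({ city: z.string() })),
  })),
})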
V2: Include Custom Types within Custom Types
Our UI lets you edit nested structured JSON.
In V1, we support any custom schema that relies on standard types like string, number, boolean, array, TemplateString.
The next big step is to let you nest your custom types.
This will let us build an LLMToolSchema object and then include that in our LLMConfig object.
The UI will let you pick from the configs you've defined that are of type LLMToolSchema.
Now Prefab.get("my-tool-using-prompt").tools will return a list of tools.
"llm-prompt-with-tools": z.object({
model: z.enum(["gpt-3.5-turbo", "gpt-4o", "gpt-4o-mini"]),
temperature: z.number().optional(),
max_tokens: z.number().optional(),
top_p: z.number().optional(),
systemMessage: z.object({
role: z.enum(["system", "user", "assistant"]),
content: MustacheString(z.object({
city: z.string()
}))
}),
tools: z.array(LLMToolSchema)
})
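LLMToolSchema isn't defined above; a minimal sketch of what such a nested custom type could look like:
const LLMToolSchema = z.object({
  name: z.string(),
  description: z.string(),
  // JSON-Schema-style parameter definition, kept loose for illustration
  parameters: z.record(z.unknown()),
});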

V2: Template Partials
You are a helpful AI assistant.
{{> customer-postfix-template}}
By default, customer-postfix-template will be empty, but if Lumon is the customer, we can append "always praise Kier Eagan" to our prompt.
Partial Template Typesafety
Typesafety is preserved with partial templates. The Prefab download command will aggregate the types for all your templates.
export interface LLMPrefixTemplate extends TemplateString<{ accent: string }> {}

export interface MyLLMPromptSystemMessage extends TemplateString<{
  accent: string;
  city: string;
  name: string;
}> {}

allowing:
const myPrompt = getPromptConfig("my-llm-prompt");

// Strongly typed placeholders for the systemMessage:
const compiledSystemMsg = myPrompt.systemMessage.compile({
  city: "Boston",
  name: "Bob",
  accent: "Irish", // because "my-llm-prompt" references "llm-prefix"
});

console.log(compiledSystemMsg);
/**
 * Might compile to something like:
 * "you are an AI agent that is never rude and always speaks with a Irish accent.
 * tell Bob a made up weather report in Boston"
 */
V3: Git Integration
This is the big follow-up step.
By pulling all of our configs down into a git repo, we can treat prompts like code.
This means you don't need to prefab download your types; you can git pull the submodule.
You can also edit the prompts, tools, and everything else right in your editor, then git push.
This enables Cursor to help you write your prompts.
Really Great Audit Logs
Image coming soon
Target Individuals or Organizations with Prompts
Image coming soon
Roll Out New Prompts
Image coming soon