Dynamic Config for OpenAI & Python
OpenAI's GPT models are great. However, as with any powerful tool, there are many settings to tweak to get the desired output. Parameters like `temperature`, `top_p`, `model`, and `frequency_penalty` can significantly influence the results. Changing these settings usually requires a code change and a redeployment, which is cumbersome in a production environment.
Feel like watching instead of reading? Check out the video instead.
Prefab is a dynamic configuration service that lets you modify these settings on the fly without redeploying your application. By the end of this post, we'll have a working example of an OpenAI storyteller that can be reconfigured without a restart.
How Can I Change OpenAI Parameters Instantly?
We won't cover the full Python documentation here, but you'll need the `openai` and `prefab_cloud_python` libraries installed. If you haven't already, you can install them using pip:

```shell
pip install openai prefab-cloud-python
```
Next, set up your OpenAI API key and initialize the Prefab client:
```python
import openai
import os
from prefab_cloud_python import Options, Client

openai.api_key = os.getenv("OPENAI_API_KEY")
prefab = Client(Options())
```
Prefab will look for an API key in the env var `PREFAB_API_KEY`. Get one with a free signup. With the Prefab client initialized, you can now fetch dynamic configurations for your OpenAI API calls. Here's an example of how you can use Prefab to dynamically set the parameters for a chat completion:
Before:
```python
def response(name):
    prompt = f"You are a storyteller in the style of Charles Dickens, tell a story about {name}"
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": prompt
            }
        ],
        temperature=0.5,
        max_tokens=1024,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
    return response
```
After:
```python
def response(name):
    prompt = f"You are a storyteller in the style of {prefab.get('storyteller')}, tell a story about {name}"
    response = openai.ChatCompletion.create(
        model=prefab.get("openai.model"),
        messages=[
            {
                "role": "system",
                "content": prompt
            }
        ],
        temperature=prefab.get("openai.temperature"),
        max_tokens=prefab.get("openai.max-tokens"),
        top_p=prefab.get("openai.top-p"),
        frequency_penalty=prefab.get("openai.frequency-penalty"),
        presence_penalty=prefab.get("openai.presence-penalty")
    )
    return response
```
To make this work, we'll need to set up the following configs in the Prefab UI.
In the above code, instead of hardcoding the values for parameters like `model`, `temperature`, and `top_p`, we're fetching them dynamically from Prefab using the `prefab.get()` method.
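One practical concern with dynamic lookups is what happens when a key hasn't been created yet. A common pattern is to keep local fallback defaults. Here's a minimal sketch of that idea using a plain dict as a stand-in for the config service; the `DEFAULTS` dict and `get_config` helper are illustrative, not part of the Prefab SDK:

```python
# Local fallbacks used when a key is missing from the remote config store.
DEFAULTS = {
    "openai.model": "gpt-4",
    "openai.temperature": 0.5,
    "openai.max-tokens": 1024,
}

def get_config(store, key):
    """Return the remote value if present, else the local default."""
    value = store.get(key)
    return value if value is not None else DEFAULTS.get(key)

remote = {"openai.temperature": 0.9}  # simulates values set in a config UI
print(get_config(remote, "openai.temperature"))  # 0.9 (overridden remotely)
print(get_config(remote, "openai.model"))        # gpt-4 (local fallback)
```

This way a missing or not-yet-published key degrades gracefully instead of failing the request.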
Instant Updates
Changes we make in the UI are reflected in our application instantly, without a redeploy. See here as we change the storyteller from Stephen King to Charles Dickens: our ChatGPT prompt updates without any code changes or restarts.
Targeting
Prefab also supports powerful targeting capabilities, like you might expect from feature flags. If you pass context to the client, you could choose a different storyteller based on a user's preference.
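Conceptually, targeting is a set of rules evaluated against the context you pass in. Here's a self-contained sketch of that idea; the context keys and rules are made up for illustration and would actually be defined in the Prefab UI, not in code:

```python
# Illustrative targeting: pick a config value based on user context.
# The rules below mimic what you'd configure in a UI; they are hypothetical.
def storyteller_for(context):
    rules = [
        (lambda ctx: ctx.get("user.plan") == "premium", "Stephen King"),
        (lambda ctx: ctx.get("user.locale") == "en-GB", "Charles Dickens"),
    ]
    for predicate, value in rules:
        if predicate(context):
            return value
    return "Charles Dickens"  # default when no rule matches

print(storyteller_for({"user.plan": "premium"}))  # Stephen King
print(storyteller_for({}))                        # Charles Dickens
```

The key point is that the rules live in config, so changing who sees what requires no code change.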
Debugging
Prefab also provides a robust logging system that allows you to see exactly what's happening with your configurations. You can change log levels on the fly for any class or method in your application. This is especially useful when you're trying to debug a specific issue.
Add the following to your code to enable logging:
```python
def response(name):
    prefab.logger.info(f"processing request for {name}")
    response = openai.ChatCompletion.create(
        ...
    )
    prefab.logger.debug(response)
    return response
```
Then you can change the log level for your application in the Prefab UI. Change it to `debug` to see both of the log statements we added. The logging output will be displayed in the console or in your existing logging aggregator.
```
2023-09-26 13:41:30 [info ] processing request for bob location=pycharmprojects.openaiexample.main.response
2023-09-26 13:41:38 [debug ] {
  "id": "chatcmpl-836LrWRRLnwVPGYR5440RnyGpoKWu",
  ...
  "usage": {
    "prompt_tokens": 27,
    "completion_tokens": 200,
    "total_tokens": 227
  }
} location=pycharmprojects.openaiexample.main.response
```
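Under the hood, dynamic log levels boil down to a simple threshold check: a record is emitted only if its level meets the currently configured minimum. A minimal sketch of that mechanism (not the Prefab SDK's actual implementation):

```python
# Numeric severities, as in Python's standard logging module.
LEVELS = {"debug": 10, "info": 20, "warn": 30, "error": 40}

def should_log(configured_level, message_level):
    """Emit a record only if its severity meets the configured threshold."""
    return LEVELS[message_level] >= LEVELS[configured_level]

print(should_log("info", "debug"))   # False: debug records are filtered at info
print(should_log("debug", "debug"))  # True: lowering the threshold reveals them
```

Because the threshold is just a config value, flipping it in the UI changes which records appear without touching the running process's code.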
Conclusion
Dynamic configuration with Prefab offers a seamless way to tweak and optimize your OpenAI projects without the hassle of constant redeployments. Prefab provides you with:
- Flexibility: Easily experiment with different settings to find the optimal configuration for your use case.
- Quick Iteration: No need to redeploy your application every time you make a change.
- Centralized Management: Manage all your configurations from a single place.
Whether you're running experiments or need the flexibility to adjust settings on the fly in a production environment, Prefab provides a robust solution. So, the next time you find yourself redeploying your app just to change a single parameter, consider giving Prefab a try!