AIConfig Editor
AIConfig Editor is a locally hosted playground where you can visually create and edit prompts stored as AIConfig JSON files. The editor is model-agnostic, multimodal, and extensible by design - it can support any generative AI model with text, image, and audio modalities. You can quickly move from prototype to production using the AIConfig generated in AIConfig Editor: the AIConfig SDK lets you execute the prompts and model parameters from the AIConfig in your application code.
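For example, here is a minimal sketch of running a prompt from an AIConfig file with the Python SDK. The file name and prompt name are placeholders, and the exact API surface may vary slightly between SDK versions:

```python
import asyncio
from aiconfig import AIConfigRuntime, InferenceOptions

async def main():
    # Load the AIConfig produced by AIConfig Editor (placeholder file name)
    config = AIConfigRuntime.load("my_aiconfig.aiconfig.json")

    # Run a prompt cell by name and stream the model response (placeholder prompt name)
    result = await config.run("my_prompt", options=InferenceOptions(stream=True))
    print(result)

asyncio.run(main())
```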
This guide covers the core features of AIConfig Editor:
- Set up the Editor
- Open the AIConfig Editor
- Edit and save AIConfigs
- Run prompts
- Chain prompts
- Create prompt templates
- Add custom model parsers
- Telemetry
- FAQ
Want to get started quickly? Check out our Getting Started Tutorial.
Set Up
- Install the AIConfig Python package to use AIConfig Editor.

  pip:
  $ pip3 install python-aiconfig

  poetry:
  $ poetry add python-aiconfig

  You need to install the Python AIConfig package to create and edit your configs using AIConfig Editor. You can still use the AIConfig Node SDK to interact with your config in your application code.
- Set up the API keys required by the model providers.

You will need to specify API keys for the model providers (e.g. OpenAI, Google, Hugging Face) you plan to use. We recommend adding your API keys as environment variables so that they are accessible across all projects. The Python library will automatically detect and use them without you having to write any code.
Example: Set up your OpenAI API key as an environment variable (macOS / Linux / Windows; the steps below are for macOS and Linux shells)

- Get your OpenAI API key: https://platform.openai.com/account/api-keys
- Open Terminal.
- Edit your shell profile: Use the command nano ~/.bash_profile or nano ~/.zshrc (for newer macOS versions) to open the profile file in a text editor.
- Add the environment variable: In the editor, add the line below, replacing your-api-key-here with your actual API key:

  export OPENAI_API_KEY='your-api-key-here'

- [Optional] Add environment variables for your other model providers (Google, Hugging Face, Anyscale, etc.).
- Save and exit: Press Ctrl+O followed by ENTER to write the change, then Ctrl+X to close the editor.
- Load your profile: Use the command source ~/.bash_profile or source ~/.zshrc to load the updated profile.
- Verify: Type echo $OPENAI_API_KEY in the terminal; it should display your API key (see the optional Python check after this list).
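If you also want to confirm from Python that the key is visible to the library, a quick standard-library check is enough; no AIConfig-specific code is required, since the package reads the variable for you:

```python
import os

# Prints your key once the profile has been reloaded; prints a warning otherwise
print(os.environ.get("OPENAI_API_KEY", "OPENAI_API_KEY is not set"))
```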
Open AIConfig Editor
You can open the AIConfig Editor from your terminal to start prompting against models (saved as AIConfigs).
- Open Terminal
- Run this command:
aiconfig edit
This will open AIConfig Editor in your default browser and, in parallel, create a new empty AIConfig JSON file in your current directory. Your work in the editor is saved by default to my_aiconfig.aiconfig.json. Run aiconfig edit --aiconfig-path={file_path_name} instead if you want to save to a specific file path.
We also support YAML in addition to JSON for the AIConfig file format.
To get started, here's an example prompt chain created in AIConfig Editor, which is saved to a corresponding AIConfig JSON file. See the Getting Started Tutorial for details.
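As a rough illustration of the file format, a small AIConfig with a single prompt might look like the sketch below. The names, model, and settings shown are placeholders, and the exact schema is an approximation; defer to the files generated by the editor:

```json
{
  "name": "nyc_trip_planner",
  "schema_version": "latest",
  "metadata": {
    "parameters": {},
    "models": {}
  },
  "prompts": [
    {
      "name": "get_activities",
      "input": "Tell me 10 fun attractions to do in NYC.",
      "metadata": {
        "model": {
          "name": "gpt-3.5-turbo",
          "settings": { "temperature": 0.7 }
        },
        "parameters": {}
      }
    }
  ]
}
```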
Edit and save AIConfigs
If you already have an AIConfig JSON file, you can use the AIConfig Editor to visually edit the prompts and model parameters.
Open Terminal and run this command:
aiconfig edit --aiconfig-path={file_path_existing_aiconfig}
A new tab with the AIConfig Editor opens in your default browser with the prompts, chaining logic, and settings from the specified AIConfig populated in the editor. If the file path doesn’t exist, a new AIConfig will be created at that path and the editor will be blank.
Saving
Your edits in AIConfig Editor will auto-save and update the AIConfig file every 15 seconds. There is also a Save button to manually save changes to your AIConfig.
Run Prompts
Each cell in AIConfig Editor is used to prompt generative AI models and output responses. Editor cell features:
| Feature | Description |
|---|---|
| Prompt Name | The name of the prompt cell, which can be referenced in other cells for chaining. |
| Model | The model you are prompting in this cell. Use the dropdown to see the models available by default in AIConfig Editor. |
| Settings | The settings and parameters specific to the model (e.g. system prompt, temperature). These settings vary depending on the model selected. |
| Local Parameters | Parameters (variables) that you set for use in the prompt via handlebars syntax. Local parameters are local to the cell and cannot be accessed in other cells. |
Click ▶️ at the right of the cell to execute the prompt and see the model response.
The outputs are saved to the AIConfig file by default.
Chain Prompts
You can chain your prompts via the cell reference names and handlebars syntax. For example, you can have a cell that uses GPT-4 to generate a haiku, and a GPT-3.5 cell that translates the message into a different language.
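When you run such a chain through the Python SDK, you can ask it to execute the upstream cells first. A hedged sketch, assuming cells named gen_haiku and translate_haiku (placeholder names) and assuming the run_with_dependencies flag is available in your SDK version:

```python
import asyncio
from aiconfig import AIConfigRuntime

async def main():
    config = AIConfigRuntime.load("my_aiconfig.aiconfig.json")

    # Runs the upstream gen_haiku cell first, then translate_haiku,
    # whose prompt text references gen_haiku's output via handlebars.
    result = await config.run("translate_haiku", run_with_dependencies=True)
    print(result)

asyncio.run(main())
```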
Create Prompt Templates
Prompt templates allow you to scale your prompts to different data inputs without needing to constantly modify the prompt itself. To do this in AIConfig Editor, parameters are used to pass in data to prompts. You can set both global and local parameters. Global Parameters can be used across all prompts defined in the editor whereas Local Parameters can only be used in the prompt cell they are defined for.
Global Parameters
You can set global parameters to be used across all cells in the editor. Click Global Parameters at the top of the editor to expand the form and enter your global parameters.
Local Parameters
You can set local parameters to be used in specific cells in the editor. In the cell, expand the right pane and select Local Parameters.
Local parameters will override the global parameters if they have the same name.
Creating Prompt Templates
Prompt templates are created using handlebars syntax for the parameters. Here is an example where {{language}} is defined as a global parameter. You can easily change the values of the parameter while keeping the prompt template the same.
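When running a prompt template from code, parameter values are supplied at run time rather than baked into the prompt. A brief sketch, assuming a cell named translate that uses the {{language}} parameter (both names are placeholders):

```python
import asyncio
from aiconfig import AIConfigRuntime

async def main():
    config = AIConfigRuntime.load("my_aiconfig.aiconfig.json")

    # Provide a value for the {{language}} parameter without editing the template
    result = await config.run("translate", params={"language": "Spanish"})
    print(result)

asyncio.run(main())
```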
Add Custom Model Parsers
The AIConfig Editor is highly customizable and allows for custom models to be integrated into the editor. Check out our Gradio cookbook to see an example of integrating other cool model parsers like:
- text-to-image
- text-to-audio
- image-to-text
- audio-to-text
- text-summarization
- and much more!
Telemetry
AIConfig Editor collects telemetry data that helps us understand how to improve the product, debug issues, and prioritize new features.
Disabling telemetry
If you don't want to send any telemetry back to us to help the team build a better product, you can set allow_usage_data_sharing to false in the $HOME/.aiconfigrc configuration file.
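For example, assuming the file uses YAML-style key/value entries (create the file if it does not already exist):

```yaml
# $HOME/.aiconfigrc
allow_usage_data_sharing: false
```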
More Resources
Check out these resources to learn how to use the AIConfig created with AIConfig Editor in your application code.
Coming Soon
- Support for non-default models in Editor. AIConfig Editor currently supports the default models available with AIConfig - see here. We will soon be adding support for non-default models via model parser extensions.
FAQ
What are the Environment Variables for different model providers?
| Environment Variable Name | Description | Link |
|---|---|---|
| OPENAI_API_KEY | API key for OpenAI models | OpenAI API Keys |
| GOOGLE_API_KEY | API key for Google Gemini and PaLM models | Google API Keys |
| HUGGING_FACE_API_TOKEN | API token for models running on Hugging Face inference | Hugging Face User access tokens |
| ANYSCALE_ENDPOINT_API_KEY | API key for models hosted on Anyscale endpoints | Anyscale API Keys |
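For a shell-based setup, the same approach shown earlier for OpenAI applies to the other providers; for example, in your ~/.bash_profile or ~/.zshrc (replace the placeholder values with your real keys):

```bash
export OPENAI_API_KEY='your-openai-key-here'
export GOOGLE_API_KEY='your-google-key-here'
export HUGGING_FACE_API_TOKEN='your-hugging-face-token-here'
export ANYSCALE_ENDPOINT_API_KEY='your-anyscale-key-here'
```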