
AIConfig Editor

AIConfig Editor is a locally hosted playground where you can visually create and edit prompts stored as AIConfig JSON files. The editor is model-agnostic, multimodal, and extensible by design - it can support any generative AI model with text, image, and audio modalities. You can quickly transition from prototype to production using the AIConfig generated from AIConfig Editor. The AIConfig SDK enables you to execute the prompts and model parameters from the AIConfig in your application code.

This guide covers the core features of AIConfig Editor and demonstrates how to set it up, open it, edit and save AIConfigs, run and chain prompts, create prompt templates, and add custom model parsers.

Want to get started quickly? Check out our Getting Started Tutorial.


Set Up

  1. Install the AIConfig Python package to use AIConfig Editor.
$ pip3 install python-aiconfig
note

You need to install the Python AIConfig package to create and edit your configs using AIConfig Editor. You can still use the AIConfig Node SDK to interact with your config in your application code.

  2. Set up the API keys required by the model providers.

You will need to specify API keys for the model providers (e.g. OpenAI, Google, Hugging Face) you plan to use. We recommend adding your API keys as environment variables so that they are accessible across all projects. The Python library will automatically detect and use them without you having to write any code.

Example: Set up your OpenAI API key as an environment variable (macOS / Linux)
  1. Get your OpenAI API Key: https://platform.openai.com/account/api-keys
  2. Open Terminal
  3. Edit Bash Profile: Use the command nano ~/.bash_profile or nano ~/.zshrc (for newer macOS versions) to open the profile file in a text editor.
  4. Add Environment Variable: In the editor, add the line below, replacing your-api-key-here with your actual API key:
export OPENAI_API_KEY='your-api-key-here'
  5. [Optional] Add environment variables for your other model providers (Google, HuggingFace, Anyscale, etc.).
  6. Save and Exit: Press Ctrl+O followed by ENTER to write the change. Then Ctrl+X to close the editor.
  7. Load Your Profile: Use the command source ~/.bash_profile or source ~/.zshrc to load the updated profile.
  8. Verification: Verify the setup by typing echo $OPENAI_API_KEY in the terminal. It should display your API key.
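As an additional cross-check from Python (the AIConfig library reads the same environment variable), a minimal sketch:

import os

# Should print True once OPENAI_API_KEY is exported in your shell profile
print(os.getenv("OPENAI_API_KEY") is not None)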

Open AIConfig Editor

You can open the AIConfig Editor from your terminal to start prompting against models (saved as AIConfigs).

  1. Open Terminal
  2. Run this command: aiconfig edit

This will open AIConfig Editor in your default browser and, in parallel, create a new empty AIConfig JSON file in your current directory. Your work in the editor is saved by default to my_aiconfig.aiconfig.json. Run aiconfig edit --aiconfig-path={file_path_name} instead if you want to save to a specific file path.

note

We also support YAML in addition to JSON for the AIConfig file format.

To get started, here’s an example prompt chain created in the AIConfig Editor and the corresponding AIConfig. See Getting Started Tutorial for details.


This is saved to an AIConfig JSON file.
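As a rough sketch of moving from the editor to application code, you can load the saved file with the AIConfig Python SDK and run a prompt by name. The file name and prompt name below are placeholders, and method names may differ slightly across SDK versions:

import asyncio
from aiconfig import AIConfigRuntime

async def main():
    # Load the AIConfig JSON saved by the editor
    config = AIConfigRuntime.load("my_aiconfig.aiconfig.json")

    # Run a prompt by the name it was given in the editor cell
    await config.run("my_prompt")

    # Print the text output of that prompt
    print(config.get_output_text("my_prompt"))

asyncio.run(main())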

Edit and save AIConfigs

If you already have an AIConfig JSON file, you can use the AIConfig Editor to visually edit the prompts and model parameters.

Open Terminal, run this command:

aiconfig edit --aiconfig-path={file_path_existing_aiconfig}

A new tab with the AIConfig Editor opens in your default browser with the prompts, chaining logic, and settings from the specified AIConfig populated in the editor. If the file path doesn’t exist, a new AIConfig will be created at that path and the editor will be blank.

Saving

Your edits in AIConfig Editor will auto-save and update the AIConfig file every 15 seconds. There is also a Save button to manually save changes to your AIConfig.

Run Prompts

Each cell in AIConfig Editor is used to prompt generative AI models and output responses. Editor cell features:

| Feature | Description |
|---|---|
| Prompt Name | The name of the prompt cell, which can be referenced in other cells for chaining. |
| Model | The model you are prompting in this cell. Use the dropdown to see the models available by default in AIConfig Editor. |
| Settings | The settings and parameters specific to the model (e.g. system prompt, temperature). These settings vary depending on the model selected. |
| Local Parameters | Parameters (variables) that you set to be used in the prompt via handlebars syntax. Local parameters are local to the cell and cannot be accessed in other cells. |

Click ▶️ at the right of the cell to execute the prompt and see the model response.


The outputs are saved to the AIConfig file by default.

Chain Prompts

You can chain your prompts via the cell reference names and handlebars syntax. For example, you can have a cell that uses GPT-4 to generate a haiku, and a GPT-3.5 cell that translates that haiku into a different language.
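For instance, if the haiku cell is named gen_haiku, the translation cell's prompt can reference its output with handlebars. A minimal sketch (the cell name and prompt wording are illustrative):

Translate the following haiku into French: {{gen_haiku.output}}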


Create Prompt Templates

Prompt templates allow you to scale your prompts to different data inputs without needing to constantly modify the prompt itself. To do this in AIConfig Editor, parameters are used to pass in data to prompts. You can set both global and local parameters. Global Parameters can be used across all prompts defined in the editor whereas Local Parameters can only be used in the prompt cell they are defined for.

Global Parameters

You can set global parameters to be used across all cells in the editor. Click on Global Parameters at the top of the editor to expand the form to enter your global parameters.


Local Parameters

You can set local parameters to be used in specific cells in the editor. In the cell, expand the right pane and select Local Parameters.

note

Local parameters will override the global parameters if they have the same name.


Creating Prompt Templates

Prompt templates are created using handlebars syntax for the parameters. Here is an example where {{language}} is defined as a global parameter. You can easily change the values of the parameter but keep the prompt template the same.
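A minimal sketch of such a template, assuming a global parameter named language (the prompt wording is illustrative):

Write a haiku about the ocean in {{language}}.

Changing the value of language (for example, from English to Japanese) reuses the same template without editing the prompt text.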


Add Custom Model Parsers

The AIConfig Editor is highly customizable and allows for custom models to be integrated into the editor. Check out our Gradio cookbook to see an example of integrating other cool model parsers like:

  • text-to-image
  • text-to-audio
  • image-to-text
  • audio-to-text
  • text-summarization
  • and much more!

Telemetry

AIConfig Editor collects telemetry data, which helps us understand how to improve the product. The telemetry helps us debug issues and prioritize new features.

Disabling telemetry

If you don't want to send any telemetry back to us to help the team build a better product, you can set allow_usage_data_sharing to false in the $HOME/.aiconfigrc configuration file.
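For example, the configuration file is a small YAML file; a minimal sketch of the relevant line (create $HOME/.aiconfigrc if it does not already exist):

allow_usage_data_sharing: false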

More Resources

Check out these resources on how you can use your AIConfig created from your AIConfig Editor in your application code.

Coming Soon

  • Support for non-default models in Editor. AIConfig Editor currently supports the default models available with AIConfig - see here. We will soon be adding support for non-default models via model parser extensions.

FAQ

What are the Environment Variables for different model providers?

| Environment Variable Name | Description | Link |
|---|---|---|
| OPENAI_API_KEY | API key for OpenAI models | OpenAI API Keys |
| GOOGLE_API_KEY | API key for Google Gemini and PaLM models | Google API Keys |
| HUGGING_FACE_API_TOKEN | API token for models running on Hugging Face inference | Hugging Face User access tokens |
| ANYSCALE_ENDPOINT_API_KEY | API key for models hosted on Anyscale endpoints | Anyscale API Keys |