Configuration file

The configuration is stored in a YAML file that defines how F5 AI Gateway behaves.

This page introduces the structure of the configuration file. To learn how to apply the configuration file to AI Gateway, see Configuration overview.

The configuration file includes the following sections:

- General settings
- Routes
- Policies
- Profiles
- Processors
- Services

General settings

The configuration file starts with a few settings that define the general behavior of AI Gateway:


version

The version of the configuration file. The only valid value is currently 1.

server

The server section defines the settings for the AI Gateway core server:

- address: The address and port where the AI Gateway core listens for incoming requests.

- tls: Enable TLS authentication and configure the TLS cert and key paths, serverCertPath and serverKeyPath.

- mtls: Enable mTLS authentication and configure the client CA certificate path, clientCertPath.

Example general settings section YAML:

version: 1

server:
  address: :4141
  tls:
    enabled: true
    serverCertPath: .certs/server.crt
    serverKeyPath: .certs/server.key
  mtls:
    enabled: true
    clientCertPath: .certs/ca.crt

Routes

The routes section defines the endpoints that are exposed by AI Gateway and the policy that applies to each of them.

See the Configure routes topic for more information on the available settings.
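For illustration, a minimal route entry using the same fields as the full example at the end of this page (the path and policy names are placeholders):

```yaml
routes:
  - path: /demo-endpoint          # URL path exposed by AI Gateway
    policy: demo-policy           # Name of the policy applied to this route
    schema: v1/chat_completions   # AI Gateway common schema for requests
```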

Policies

The policies section allows you to apply different profiles to requests based on selectors.

See the Configure policies topic for more information on the available settings.
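As a sketch, a policy that maps to a single profile, mirroring the fields used in the example configuration at the end of this page (names are illustrative):

```yaml
policies:
  - name: demo-policy
    profiles:
      - name: demo-profile   # Profile applied by this policy
```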

Profiles

The profiles section defines the different sets of processors and services that apply to the input and output of the AI model based on a set of rules.

See the Configure profiles topic for more information on the available settings.
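For illustration, a profile with one input stage and one upstream service, using the same structure as the example configuration at the end of this page (names are placeholders):

```yaml
profiles:
  - name: demo-profile
    inputStages:                   # Processor stages applied to model input
      - name: set-system-prompt
        steps:
          - system-prompt          # Processor defined in the processors section
    services:
      - name: openai/public        # Upstream service to send traffic to
```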

Processors

The processors section defines the processing services that can be applied to the input or output of the AI model.

See the Configure processors topic for more information on the available processors and their configuration.
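As a sketch, an external processor definition following the shape shown in the example configuration at the end of this page (the endpoint and namespace reflect that example and may differ in your deployment):

```yaml
processors:
  - name: system-prompt
    type: external
    config:
      # Your endpoint may differ
      endpoint: http://aigw-processors-f5.ai-gateway.svc.cluster.local
      version: 1
      namespace: f5
```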

Services

The services section defines the upstream LLM services that the AI Gateway can send traffic to.

See the Configure services topic for more information on configuring the most common LLM services.
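For illustration, a service entry that sends traffic to the OpenAI API, mirroring the example configuration at the end of this page (the service name is a placeholder; the API key is read from an environment variable):

```yaml
services:
  - name: openai/public
    executor: openai
    config:
      endpoint: "https://api.openai.com/v1/chat/completions"
      secrets:
        - source: EnvVar
          targets:
            apiKey: OPENAI_API_KEY   # Environment variable holding the API key
```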

Example configuration file

The following example provides a minimal configuration. For more detailed examples, refer to each section’s detail page linked above.

version: 1

server:
  address: :4141

routes:
  - path: /demo-endpoint
    policy: demo-policy     # Maps a route to policy
    schema: v1/chat_completions   # Use the AI Gateway common schema for requests

policies:
  - name: demo-policy
    profiles:
      - name: demo-profile  # Maps a policy to a profile

profiles:
  - name: demo-profile
    inputStages:
      - name: set-system-prompt
        steps:
          - system-prompt
    services:
      - name: openai/public # Sends traffic to the OpenAI API

services:
  - name: openai/public
    type: gpt-4o
    executor: openai
    config:
      endpoint: "https://api.openai.com/v1/chat/completions"
      secrets:
        - source: EnvVar
          targets:
            apiKey: OPENAI_API_KEY

processors:
  - name: system-prompt
    type: external
    config:
      # Your endpoint may differ
      endpoint: http://aigw-processors-f5.ai-gateway.svc.cluster.local
      version: 1
      namespace: f5
    params:
      modify: true
      strip_existing: true
      rules:
        - |
          You are a helpful AI Assistant designed to help users with questions about
          helping users set up AI-assisted workflows. Only answer questions about
          the topic of creating AI-assisted workflows.