Configuration

Skipper stores its configuration in ~/.kip/config.yaml. This file is created automatically during kip install.

Config file

```yaml
clusters:
  - name: production
    provider: baremetal
    host: 203.0.113.10
    domain: 203-0-113-10.kipper.run
    console_domain: skipper.example.com
    kubeconfig: ~/.kip/clusters/203-0-113-10.kipper.run.yaml
    gateway_token: 5b2bf14ef65250c82504a721c4353c2e...
    org: labbc                      # optional, set via kip install --org
    org_display_name: Labb Consulting

current_cluster: production

ai:
  provider: none
```
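To illustrate how `current_cluster` ties the pieces together, here is a minimal Python sketch (not Skipper's actual implementation; the dict mirrors the YAML above, which in practice you would load with a YAML parser such as PyYAML):

```python
# Sketch: selecting the active cluster entry from a parsed config.
# The dict below mirrors ~/.kip/config.yaml from the docs.
config = {
    "clusters": [
        {"name": "production", "host": "203.0.113.10",
         "kubeconfig": "~/.kip/clusters/203-0-113-10.kipper.run.yaml"},
    ],
    "current_cluster": "production",
}

def active_cluster(cfg: dict) -> dict:
    """Return the cluster entry named by current_cluster."""
    name = cfg["current_cluster"]
    for cluster in cfg["clusters"]:
        if cluster["name"] == name:
            return cluster
    raise KeyError(f"no cluster named {name!r} in config")

print(active_cluster(config)["host"])  # → 203.0.113.10
```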

Fields

| Field | Description |
| --- | --- |
| `clusters` | List of configured clusters |
| `clusters[].name` | Cluster identifier (rename with `kip cluster rename`) |
| `clusters[].provider` | Infrastructure provider (`baremetal`; future: `hetzner`, `digitalocean`, `aws`) |
| `clusters[].host` | Server hostname or IP address |
| `clusters[].domain` | Auto-generated kipper.run subdomain (used internally for app routing) |
| `clusters[].console_domain` | Custom console domain (set via `kip cluster domain`) |
| `clusters[].kubeconfig` | Path to the cluster's kubeconfig file |
| `clusters[].gateway_token` | Token for managing the kipper.run subdomain |
| `clusters[].org` | Organisation short code (optional); prefixes all namespaces |
| `clusters[].org_display_name` | Human-readable organisation name for the console |
| `current_cluster` | Which cluster kip commands target by default |
| `ai` | AI provider configuration (optional; all features disabled by default) |

Kubeconfig

Each cluster's kubeconfig is stored separately in ~/.kip/clusters/<domain>.yaml. This file provides full admin access to the Kubernetes API.

WARNING

The kubeconfig grants full cluster admin access. Treat it like a root password.
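A practical precaution is to ensure the kubeconfig files are readable only by your own user. A sketch using demo paths (substitute `~/.kip/clusters` in practice; the permission-tightening step is our suggestion, not something `kip install` documents):

```shell
# Demo: lock down kubeconfig file permissions.
# Uses a temp dir so the example is self-contained;
# the real files live under ~/.kip/clusters.
mkdir -p /tmp/kip-demo/clusters
echo "apiVersion: v1" > /tmp/kip-demo/clusters/example.yaml
chmod 600 /tmp/kip-demo/clusters/example.yaml
ls -l /tmp/kip-demo/clusters/example.yaml
```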

Multiple clusters

Skipper supports managing multiple clusters from the same machine. Every cluster you install appears in your config:

```bash
kip cluster list
    dev
      Console: https://console-203-0-113-10.kipper.run
      Server:  203.0.113.10

  → production
      Console: https://skipper.example.com
      Server:  195.148.1.1
```

The arrow (→) indicates the active cluster.

Switching clusters

Switch the active cluster:

```bash
kip cluster use production
```

Partial name matching works if the name is unique:

```bash
kip cluster use prod
```
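Partial matching can be pictured as unique-prefix resolution: a prefix resolves only when exactly one cluster name starts with it. A minimal sketch (our illustration, not Skipper's actual implementation):

```python
# Sketch: resolve a cluster name prefix only if it is unambiguous.
def resolve_cluster(prefix: str, names: list[str]) -> str:
    matches = [n for n in names if n.startswith(prefix)]
    if len(matches) != 1:
        raise ValueError(f"{prefix!r} matches {len(matches)} clusters")
    return matches[0]

print(resolve_cluster("prod", ["dev", "production"]))  # → production
```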

Per-command override

Target a specific cluster for a single command without switching:

```bash
kip --cluster dev app list
```

Or set the KIP_CLUSTER environment variable:

```bash
export KIP_CLUSTER=dev
kip app list           # targets dev
kip service list       # targets dev
```

Resolution order: --cluster flag > KIP_CLUSTER env var > current_cluster in config.

Renaming clusters

Cluster names default to the kipper.run domain, which can be unwieldy. Give them short memorable names:

```bash
kip cluster rename 203-0-113-10.kipper.run dev
kip cluster rename v2202503260491323449-happysrv-de.kipper.run production
```

After renaming, all commands use the short name:

```bash
kip cluster use production
kip --cluster dev app list
```

Sharing cluster access

Export cluster credentials for a team member:

```bash
kip cluster export > production.kip
```

They import it on their machine:

```bash
kip cluster add production.kip --set-current
```

Removing a cluster

```bash
kip cluster remove dev
```

This removes the cluster from your local config and deletes the stored kubeconfig. It does not affect the server.

Custom console domain

By default, the web console is available on a console- prefixed kipper.run subdomain (e.g. https://console-203-0-113-10.kipper.run). Set a custom domain:

```bash
kip cluster domain skipper.example.com
  Setting up custom domain skipper.example.com...
  ✔  Console Ingress updated
  ✔  Console-API Ingress updated
  ✔  Dex redirect URI added
  ✔  Console-API redirect URI updated
  ...  Restarting services
  ✔  Dex and console-api restarted

  Console available at: https://skipper.example.com
  Make sure DNS for skipper.example.com points to 203.0.113.10
```

This command handles everything in one step:

  • Updates the console Ingress with the new hostname and TLS
  • Adds the redirect URI to Dex's OAuth configuration
  • Updates the console-api's callback URL
  • Restarts Dex and console-api to apply changes

TIP

Point your domain's DNS (A record) to the server's IP address before running this command. cert-manager will automatically issue a Let's Encrypt certificate once DNS resolves.
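A quick pre-flight check can confirm the A record resolves to your server before you run the command. A minimal sketch (our helper, not part of kip; the domain and IP below are the docs' examples):

```python
import socket

# Sketch: verify a domain's A record before running `kip cluster domain`.
def dns_points_to(domain: str, expected_ip: str) -> bool:
    """Return True if the domain resolves to the expected server IP."""
    try:
        return socket.gethostbyname(domain) == expected_ip
    except socket.gaierror:
        return False
```

Usage: `dns_points_to("skipper.example.com", "203.0.113.10")` should return True before proceeding.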

AI provider settings

Skipper's AI features (code assistant, log analysis, diagnostics, and resource optimisation) are all optional and disabled by default. To enable them, configure an AI provider in the web console under Settings → AI Configuration, or edit the ai section in ~/.kip/config.yaml directly.

Supported providers

| Provider | `provider` value | Requirements |
| --- | --- | --- |
| OpenAI | `openai` | API key, model name (e.g. `gpt-4o`) |
| Anthropic | `anthropic` | API key, model name (e.g. `claude-sonnet-4-20250514`) |
| Ollama (self-hosted) | `ollama` | Ollama URL and model name; no API key needed |

Configuration example

```yaml
ai:
  provider: anthropic
  api_key: sk-ant-...
  model: claude-sonnet-4-20250514
  ollama_url: ""
  features:
    log_analysis: true
    anomaly_detection: true
    dockerfile_generation: true
```

For Ollama, set provider: ollama and provide the URL where Ollama is running:

```yaml
ai:
  provider: ollama
  api_key: ""
  model: llama3
  ollama_url: http://192.168.1.50:11434
  features:
    log_analysis: true
    anomaly_detection: true
    dockerfile_generation: true
```

Feature flags

Each AI feature can be toggled independently:

| Feature | Description |
| --- | --- |
| `log_analysis` | Analyse button in log viewers (apps, functions, jobs) |
| `anomaly_detection` | Diagnose button and resource optimisation in app detail panels |
| `dockerfile_generation` | AI-assisted Dockerfile generation (planned) |

Set provider: none to disable all AI features. API keys are stored locally in ~/.kip/config.yaml and are never sent to Skipper infrastructure.

Settings page

The web console Settings page (gear icon in the sidebar) provides a UI for configuring the AI provider without editing YAML. Select your provider, enter the API key and model, toggle individual features, and click Save. Changes take effect immediately.

Resource management mode

Skipper automatically manages CPU and memory for your apps. A background controller monitors usage and adjusts allocations to match. It scales up under load, scales down when idle, and recovers from OOM kills.

See Resource Management for full details on how the auto controller works, resource profiles, expert mode, and the resource log.

Slack notifications

Skipper can send alerts to a Slack channel when the auto controller makes resource changes, detects OOM kills, or clears stuck pods.

Setup

  1. Create a Slack incoming webhook for your channel
  2. In the web console, go to Settings → Slack
  3. Paste the webhook URL and click Save

Or configure via the API:

```
PUT /api/v1/settings/slack
{"webhook_url": "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"}
```

The webhook URL is stored as a Kubernetes secret (skipper-slack in the skipper-system namespace). The console displays a masked version for security.

What gets sent

Every alert generated by the resource controller is forwarded to Slack with a severity indicator:

  • Green: informational changes (scale down, profile defaults applied)
  • Yellow: warnings (resource increases, stuck pod recovery)
  • Red: critical events (OOM kills, emergency memory doubling)
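The three severity levels map naturally onto Slack's named attachment colors. A sketch (the good/warning/danger values are Slack's documented color names; whether Skipper's controller uses exactly this mapping is an assumption):

```python
# Sketch: map the documented severity levels to Slack attachment colors.
SEVERITY_COLOR = {
    "info": "good",        # green: scale down, profile defaults applied
    "warning": "warning",  # yellow: resource increases, stuck pod recovery
    "critical": "danger",  # red: OOM kills, emergency memory doubling
}

def slack_attachment(severity: str, text: str) -> dict:
    """Build a minimal Slack attachment payload for an alert."""
    return {"color": SEVERITY_COLOR[severity], "text": text}

print(slack_attachment("critical", "OOM kill detected")["color"])  # → danger
```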

To stop notifications, clear the webhook URL in Settings.

Released under the Apache 2.0 License.