# Configuration
Skipper stores its configuration in ~/.kip/config.yaml. This file is created automatically during kip install.
## Config file

```yaml
clusters:
- name: production
  provider: baremetal
  host: 203.0.113.10
  domain: 203-0-113-10.kipper.run
  console_domain: skipper.example.com
  kubeconfig: ~/.kip/clusters/203-0-113-10.kipper.run.yaml
  gateway_token: 5b2bf14ef65250c82504a721c4353c2e...
  org: labbc # optional, set via kip install --org
  org_display_name: Labb Consulting
current_cluster: production
ai:
  provider: none
```

### Fields
| Field | Description |
|---|---|
| `clusters` | List of configured clusters |
| `clusters[].name` | Cluster identifier (rename with `kip cluster rename`) |
| `clusters[].provider` | Infrastructure provider (`baremetal`; future: `hetzner`, `digitalocean`, `aws`) |
| `clusters[].host` | Server hostname or IP address |
| `clusters[].domain` | Auto-generated kipper.run subdomain (used internally for app routing) |
| `clusters[].console_domain` | Custom console domain (set via `kip cluster domain`) |
| `clusters[].kubeconfig` | Path to the cluster's kubeconfig file |
| `clusters[].gateway_token` | Token for managing the kipper.run subdomain |
| `clusters[].org` | Optional organisation short code; prefixes all namespaces |
| `clusters[].org_display_name` | Human-readable organisation name shown in the console |
| `current_cluster` | The cluster that kip commands target by default |
| `ai` | AI provider configuration (optional; all features disabled by default) |
## Kubeconfig

Each cluster's kubeconfig is stored separately in `~/.kip/clusters/<domain>.yaml`. This file provides full admin access to the Kubernetes API.
> **WARNING**
> The kubeconfig grants full cluster admin access. Treat it like a root password.
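In practice that means keeping file permissions tight. A sketch, demonstrated on a scratch directory so it is safe to run anywhere (in a real install, point the `chmod` calls at `~/.kip` and `~/.kip/clusters` instead):

```shell
# Restrict a kubeconfig so only your user can read it. Uses a temp directory
# as a stand-in for ~/.kip/clusters; substitute the real paths in practice.
KIP_CLUSTERS=$(mktemp -d)
touch "$KIP_CLUSTERS/production.yaml"       # stand-in for a real kubeconfig
chmod 700 "$KIP_CLUSTERS"                   # only you may enter the directory
chmod 600 "$KIP_CLUSTERS/production.yaml"   # only you may read the file
stat -c '%a' "$KIP_CLUSTERS/production.yaml"   # prints 600 (GNU stat, Linux)
```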
## Multiple clusters

Skipper can manage multiple clusters from the same machine. Each cluster you install is added to your config:
```
kip cluster list

  dev
    Console: https://console-203-0-113-10.kipper.run
    Server:  203.0.113.10
→ production
    Console: https://skipper.example.com
    Server:  195.148.1.1
```

The arrow (→) indicates the active cluster.
## Switching clusters
Switch the active cluster:
```
kip cluster use production
```

Partial name matching works if the name is unique:
```
kip cluster use prod
```

### Per-command override
Target a specific cluster for a single command without switching:
```
kip --cluster dev app list
```

Or set the `KIP_CLUSTER` environment variable:
```
export KIP_CLUSTER=dev
kip app list      # targets dev
kip service list  # targets dev
```

Resolution order: `--cluster` flag > `KIP_CLUSTER` env var > `current_cluster` in config.
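The precedence can be sketched as a small shell function (illustrative only, not kip's actual implementation):

```shell
# Resolve the target cluster: --cluster flag first, then KIP_CLUSTER,
# then current_cluster from config.yaml (passed in here as an argument).
resolve_cluster() {
  flag="$1"     # value of --cluster; empty if the flag was not given
  config="$2"   # current_cluster from ~/.kip/config.yaml
  if [ -n "$flag" ]; then
    echo "$flag"
  elif [ -n "${KIP_CLUSTER:-}" ]; then
    echo "$KIP_CLUSTER"
  else
    echo "$config"
  fi
}

unset KIP_CLUSTER
resolve_cluster "" production                   # prints: production
KIP_CLUSTER=dev resolve_cluster "" production   # prints: dev
resolve_cluster staging production              # prints: staging (flag wins)
```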
## Renaming clusters

Cluster names default to the kipper.run domain, which can be unwieldy. Give them short, memorable names:
```
kip cluster rename 203-0-113-10.kipper.run dev
kip cluster rename v2202503260491323449-happysrv-de.kipper.run production
```

After renaming, all commands use the short name:
```
kip cluster use production
kip --cluster dev app list
```

## Sharing cluster access
Export cluster credentials for a team member:
```
kip cluster export > production.kip
```

They import it on their machine:
```
kip cluster add production.kip --set-current
```

## Removing a cluster
```
kip cluster remove dev
```

This removes the cluster from your local config and deletes the stored kubeconfig. It does not affect the server.
## Custom console domain
By default, the web console is available at console-{domain}.kipper.run. Set a custom domain:
```
kip cluster domain skipper.example.com

Setting up custom domain skipper.example.com...
✔ Console Ingress updated
✔ Console-API Ingress updated
✔ Dex redirect URI added
✔ Console-API redirect URI updated
... Restarting services
✔ Dex and console-api restarted

Console available at: https://skipper.example.com
Make sure DNS for skipper.example.com points to 203.0.113.10
```

This command handles everything in one step:
- Updates the console Ingress with the new hostname and TLS
- Adds the redirect URI to Dex's OAuth configuration
- Updates the console-api's callback URL
- Restarts Dex and console-api to apply changes
> **TIP**
> Point your domain's DNS (A record) to the server's IP address before running this command. cert-manager will automatically issue a Let's Encrypt certificate once DNS resolves.
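One way to confirm the record before running the command is a quick lookup. A sketch using `getent` (available on most Linux hosts; `dig +short` works too) with the example domain and IP from above:

```shell
# Check that the custom domain resolves to the server IP before running
# `kip cluster domain`. Both the domain and the IP here are examples.
EXPECTED_IP=203.0.113.10
RESOLVED=$(getent hosts skipper.example.com | awk '{print $1}')
if [ "$RESOLVED" = "$EXPECTED_IP" ]; then
  echo "DNS ready"
else
  echo "DNS not ready yet (resolved: ${RESOLVED:-nothing})"
fi
```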
## AI provider settings
Skipper's AI features (code assistant, log analysis, diagnostics, and resource optimisation) are all optional and disabled by default. To enable them, configure an AI provider in the web console under Settings → AI Configuration, or edit the ai section in ~/.kip/config.yaml directly.
### Supported providers
| Provider | `provider` value | Requirements |
|---|---|---|
| OpenAI | `openai` | API key, model name (e.g. `gpt-4o`) |
| Anthropic | `anthropic` | API key, model name (e.g. `claude-sonnet-4-20250514`) |
| Ollama (self-hosted) | `ollama` | Ollama URL and model name; no API key needed |
### Configuration example
```yaml
ai:
  provider: anthropic
  api_key: sk-ant-...
  model: claude-sonnet-4-20250514
  ollama_url: ""
  features:
    log_analysis: true
    anomaly_detection: true
    dockerfile_generation: true
```

For Ollama, set `provider: ollama` and provide the URL where Ollama is running:
```yaml
ai:
  provider: ollama
  api_key: ""
  model: llama3
  ollama_url: http://192.168.1.50:11434
  features:
    log_analysis: true
    anomaly_detection: true
    dockerfile_generation: true
```

### Feature flags
Each AI feature can be toggled independently:
| Feature | Description |
|---|---|
| `log_analysis` | Analyse button in log viewers (apps, functions, jobs) |
| `anomaly_detection` | Diagnose button and resource optimisation in app detail panels |
| `dockerfile_generation` | AI-assisted Dockerfile generation (planned) |
Set `provider: none` to disable all AI features. API keys are stored locally in `~/.kip/config.yaml` and are never sent to Skipper infrastructure.
### Settings page
The web console Settings page (gear icon in the sidebar) provides a UI for configuring the AI provider without editing YAML. Select your provider, enter the API key and model, toggle individual features, and click Save. Changes take effect immediately.
## Resource management mode
Skipper automatically manages CPU and memory for your apps. A background controller monitors usage and adjusts allocations to match. It scales up under load, scales down when idle, and recovers from OOM kills.
See Resource Management for full details on how the auto controller works, resource profiles, expert mode, and the resource log.
## Slack notifications
Skipper can send alerts to a Slack channel when the auto controller makes resource changes, detects OOM kills, or clears stuck pods.
### Setup
1. Create a Slack incoming webhook for your channel
2. In the web console, go to Settings → Slack
3. Paste the webhook URL and click Save
Or configure via the API:
```
PUT /api/v1/settings/slack
{"webhook_url": "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"}
```

The webhook URL is stored as a Kubernetes secret (`skipper-slack` in the `skipper-system` namespace). The console displays a masked version for security.
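A basic shape check of the URL before saving can catch copy-paste mistakes. A hedged sketch in shell (the `https://hooks.slack.com/services/` prefix is Slack's standard incoming-webhook format; this check is not part of kip itself):

```shell
# Succeed only for URLs matching Slack's incoming-webhook prefix.
is_slack_webhook() {
  case "$1" in
    https://hooks.slack.com/services/*) return 0 ;;
    *) return 1 ;;
  esac
}

is_slack_webhook "https://hooks.slack.com/services/T00000000/B00000000/XXXX" \
  && echo "looks valid"
is_slack_webhook "https://example.com/not-a-webhook" || echo "rejected"
```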
### What gets sent
Every alert generated by the resource controller is forwarded to Slack with a severity indicator:
- Green: informational changes (scale down, profile defaults applied)
- Yellow: warnings (resource increases, stuck pod recovery)
- Red: critical events (OOM kills, emergency memory doubling)
To stop notifications, clear the webhook URL in Settings.
