Architecture

This document explains how Skipper works internally. It is aimed at contributors and anyone who wants to understand the system before diving into the code.

System overview

Two modes of operation

During install: The CLI connects to the server via SSH, runs commands remotely to install k3s and all components, then fetches the kubeconfig. SSH is only used during installation.

After install: All operations go through the Kubernetes API using the kubeconfig stored locally. The CLI never uses SSH again.

Repository structure

```
skipper/
├── kip/                    # CLI tool (Go)
│   ├── cmd/                # Cobra commands
│   └── internal/
│       ├── ssh/            # SSH client for remote execution
│       ├── k8s/            # Kubernetes API client
│       ├── installer/      # Cluster bootstrap logic
│       ├── deployer/       # App deployment (Deployment + Service + Ingress)
│       ├── service/        # Stateful service management (StatefulSet + PVC)
│       ├── infra/          # InfraProvider interface + BareMetalProvider
│       ├── git/            # GitProvider interface (GitHub, GitLab)
│       ├── domain/         # Gateway client (subdomain registration)
│       ├── config/         # Config file management
│       └── ai/             # AI provider interface (future)
├── console/                # Web console (Vue 3 + TypeScript)
│   └── src/
│       ├── api/            # Typed Axios client
│       ├── stores/         # Pinia stores (auth, cluster, apps, projects)
│       ├── composables/    # useDarkMode, useLogStream, useToast
│       ├── components/     # AppDetail panel, ToastContainer
│       ├── views/          # Dashboard, Projects, Apps, Services, Routes, Users, Login
│       └── layouts/        # Sidebar layout with dark mode toggle
├── console-api/            # Console backend (Go + Chi)
│   ├── api/v1alpha1/       # CRD type definitions (getkipper.com/v1alpha1)
│   ├── controllers/        # CRD reconcilers (controller-runtime)
│   ├── controller/         # Resource auto-tuning controller
│   ├── handlers/           # REST endpoints
│   ├── middleware/         # JWT auth, logging
│   └── ws/                 # WebSocket log streaming
├── gateway/                # Subdomain gateway (Go)
│   └── registry/           # In-memory + file-backed subdomain store
└── docs/                   # This documentation (VitePress)
```

Gateway architecture

The gateway is a lightweight reverse proxy that manages *.kipper.run subdomain routing.

  • A wildcard DNS record (*.kipper.run) points all subdomains to the gateway
  • Caddy terminates TLS using a Let's Encrypt wildcard certificate
  • The proxy extracts the cluster identifier from the subdomain (e.g. hello-203-0-113-10 → cluster 203-0-113-10)
  • It looks up the cluster IP in the registry and proxies the request
  • The original Host header is preserved so Traefik on the cluster can route to the correct app

Subdomain scheme

All subdomains are single-level to work with wildcard certificates:

| URL | Cluster | App |
| --- | --- | --- |
| 203-0-113-10.kipper.run | 203-0-113-10 | (cluster itself) |
| console-203-0-113-10.kipper.run | 203-0-113-10 | console |
| hello-203-0-113-10.kipper.run | 203-0-113-10 | hello |
| api-203-0-113-10.kipper.run | 203-0-113-10 | api |

Authentication flow

App deployment internals

When you run kip app deploy, Skipper creates an App Custom Resource. A controller-runtime reconciler watches App CRs and ensures the matching native resources (Deployment, Service, and Ingress) exist and stay in sync with the App spec.

This pattern applies to all Skipper resource types: the CLI and console API only create CRs, and reconcilers manage the underlying native Kubernetes resources. It also enables GitOps, since the CRs are the declarative source of truth: you can apply them directly with kubectl apply or sync them via ArgoCD/Flux.
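For illustration, an App CR might look like the following. Only the group/version (getkipper.com/v1alpha1) and the App kind come from this document; the spec fields shown are assumptions, not the real schema:

```yaml
# Hypothetical App CR; the fields under spec are illustrative.
apiVersion: getkipper.com/v1alpha1
kind: App
metadata:
  name: hello
spec:
  image: ghcr.io/example/hello:latest   # assumed field
  port: 8080                            # assumed field
```

Under the CR pattern, applying this manifest with kubectl apply (or syncing it from a Git repo) would have the same effect as running kip app deploy.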

Infrastructure provider interface

Skipper is designed to support multiple infrastructure providers through the InfraProvider interface:

```go
type InfraProvider interface {
    Provision(ctx context.Context, spec MachineSpec) ([]Machine, error)
    Destroy(ctx context.Context, machineIDs []string) error
    GetLoadBalancer(ctx context.Context, spec LBSpec) (*LoadBalancer, error)
    StorageClass() string
    Name() string
}
```

The MVP implements BareMetalProvider only. Future providers (Hetzner, DigitalOcean, AWS) will implement the same interface without changing any core logic.

Released under the Apache 2.0 License.