Deploying Apps

Skipper deploys applications as Kubernetes Deployments with a Service and Ingress, all created automatically from a single command.

From a container image

bash
kip app deploy --name api --image ghcr.io/acme/api:latest --port 3000
  Deploying api...
  ✔  Deployment created
  ✔  Service created
  ✔  Ingress created
  ✔  Live at https://api-203-0-113-10.kipper.run

What this creates

Behind the scenes, Skipper creates an App Custom Resource (getkipper.com/v1alpha1). A reconciler then ensures the underlying Kubernetes resources exist:

  1. Deployment: runs your container with the specified number of replicas
  2. Service: internal load balancer that routes traffic to your pods
  3. Ingress: external hostname with automatic TLS via cert-manager

All three are owned by the App CR. Deleting the app cascades to all related resources automatically.
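The reconcile-and-cascade pattern described above can be sketched roughly as follows. This is a hypothetical illustration, not Skipper's actual controller code: plain dicts stand in for the Kubernetes API, and all names are invented.

```python
# Rough sketch of the reconcile/ownership pattern described above.
# `cluster` stands in for the Kubernetes API; names are hypothetical.

OWNED_KINDS = ["Deployment", "Service", "Ingress"]

def reconcile(app: dict, cluster: dict) -> dict:
    """Ensure every resource owned by the App CR exists."""
    for kind in OWNED_KINDS:
        key = (kind, app["name"])
        if key not in cluster:
            cluster[key] = {"kind": kind, "ownerRef": app["name"]}
    return cluster

def delete_app(app: dict, cluster: dict) -> dict:
    """Deleting the App cascades to every resource it owns."""
    return {k: v for k, v in cluster.items() if v.get("ownerRef") != app["name"]}
```

In the real cluster this cascade is what Kubernetes owner references provide: each child resource points back at the App CR, so garbage collection removes them when the CR is deleted.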

All flags

bash
kip app deploy \
  --name api \
  --image ghcr.io/acme/api:latest \
  --port 3000 \
  --replicas 2 \
  --project staging \
  --env LOG_LEVEL=info \
  --env API_URL=https://api.example.com
| Flag | Required | Default | Description |
|------|----------|---------|-------------|
| --name | Yes | | Application name |
| --image | Yes | | Container image to deploy |
| --port | Yes | | Port the application listens on |
| --replicas | No | 1 | Number of pod replicas |
| --project | No | default | Project namespace to deploy into |
| --env | No | | Environment variable (repeatable) |
| --route | No | | Path route group (e.g. yourr-name/api/users) |

From a Git repository

Deploy directly from source code. Skipper clones your repo, builds a container image using your Dockerfile, pushes it to the internal registry, and deploys.

bash
kip app deploy --name api --git https://github.com/acme/api.git --port 3000
  Deploying api...
  ✔  Deployment created
  ✔  Service created
  ✔  Git source configured: https://github.com/acme/api.git (main)
     Configure a webhook or run 'kip app rebuild api' to trigger the first build

Triggering builds

Manual rebuild:

bash
kip app rebuild api --project yourr-name --environment test

Automatic builds via webhook:

Configure your Git provider to send push events to the webhook URL. Skipper validates the token and triggers a build automatically.

Streaming build logs:

bash
kip app build-logs api --project yourr-name --environment test

How it works

  1. A webhook or manual kip app rebuild triggers a build
  2. Skipper creates a Kubernetes Job with two containers:
    • clone: fetches your repo (single branch, depth 1)
    • build: Kaniko builds the Dockerfile and pushes the image
  3. On success, the App CR's image is updated to the new Zot registry tag
  4. The App reconciler rolls out the new Deployment

Git deploy flags

| Flag | Required | Default | Description |
|------|----------|---------|-------------|
| --git | Yes | | Git repository URL |
| --branch | No | main | Branch to build from |
| --port | Yes | | Port the application listens on |
| --project | No | default | Project namespace |
| --environment | No | | Target environment |

Private repositories

For private repos, pass your Git access token when deploying:

bash
kip app deploy --name api \
  --git https://github.com/acme/private-api.git \
  --git-token ghp_xxxxxxxxxxxx \
  --port 3000 \
  --project yourr-name \
  --environment test
  Deploying api...
  ✔  Git credentials stored
  ✔  Deployment created
  ✔  Service created
  ✔  Git source configured: https://github.com/acme/private-api.git (main)

The token is stored as a Kubernetes Secret (api-git-credentials) and injected into the clone URL at build time. It is never embedded in the App CR itself.

| Flag | Description |
|------|-------------|
| --git-token | Personal access token for HTTPS clone (GitHub PAT, GitLab PAT, etc.) |
| --git-key | Path to SSH key for SSH clone (planned) |

TIP

For GitLab, create a token with read_repository scope. For GitHub, a fine-grained PAT with Contents: Read is sufficient.

Build status

The Source tab in the web console shows the current build status, commit SHA, timestamps, and error messages. You can also trigger rebuilds and cancel active builds from there.

Path-based routing (microservices)

For microservices architectures, multiple services can share a single domain with different path prefixes. Use the --route flag:

bash
kip app deploy --name frontend --image registry.example.com/frontend:latest --port 80 --route yourr-name/
kip app deploy --name users-api --image registry.example.com/users-api:latest --port 3000 --route yourr-name/api/users
kip app deploy --name dns-api --image registry.example.com/dns-api:latest --port 3001 --route yourr-name/api/dns

All three share the same subdomain (yourr-name-<cluster>.kipper.run) but route by path:

/            → frontend
/api/users   → users-api
/api/dns     → dns-api

Services in the same route group share a single Ingress and TLS certificate. Traefik handles the path-based routing, so no separate API gateway is needed.

Without --route, each app gets its own subdomain (the default behaviour).

Scaling

bash
# Scale up
kip app scale api --replicas 3

# Scale down
kip app scale api --replicas 1

# Stop without deleting (zero replicas)
kip app scale api --replicas 0

The READY column in kip app list shows progress during scaling (e.g. 2/3 means 2 of 3 replicas are healthy). Kubernetes distributes traffic across all healthy replicas automatically.

Scaling is also available in the web console via the Scale tab in the app detail panel.

Autoscaling

Skipper supports automatic horizontal scaling based on CPU and memory usage.

bash
# Scale between 1 and 5 replicas, targeting 70% CPU
kip app autoscale api --min 1 --max 5 --cpu 70

# Scale based on both CPU and memory
kip app autoscale api --min 2 --max 10 --cpu 80 --memory 80

# Check current autoscaling status
kip app autoscale api --status

# Disable autoscaling (return to fixed replicas)
kip app autoscale api --off

When autoscaling is enabled, Kubernetes automatically adds replicas when CPU or memory exceeds the target and removes them when usage drops. The --min and --max flags set the boundaries.
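The scaling decision follows the standard Kubernetes HPA rule: the desired replica count is the current count scaled by the ratio of observed to target utilization, rounded up and clamped to the min/max bounds. A minimal sketch:

```python
import math

def desired_replicas(current: int, observed_util: float,
                     target_util: float, min_r: int, max_r: int) -> int:
    # Standard HPA formula: ceil(current * observed / target),
    # clamped to the --min / --max bounds.
    desired = math.ceil(current * observed_util / target_util)
    return max(min_r, min(max_r, desired))
```

With `--min 1 --max 5 --cpu 70`, two pods averaging 105% of their CPU request scale to ceil(2 × 105 / 70) = 3 replicas.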

Autoscaling is also configurable from the web console via the Scale tab. Toggle the autoscaling switch and set your thresholds.

Resource requests required

For CPU-based autoscaling to work, your deployment must have CPU resource requests set. Skipper sets sensible defaults, but if you override them, ensure requests are defined.

Linking apps

When one app needs to call another, link them to inject the target's URL as an environment variable.

Internal linking (backend-to-backend)

By default, links use the Kubernetes internal DNS, which is fast, secure, and requires no external networking:

bash
kip app link domain-service api-gateway
  ✔  Linked domain-service → api-gateway
     DOMAIN_SERVICE_URL=http://domain-service.yourr-name-test.svc.cluster.local:8081

Public linking (frontend-to-backend)

Frontend apps run in the browser and cannot reach internal cluster URLs. Use --public to inject the target's public HTTPS URL instead:

bash
kip app link domain-service webapp --public
  ✔  Linked domain-service → webapp
     DOMAIN_SERVICE_URL=https://domain-service-test-203-0-113-10.kipper.run

The target app must have a public route configured. If it doesn't, the command will tell you to create one first.

Env var naming

The env var name is derived from the target app name, uppercased with hyphens converted to underscores and _URL appended:

| Target app | Env var |
|------------|---------|
| domain-service | DOMAIN_SERVICE_URL |
| dns-service | DNS_SERVICE_URL |
| email-service | EMAIL_SERVICE_URL |
| payments | PAYMENTS_URL |
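The naming rule is simple enough to express in a couple of lines (a sketch of the rule, not Skipper's actual code):

```python
def link_env_var(target_app: str) -> str:
    # Uppercase the target app name, convert hyphens to
    # underscores, and append _URL.
    return target_app.upper().replace("-", "_") + "_URL"
```

For example, `link_env_var("domain-service")` yields `DOMAIN_SERVICE_URL`.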

Link multiple apps:

bash
kip app link domain-service webapp --public
kip app link identity-service webapp --public
kip app link exchange-service webapp --public

Remove a link:

bash
kip app unlink domain-service webapp

Deleting an app automatically removes its URL from all apps that linked to it.

In the web console, links are managed from the app's Env tab. Select an app from the dropdown, check "public" if needed, and click Link. Existing links appear with an unlink button.

Route groups (path-based routing)

For microservices architectures, multiple apps can share a single domain with different path prefixes. Requests are routed by path, and the path prefix is automatically stripped before reaching the backend.

Creating a route group

From the Routes page in the web console, click + Create route:

  1. Set the domain (or leave empty for auto-generated)
  2. Add path mappings, where each path points to an app
  3. Save

Domain: webapp-test-203-0-113-10.kipper.run

/              → webapp
/domains-api   → domain-service
/identity-api  → identity-service
/exchange-api  → exchange-service

All apps share one TLS certificate. Traefik routes by path prefix and strips it before forwarding, so domain-service receives /api/v1/... not /domains-api/api/v1/....
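The prefix-stripping rule can be illustrated with a small sketch. Traefik does this for you; the function below only demonstrates the behaviour, and the example path tail (`/api/v1/zones`) is hypothetical:

```python
def strip_route_prefix(path: str, prefix: str) -> str:
    # The root mapping ("/") passes the path through unchanged;
    # otherwise the matched prefix is removed before forwarding.
    if prefix == "/" or not path.startswith(prefix):
        return path
    stripped = path[len(prefix):]
    return stripped if stripped.startswith("/") else "/" + stripped
```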

CLI equivalent

bash
kip app deploy --name webapp --image registry.example.com/webapp:latest --port 3000 --route yourr-name/
kip app deploy --name domain-service --image registry.example.com/domain:latest --port 8080 --route yourr-name/domains-api

Editing and deleting

Click the pencil icon on any route group to add, remove, or change path mappings. Click the trash icon to remove all routes in the group.

Environment-aware domains

Auto-generated domains include the environment name to prevent collisions:

| App | Environment | Domain |
|-----|-------------|--------|
| webapp | test | webapp-test-203-0-113-10.kipper.run |
| webapp | acc | webapp-acc-203-0-113-10.kipper.run |
| webapp | prod | webapp-prod-203-0-113-10.kipper.run |
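As the table shows, the generated name follows the pattern `<app>-<environment>-<cluster IP with dots as dashes>.kipper.run`. A sketch of that rule:

```python
def auto_domain(app: str, env: str, cluster_ip: str) -> str:
    # e.g. webapp + test + 203.0.113.10
    #   -> webapp-test-203-0-113-10.kipper.run
    return f"{app}-{env}-{cluster_ip.replace('.', '-')}.kipper.run"
```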

Managing apps

List all apps

bash
kip app list --project staging
  NAME                 STATUS     IMAGE                             READY
  api                  running    ghcr.io/acme/api:latest           2/2
  frontend             running    ghcr.io/acme/frontend:latest      1/1

Stream logs

bash
kip app logs api

Streams logs from the first running pod with 100 lines of history. Press Ctrl+C to stop.

Update an app image

bash
kip app update api --image ghcr.io/acme/api:v2.1.0

Changes the container image and triggers a rolling update. Use this when you have a new version of your application to deploy.

For apps within a project environment:

bash
kip app update api --image ghcr.io/acme/api:v2.1.0 --project yourr-name --environment test

Rollback history

Skipper keeps the three most recent revisions of each deployment: the current one plus the two before it. If a new version fails to start, Kubernetes can roll back to either retained revision; older revisions are cleaned up automatically to save resources.

Restart an app

bash
kip app restart api

Triggers a rolling restart. Pods are replaced one at a time with zero downtime. Useful when you need to pick up new environment variables or pull a fresh :latest image.

Delete an app

bash
kip app delete api

Removes the Deployment, Service, Ingress, and all associated Secrets.

Browsing files

See Browsing Files for uploading, downloading, and editing files inside running containers.

AI diagnostics

See Observability for AI-powered log analysis and diagnostics.

Instance ID header

When you're running multiple replicas of an app, it's useful to know which pod handled a particular request. Skipper can add an X-Instance-ID response header to every HTTP response, identifying the pod that served it.

This is enabled by default for all apps with a route. You can toggle it off in the app's Settings tab under "Instance ID header".

How it works

Skipper injects a lightweight reverse proxy sidecar container into each pod. The sidecar sits in front of your app and adds the header transparently. Your app doesn't need any code changes.

The request flow looks like this:

Client → Traefik → Service:8080 → Sidecar(:18080) → Your app(:8080)

The sidecar listens on an offset port (your app's port + 10000). The Kubernetes Service routes traffic to the sidecar via targetPort, and the sidecar forwards it to your app on localhost. Your app keeps listening on its original port and never knows the sidecar is there.

The header value is a short hash of the pod name (8 hex characters). It doesn't reveal the full pod name or any infrastructure details. For example:

X-Instance-ID: f1582f7c
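The two rules above can be sketched in a few lines. The offset-port rule (app port + 10000) is as described; the exact hash algorithm is not documented, so the sketch assumes SHA-256 truncated to 8 hex characters:

```python
import hashlib

SIDECAR_PORT_OFFSET = 10000  # sidecar listens on app port + 10000

def sidecar_port(app_port: int) -> int:
    return app_port + SIDECAR_PORT_OFFSET

def instance_id(pod_name: str) -> str:
    # Assumption: any stable hash works here; Skipper's actual
    # algorithm is not specified, only the 8-hex-char output format.
    return hashlib.sha256(pod_name.encode()).hexdigest()[:8]
```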

You can match this ID to a specific pod in the live logs viewer. The live logs tab lets you pick individual pods, so once you see which instance handled a failing request, you can jump straight to that pod's logs.

When to disable it

Most apps should leave this on. You might turn it off if:

  • Your app already adds its own instance tracking header
  • You want to avoid the ~5MB memory overhead of the sidecar container per pod
  • Your security policy doesn't allow extra response headers

Released under the Apache 2.0 License.