Team Access
Skipper is designed for teams. The person who installs the cluster (the admin) can share access with developers, contractors, or anyone who needs to work with the cluster, without giving them SSH access to the server.
How it works
When you run kip install, Skipper saves cluster credentials to ~/.kip/ on your machine. To give someone else access, you export those credentials to a file, share it with them, and they import it. From that point on, they can use kip commands against the cluster just like you can.
No SSH keys are shared. No server passwords are exchanged. The exported file contains only the credentials needed to talk to the cluster API.
Setting up a team member
Step 1: Export cluster credentials (admin)
The admin runs:
```
kip cluster export > acme-production.kip
```

This creates a file called acme-production.kip containing the cluster connection details and credentials.
Step 2: Share the file
Send the .kip file to your team member however you normally share files: Slack, email, a shared drive. The file is sensitive (it grants cluster access), so use a secure channel when possible.
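Because the .kip file grants cluster access, encrypting it before sending adds a layer of protection when a fully trusted channel is not available. A minimal sketch, assuming gpg is installed; the dummy file contents and passphrase are illustrative:

```shell
# Stand-in for the real exported file, so the example is self-contained.
printf 'dummy credentials\n' > acme-production.kip

# Encrypt with a symmetric passphrase. In practice, run plain
# `gpg -c acme-production.kip` and type the passphrase when prompted;
# the --batch flags here only keep the example non-interactive.
gpg --batch --yes --pinentry-mode loopback --passphrase 'example-passphrase' \
    --symmetric --cipher-algo AES256 acme-production.kip

# Share acme-production.kip.gpg and send the passphrase over a separate
# channel. The recipient decrypts with: gpg acme-production.kip.gpg
ls acme-production.kip.gpg
```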
Step 3: Import and connect (team member)
The team member installs the kip CLI, then imports the file:
```
kip cluster add acme-production.kip --set-current
```

The --set-current flag makes this the active cluster immediately. Without it, the cluster is saved but not selected.
They can verify it worked:
```
kip status

Cluster: acme.kipper.run
Host:    203.0.113.10

Nodes:
  ✔ ubuntu-8gb-nbg1-7   master   Ready   v1.34.5+k3s1

Components:
  ✔ k3s            1 node(s)
  ✔ Traefik        1/1 replicas available
  ✔ cert-manager   1/1 replicas available
  ✔ Longhorn       1/1 replicas available
  ✔ Dex            1/1 replicas available
  ✔ Console API    1/1 replicas available
  ✔ Console        1/1 replicas available
```

That's it. The team member can now deploy apps, check logs, manage services, and use every kip command.
Managing multiple clusters
If you manage multiple servers (your own product, a client project, a separate cluster for a different region), each gets its own cluster entry. Import as many as you need and switch between them.
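The commands described in this guide compose into a simple multi-cluster workflow. A sketch, with illustrative file and cluster names:

```
# Import two separately exported clusters
kip cluster add my-startup.kip --set-current
kip cluster add client-project.kip

# Work against my-startup (the current cluster), then switch
kip status
kip cluster use client-project
kip status
```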
Clusters vs environments
You do not need a separate cluster for each environment. A single cluster can have test, acc, and prod environments using project environments. Use kip app promote to move code between them. Multiple clusters are for genuinely separate infrastructure: different servers, different customers, different regions.
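As a sketch of that promotion flow — note the app name is illustrative and the exact flags are assumptions, so check `kip app promote --help` for the real ones:

```
# Promote a tested build up through the environments of one cluster
kip app promote api --from test --to acc    # flags assumed for illustration
kip app promote api --from acc --to prod
```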
List all clusters
```
kip cluster list

→ my-startup (my-startup.kipper.run)
    Host: 203.0.113.10
    Provider: baremetal

  client-project (client-project.kipper.run)
    Host: 203.0.113.20
    Provider: baremetal
```

Switch clusters
```
kip cluster use client-project

✔ Switched to client-project (client-project.kipper.run)
```

All subsequent kip commands operate against the selected cluster.
Remove a cluster
When you no longer need access to a cluster:
```
kip cluster remove client-project
```

This only removes the local credentials. It does not affect the cluster itself or anyone else's access.
Connecting to databases
Services like PostgreSQL, MySQL, and Redis run inside the cluster and are not exposed to the internet. To connect with a desktop database client (DBeaver, TablePlus, pgAdmin, or any other tool), use kip tunnel to create a secure connection from your machine to the service.
Open a tunnel
```
kip tunnel mydb

✔ Tunnel open: localhost:5432 → mydb (postgres)
  Press Ctrl+C to close
```

The tunnel maps the service's port to the same port on your local machine. PostgreSQL listens on 5432, Redis on 6379, MySQL on 3306, and so on.
Now open your database client and connect to:
- Host: localhost
- Port: 5432
- Username: skipper
- Password: (from kip service info mydb)
- Database: app
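With the tunnel open in another terminal, a command-line client works the same way as a desktop one. This assumes psql is installed locally; replace the placeholder with the password from kip service info mydb:

```
psql "postgresql://skipper:<password>@localhost:5432/app"
```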
Use a custom local port
If port 5432 is already in use on your machine (perhaps you have a local PostgreSQL running), pick a different port:
```
kip tunnel mydb --local-port 15432

✔ Tunnel open: localhost:15432 → mydb (postgres)
  Press Ctrl+C to close
```

Connect your database client to localhost:15432 instead.
Tunnel to Redis
```
kip tunnel cache

✔ Tunnel open: localhost:6379 → cache (redis)
  Press Ctrl+C to close
```

Use any Redis client (RedisInsight, redis-cli, or your application's Redis library) and point it at localhost:6379.
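For a quick end-to-end check of the tunnel from the command line, assuming redis-cli is installed locally:

```
redis-cli -h localhost -p 6379 ping    # replies PONG while the tunnel is open
```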
Tunnel to services in a specific environment
If your services are deployed to a project environment, specify it:
```
kip tunnel db --project your-name --environment staging
```

Shell and terminal access
For debugging directly inside containers, see Web Terminal. You can also use kip exec from the CLI:
```
kip exec api --project myapp
```

Quick reference
| Task | Command |
|---|---|
| Export cluster credentials | kip cluster export > file.kip |
| Import cluster credentials | kip cluster add file.kip --set-current |
| List clusters | kip cluster list |
| Switch cluster | kip cluster use <name> |
| Remove local cluster config | kip cluster remove <name> |
| Tunnel to a service | kip tunnel <service> |
| Tunnel with custom port | kip tunnel <service> --local-port <port> |
| Open a shell | kip exec <app> |
| Run a command in a pod | kip exec <app> -- <command> |
| Open a web terminal | Console → App → Connect tab |
