K3d is an open-source wrapper around the Rancher/SUSE K3s Kubernetes distribution that runs the entire cluster, control plane included, inside Docker. The result is a fully containerized Kubernetes environment that’s lightweight and easy to set up.
Whereas K3s is designed for a broad range of workflows, K3d focuses more specifically on development situations where you’re already using Docker. It lets you spin up a Kubernetes cluster on your existing Docker host without running a virtual machine or any other system services.
This article will show you how to get up and running with a simple K3d cluster. You’ll need both Kubectl and Docker v20.10.5 or newer already installed on your system before you begin. K3d works on Linux, Mac (including via Homebrew), and Windows (via Chocolatey). This guide focuses on use with Linux; k3d CLI installation instructions for other platforms are available in the documentation.
Installing the K3d CLI
The k3d CLI provides management commands for creating and managing your clusters. You can find the latest CLI on GitHub or run the installation script to automatically get the correct download for your system.
$ curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
The script deposits the k3d binary into your /usr/local/bin directory. Try running the k3d version command to check that your installation succeeded:
$ k3d version
k3d version v5.4.6
k3s version v1.24.4-k3s1 (default)
Creating a Cluster
The K3d CLI provides a cluster create command to automatically create and start a new cluster:
$ k3d cluster create
INFO Prep: Network
INFO Created network 'k3d-k3s-default'
INFO Created image volume k3d-k3s-default-images
INFO Starting new tools node...
INFO Creating node 'k3d-k3s-default-server-0'
INFO Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.6'
INFO Pulling image 'docker.io/rancher/k3s:v1.24.4-k3s1'
INFO Starting Node 'k3d-k3s-default-tools'
INFO Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.6'
INFO Using the k3d-tools node to gather environment information
INFO HostIP: using network gateway 172.25.0.1 address
INFO Starting cluster 'k3s-default'
INFO Starting servers...
INFO Starting Node 'k3d-k3s-default-server-0'
INFO All agents already running.
INFO Starting helpers...
INFO Starting Node 'k3d-k3s-default-serverlb'
INFO Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO Cluster 'k3s-default' created successfully!
INFO You can now use it like this:
kubectl cluster-info
The cluster will be named k3s-default when you run the command without any arguments. Set your own name by including it as the command’s first argument:
$ k3d cluster create demo ...
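K3d can also provision multi-node clusters at creation time. As a brief sketch (the node counts here are illustrative), the --servers and --agents flags control how many server and agent containers are started:

```shell
# Create a cluster named "demo" with one server (control plane) node
# and two agent (worker) nodes; each node runs as its own Docker container.
k3d cluster create demo --servers 1 --agents 2

# The extra nodes appear as ordinary Kubernetes Nodes.
kubectl get nodes
```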
K3d automatically modifies your Kubernetes config file (.kube/config) to include a connection to your new cluster. It marks the connection as the default, so kubectl commands will now target your K3d environment.
$ kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:42879
CoreDNS is running at https://0.0.0.0:42879/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:42879/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
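If you work with several clusters, you can inspect and switch kubectl contexts yourself. K3d names its contexts following the k3d-&lt;cluster-name&gt; pattern, so the default cluster’s context is k3d-k3s-default:

```shell
# Show which context kubectl is currently targeting.
kubectl config current-context

# Explicitly switch back to the K3d cluster if you've been
# working against a different cluster in the meantime.
kubectl config use-context k3d-k3s-default
```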
Running docker ps will show that two containers have been started: one for K3s, and another for K3d’s proxy that forwards traffic into your cluster:
$ docker ps
CONTAINER ID   IMAGE                            COMMAND                    CREATED         STATUS         PORTS                             NAMES
9b6b610ad312   ghcr.io/k3d-io/k3d-proxy:5.4.6   "/bin/sh -c nginx-pr..."   3 minutes ago   Up 3 minutes   80/tcp, 0.0.0.0:42879->6443/tcp   k3d-k3s-default-serverlb
842cc90b78bf   rancher/k3s:v1.24.4-k3s1         "/bin/k3s server --t..."   3 minutes ago   Up 3 minutes                                     k3d-k3s-default-server-0
Using Your Cluster
Use familiar Kubectl commands to interact with your cluster and deploy your Pods:
$ kubectl run nginx --image nginx:latest
pod/nginx created
$ kubectl expose pod/nginx --port 80 --type NodePort
service/nginx exposed
To access your NGINX server, first find the IP address assigned to your Kubernetes Node:
$ kubectl get nodes -o wide
NAME                       STATUS   ROLES                  AGE    VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION      CONTAINER-RUNTIME
k3d-k3s-default-server-0   Ready    control-plane,master   102s   v1.24.4+k3s1   172.27.0.2    <none>        K3s dev    5.4.0-125-generic   containerd://1.6.6-k3s1
The correct IP to use is 172.27.0.2, shown in the node’s INTERNAL-IP column. Next find the NodePort assigned to your nginx service:
$ kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1       <none>        443/TCP        5m49s
nginx        NodePort    10.43.235.233   <none>        80:31214/TCP   1s
The exposed port number is 31214. Making a request to 172.27.0.2:31214 should return the default NGINX welcome page:
$ curl http://172.27.0.2:31214
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Enabling K3s Flags
The cluster create command wraps the standard K3s cluster creation process. You can pass arguments through to K3s by supplying --k3s-arg flags. The value of each flag should be an argument that will be included when K3d calls the K3s binary:
$ k3d cluster create --k3s-arg "--disable=traefik"
This example instructs K3s to disable its built-in Traefik component.
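K3d v5 also lets you scope a K3s argument to particular nodes by appending a node filter. A brief sketch, repeating the Traefik example:

```shell
# "@server:0" is a k3d node filter: the --disable=traefik argument
# is applied only to the first server node, which is where K3s
# component flags like this one take effect.
k3d cluster create demo --k3s-arg "--disable=traefik@server:0"
```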
Accessing Services Running on Your Host
Some workloads you run in K3d might need to access services already running on your Docker host. K3d provides a hostname called host.k3d.internal within its default DNS configuration. This will automatically resolve to your host machine. You can reference this special hostname within your Pods to access existing databases, file shares, and other APIs running outside of Kubernetes.
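As a quick check, you can call back to the host from a temporary Pod. This sketch assumes a hypothetical service is listening on port 8080 on your host machine:

```shell
# Launch a throwaway curl Pod that requests a (hypothetical) service
# on the Docker host; host.k3d.internal resolves to the host machine
# from inside the cluster. The Pod is removed when the command exits.
kubectl run host-test --rm -it --restart Never \
  --image curlimages/curl -- http://host.k3d.internal:8080
```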
Using Local Docker Images
Your K3d/K3s cluster can’t access your local Docker images because the cluster and all its components run inside Docker. Trying to use a private image that only exists on the host will fail and report an error.
There are two ways of resolving this: either push your image to a registry, or use K3d’s image import feature to copy a local image into your cluster. The first method is generally preferred as it centralizes your image storage and lets you access images from any environment. However, when quickly testing local changes you might want to directly import an image you’ve just built:
$ k3d image import demo-image:latest
This command will make demo-image:latest available inside your cluster.
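The import command also accepts a target cluster. A minimal sketch using the -c/--cluster flag, assuming a cluster named demo:

```shell
# Import a locally built image into the "demo" cluster instead of
# the default k3s-default cluster.
k3d image import demo-image:latest -c demo
```

In your Pod specs, set imagePullPolicy: IfNotPresent when referencing an imported :latest image; the default pull policy for :latest tags is Always, which would make Kubernetes try (and fail) to fetch the image from a remote registry.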
K3d can also create and expose an image registry for you. Registries are best created alongside your cluster as K3d can then automatically configure the cluster’s access:
$ k3d cluster create --registry-create demo-registry
This starts a new cluster with a registry called demo-registry. The registry will run in its own Docker container. You can discover the port number that the registry is exposed on by running docker ps -f name=&lt;cluster-name&gt;-registry, where &lt;cluster-name&gt; is the name of your cluster. Pushing images to this registry will make them accessible to Pods in your cluster.
$ docker tag demo-image:latest k3d-demo-registry.localhost:12345/demo-image:latest
$ docker push k3d-demo-registry.localhost:12345/demo-image:latest
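Once pushed, Pods can reference the image through the same registry address. A sketch; replace 12345 with the port that docker ps reports for your registry container:

```shell
# Run a Pod from the image stored in the K3d-managed registry.
# The cluster is preconfigured to resolve the registry's hostname.
kubectl run demo --image k3d-demo-registry.localhost:12345/demo-image:latest
```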
Stopping Your Cluster
Your K3d cluster will run continually until you stop it yourself. The cluster stop command stops running Docker containers while preserving your cluster’s data:
$ k3d cluster stop k3s-default
Restart your cluster in the future using the cluster start command:
$ k3d cluster start k3s-default
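To see which clusters exist on your machine and whether they’re currently running, use the cluster list command:

```shell
# List all K3d clusters, their server/agent counts, and
# whether their containers are currently running.
k3d cluster list
```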
Deleting Your Cluster
You can delete a cluster at any time by running the cluster delete command and supplying its name. This will remove all trace of the cluster, deleting the Docker containers and volumes that provided it. Deleting all your clusters will take your host back to a clean slate with only the K3d CLI installed.
$ k3d cluster delete k3s-default
INFO Deleting cluster 'k3s-default'
INFO Deleting cluster network 'k3d-k3s-default'
INFO Deleting 2 attached volumes...
INFO Removing cluster details from default kubeconfig...
INFO Removing standalone kubeconfig file (if there is one)...
INFO Successfully deleted cluster k3s-default!
The deletion process automatically removes references to the cluster from your Kubeconfig.
K3d lets you run a containerized Kubernetes cluster. It provides a complete K3s environment wherever Docker is available. K3d supports multiple nodes, has integrated support for image registries, and can be used to create highly available clusters with multiple control planes.
Developers already running Docker can use K3d to quickly add Kubernetes to their working environment. K3d is lightweight, easy to manage, and adds no other system services to your machine. This makes it a great choice for local use but its reliance on Docker means it may not be suitable for production hosts where you don’t want to add another dependency. Other Kubernetes distributions such as Minikube, Microk8s, and plain K3s are all viable alternatives.