Setting Up a Kubernetes Cluster with Kind
Complete guide for creating and managing local Kubernetes clusters using Kind (Kubernetes in Docker)
This guide provides detailed instructions for setting up and managing local Kubernetes clusters with Kind (Kubernetes in Docker), making it well suited to development and testing environments.
What is Kind?
Kind (Kubernetes in Docker) is a tool for running local Kubernetes clusters using Docker container “nodes”. It was primarily designed for testing Kubernetes itself but is perfect for local development and CI.
Prerequisites
- Docker installed and running
- kubectl CLI tool
- Go (optional, for building from source)
- Linux, macOS, or Windows with WSL2
Installation
macOS
# Using Homebrew
brew install kind
# Using Binary
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-darwin-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
Linux
# Using Binary
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
Windows (PowerShell)
curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.20.0/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe
Basic Cluster Operations
1. Create a Basic Cluster
kind create cluster --name my-cluster
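kind names the kubectl context kind-<cluster-name> and switches to it automatically, so you can verify the cluster immediately:
kubectl cluster-info --context kind-my-cluster
kubectl get nodes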
2. Create Multi-Node Cluster
# multi-node-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
kind create cluster --name multi-node --config multi-node-config.yaml
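Once it is up, all four nodes should report Ready (kind names them <cluster>-control-plane, <cluster>-worker, <cluster>-worker2, and so on):
kubectl get nodes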
3. Advanced Cluster Configuration
# advanced-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "127.0.0.1"
  apiServerPort: 6443
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
  extraMounts:
  - hostPath: /path/to/host/dir
    containerPath: /path/in/container
- role: worker
  extraMounts:
  - hostPath: /path/to/host/dir
    containerPath: /path/in/container
kind create cluster --name advanced --config advanced-config.yaml
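The extraPortMappings above publish ports 80 and 443 of the control-plane node container on the host, which the ingress setup later in this guide relies on. You can confirm the mappings on the node container, which kind names <cluster-name>-control-plane:
docker port advanced-control-plane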
Cluster Management
1. List Clusters
kind get clusters
2. Delete Cluster
kind delete cluster --name my-cluster
3. Load Docker Images
# Build your image
docker build -t my-app:latest .
# Load image into Kind cluster
kind load docker-image my-app:latest --name my-cluster
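To confirm the image landed on the nodes, list the images known to a node's container runtime:
docker exec -it my-cluster-control-plane crictl images | grep my-app
Keep in mind that images tagged :latest default to imagePullPolicy: Always, so Kubernetes will try to pull from a registry instead of using the loaded copy; set imagePullPolicy: IfNotPresent (or use a versioned tag) in your pod specs.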
Setting Up Ingress
1. Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
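Wait for the controller pod to become ready before creating Ingress resources:
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s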
2. Configure Ingress
# ingress-example.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
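Apply the manifest and exercise the route. This assumes an example-service already exists in the cluster and that ports 80/443 were mapped as in the advanced configuration earlier; curl --resolve pins example.local to localhost without editing /etc/hosts:
kubectl apply -f ingress-example.yaml
curl --resolve example.local:80:127.0.0.1 http://example.local/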
Storage Configuration
1. Local Path Provisioner
# local-path-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
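Note that kind clusters already ship with a default StorageClass named standard backed by the local-path provisioner, so this manifest is mainly useful as a template for tweaking its settings. Check what is installed first:
kubectl get storageclass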
2. PersistentVolumeClaim Example
# pvc-example.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
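Because of WaitForFirstConsumer binding, the claim stays Pending until a pod actually references it. A minimal consumer sketch (pod name and mount path are illustrative):
# pvc-consumer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pvc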
Networking Features
1. MetalLB Setup
# metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.1-172.18.255.250
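This config is applied after installing MetalLB itself (see the MetalLB docs). The address pool must sit inside the subnet of the Docker network kind creates (typically 172.18.0.0/16); you can confirm it with:
docker network inspect kind -f '{{(index .IPAM.Config 0).Subnet}}'
Also note that the ConfigMap format only applies to MetalLB releases before v0.13; newer releases are configured through CRDs instead. A rough equivalent sketch for the CRD-based setup:
# metallb-pool.yaml (MetalLB >= v0.13)
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.1-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system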
2. Custom CNI Configuration
# custom-cni-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  podSubnet: "10.244.0.0/16"
nodes:
- role: control-plane
- role: worker
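With the default CNI disabled, nodes stay NotReady until a CNI plugin is installed. A sketch using Flannel, whose default pod CIDR matches the 10.244.0.0/16 above (verify the current manifest URL in the Flannel docs):
kind create cluster --name custom-cni --config custom-cni-config.yaml
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl get nodes -w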
Development Workflow
1. Hot Reload Setup
# skaffold.yaml
apiVersion: skaffold/v2beta26
kind: Config
build:
  artifacts:
  - image: my-app
    context: .
    docker:
      dockerfile: Dockerfile
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
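With this configuration, a single command gives you hot reload: skaffold dev watches the build context, rebuilds the image on changes, and redeploys the manifests. Skaffold detects kind clusters and loads images into the nodes directly rather than pushing to a registry:
skaffold dev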
2. Debug Configuration
# debug-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
spec:
  containers:
  - name: debug
    image: ubuntu
    command: ["sleep", "infinity"]
Monitoring Setup
1. Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
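On kind, metrics-server typically fails to scrape nodes because the kubelets serve self-signed certificates. A common workaround for local clusters (not for production) is adding the --kubelet-insecure-tls flag:
kubectl patch -n kube-system deployment metrics-server --type=json \
  -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'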
2. Prometheus & Grafana
# monitoring-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: ./prometheus-data
    containerPath: /prometheus
  - hostPath: ./grafana-data
    containerPath: /grafana
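With the cluster running, one common way to install both tools is the prometheus-community Helm chart; the release name monitoring below is arbitrary:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack
kubectl port-forward svc/monitoring-grafana 3000:80
# Grafana is now on http://localhost:3000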
CI/CD Integration
1. GitHub Actions Example
# .github/workflows/kind-test.yml
name: Kind Test
on: [push]
jobs:
  kind:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: engineerd/setup-kind@v0.5.0
      with:
        version: "v0.20.0"
    - name: Test
      run: |
        kubectl cluster-info
        kubectl get pods -A
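A commonly used alternative for the cluster-setup step is helm/kind-action; swapping it into the workflow looks roughly like this (step fragment, cluster name is illustrative):
    - uses: helm/kind-action@v1
      with:
        cluster_name: ci-cluster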
2. Jenkins Pipeline
// Jenkinsfile
// Note: kindest/node is kind's cluster *node* image and does not include the
// kind CLI; run this pipeline on an agent with Docker, kind, and kubectl installed.
pipeline {
    agent any
    stages {
        stage('Setup Kind') {
            steps {
                sh '''
                    kind create cluster
                    kubectl cluster-info
                '''
            }
        }
    }
}
Best Practices
- Resource Management
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        eviction-hard: 'memory.available<5%'
        system-reserved: 'memory=1Gi'
- Security Configuration
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        audit-log-path: /var/log/audit.log
        audit-policy-file: /etc/kubernetes/audit-policy.yaml
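Note that audit-policy-file points to a path inside the control-plane node container; the policy file must be mounted there and exposed to the API server pod, or kube-apiserver will fail to start. A sketch of the missing pieces, assuming an audit-policy.yaml sits next to the cluster config:
nodes:
- role: control-plane
  extraMounts:
  - hostPath: ./audit-policy.yaml
    containerPath: /etc/kubernetes/audit-policy.yaml
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraVolumes:
      - name: audit-policy
        hostPath: /etc/kubernetes/audit-policy.yaml
        mountPath: /etc/kubernetes/audit-policy.yaml
        readOnly: true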
Troubleshooting
Common Issues and Solutions
- Cluster Creation Fails
# Check Docker resources
docker system info
# Clean up old clusters
kind delete clusters --all
# Check system resources
free -h
df -h
- Network Issues
# Check cluster networking
kubectl get nodes -o wide
kubectl get pods -A -o wide
# Debug DNS
kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 --command -- sleep infinity
kubectl exec -it dnsutils -- nslookup kubernetes.default
- Resource Constraints
# Check node capacity
kubectl describe node
# Check pod resources
kubectl top pods -A