Managing Google Kubernetes Engine (GKE) with Terraform
Learn how to provision and manage Google Kubernetes Engine clusters using Terraform
In this guide, we’ll provision a VPC-native GKE cluster on Google Cloud with Terraform, covering networking, node pools, outputs, and day-to-day operations.
Prerequisites
- Google Cloud SDK installed and configured
- Terraform installed (version 1.0.0 or later)
- Basic understanding of Kubernetes concepts
- A GCP project with billing enabled
Project Structure
.
├── main.tf            # Main Terraform configuration file
├── variables.tf       # Variable definitions
├── outputs.tf         # Output definitions
├── terraform.tfvars   # Variable values
└── modules/
    └── gke/
        ├── main.tf      # GKE-specific configurations
        ├── variables.tf # Module variables
        ├── clusters.tf  # Cluster configurations
        └── outputs.tf   # Module outputs
Provider Configuration
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
}
Variables
variable "project_id" {
  description = "The ID of the GCP project"
  type        = string
}

variable "region" {
  description = "The region to deploy resources to"
  type        = string
  default     = "us-central1"
}

variable "cluster_name" {
  description = "Name of the GKE cluster"
  type        = string
}

variable "node_count" {
  description = "Number of nodes in the cluster"
  type        = number
  default     = 3
}

variable "machine_type" {
  description = "Machine type for the nodes"
  type        = string
  default     = "e2-standard-2"
}
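With the variables declared, terraform.tfvars supplies concrete values. The project ID and cluster name below are hypothetical placeholders; substitute your own:

```hcl
# terraform.tfvars — example values only
project_id   = "my-gcp-project"   # placeholder: your GCP project ID
cluster_name = "demo-gke-cluster" # placeholder: your cluster name
region       = "us-central1"
node_count   = 3
machine_type = "e2-standard-2"
```

Variables with defaults (region, node_count, machine_type) can be omitted here and will fall back to the values declared above.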
Network Configuration
resource "google_compute_network" "vpc" {
  name                    = "gke-network"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnet" {
  name          = "gke-subnet"
  ip_cidr_range = "10.0.0.0/24"
  network       = google_compute_network.vpc.id
  region        = var.region

  secondary_ip_range {
    range_name    = "services-range"
    ip_cidr_range = "192.168.1.0/24"
  }

  secondary_ip_range {
    range_name    = "pod-ranges"
    ip_cidr_range = "192.168.64.0/22"
  }
}
GKE Cluster
resource "google_container_cluster" "primary" {
  name     = var.cluster_name
  location = var.region

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.subnet.name

  ip_allocation_policy {
    cluster_secondary_range_name  = "pod-ranges"
    services_secondary_range_name = "services-range"
  }

  # Warning: 0.0.0.0/0 exposes the control plane to the entire internet.
  # This is acceptable for a demo; restrict it to trusted CIDR ranges in production.
  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "0.0.0.0/0"
      display_name = "All"
    }
  }
}
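Instead of opening the control plane to 0.0.0.0/0, you can make the cluster private. A minimal sketch, assuming the same VPC and subnet as above; the master CIDR and authorized range are example values:

```hcl
resource "google_container_cluster" "primary" {
  # ...same name, location, network, and IP allocation settings as above...

  private_cluster_config {
    enable_private_nodes    = true             # nodes get internal IPs only
    enable_private_endpoint = false            # keep a public control-plane endpoint
    master_ipv4_cidr_block  = "172.16.0.0/28"  # example /28 reserved for the control plane
  }

  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "10.0.0.0/24"  # example: only the GKE subnet may reach the API server
      display_name = "internal"
    }
  }
}
```

With enable_private_nodes set, outbound internet access from nodes requires Cloud NAT or a similar egress path.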
resource "google_container_node_pool" "primary_nodes" {
  name     = "${google_container_cluster.primary.name}-node-pool"
  location = var.region
  cluster  = google_container_cluster.primary.name

  # For a regional cluster, node_count is the number of nodes per zone,
  # so a regional cluster in three zones runs 3 * var.node_count nodes.
  node_count = var.node_count

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/service.management.readonly",
      "https://www.googleapis.com/auth/servicecontrol",
      "https://www.googleapis.com/auth/trace.append",
    ]

    labels = {
      env = "production"
    }

    machine_type = var.machine_type
    tags         = ["gke-node"]
    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
}
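The fixed node_count can be swapped for cluster autoscaling, and preemptible nodes cut costs for fault-tolerant workloads. A sketch of the relevant blocks; the min/max bounds are example values:

```hcl
resource "google_container_node_pool" "primary_nodes" {
  # ...same name, location, and cluster as above; drop node_count when autoscaling...

  # Per-zone autoscaling bounds replace the fixed node_count.
  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }

  # Keep nodes healthy and on supported GKE versions automatically.
  management {
    auto_repair  = true
    auto_upgrade = true
  }

  node_config {
    preemptible  = true  # cheaper, but nodes can be reclaimed at any time
    machine_type = var.machine_type
  }
}
```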
Outputs
output "kubernetes_cluster_name" {
  value       = google_container_cluster.primary.name
  description = "GKE Cluster Name"
}

output "kubernetes_cluster_host" {
  value       = google_container_cluster.primary.endpoint
  description = "GKE Cluster Host"
}

output "region" {
  value       = var.region
  description = "GCP region (used when fetching cluster credentials)"
}
Best Practices
- Security:
  - Enable Workload Identity
  - Use Binary Authorization
  - Implement Network Policies
- Networking:
  - Use VPC-native clusters
  - Configure private clusters
  - Implement proper firewall rules
- Cost Optimization:
  - Use preemptible nodes when possible
  - Implement autoscaling
  - Right-size node pools
- Maintenance:
  - Enable auto-upgrades
  - Configure maintenance windows
  - Use node auto-repair
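The Workload Identity recommendation above is enabled in two places: a pool on the cluster and a metadata mode on the node pool. A minimal sketch, assuming the cluster and node pool resources defined earlier:

```hcl
# In the google_container_cluster "primary" resource:
workload_identity_config {
  workload_pool = "${var.project_id}.svc.id.goog"
}

# In the node pool's node_config block:
workload_metadata_config {
  mode = "GKE_METADATA"
}
```

Binding a Kubernetes service account to a Google service account is then done with an IAM policy binding for the roles/iam.workloadIdentityUser role.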
Common Operations
Creating the Cluster
terraform init
terraform plan
terraform apply
Getting Cluster Credentials
gcloud container clusters get-credentials $(terraform output -raw kubernetes_cluster_name) --region $(terraform output -raw region)
Destroying the Cluster
terraform destroy
Best Practices and Tips
- Cluster Management:
  - Use multiple node pools
  - Implement proper monitoring
  - Perform regular security audits
- Security:
  - Use Workload Identity
  - Enable network policies
  - Apply security updates regularly
- Performance:
  - Configure autoscaling
  - Monitor resource usage
  - Use appropriate machine types
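The maintenance-window recommendation maps to a maintenance_policy block on the cluster resource. A sketch; the start time is an example value:

```hcl
# In the google_container_cluster "primary" resource:
# a daily maintenance window starting at 03:00 UTC.
maintenance_policy {
  daily_maintenance_window {
    start_time = "03:00"
  }
}
```

Scheduling maintenance during off-peak hours limits the impact of control-plane and node upgrades on running workloads.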
Conclusion
You’ve learned how to set up and manage Google Kubernetes Engine using Terraform. This setup provides:
- Automated cluster deployment
- Secure and scalable infrastructure
- Best practices implementation
- Easy cluster management and maintenance