Setting up AWS EFS with Terraform
A detailed guide to deploying Amazon Elastic File System (EFS) using Terraform Infrastructure as Code
Amazon Elastic File System (EFS) provides scalable file storage for use with Amazon EC2 instances. This guide shows how to set up EFS using Terraform.
Prerequisites
- AWS CLI configured
- Terraform installed
- VPC and subnets already configured
- Basic understanding of network file systems
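You can quickly verify the first two prerequisites from a shell; both commands are standard in the AWS CLI and Terraform distributions:

aws sts get-caller-identity   # confirms the AWS CLI has working credentials
terraform version             # confirms Terraform is installed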
Project Structure
aws-efs-terraform/
├── main.tf
├── variables.tf
├── outputs.tf
└── terraform.tfvars
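The terraform.tfvars file supplies values for the variables defined below. A minimal sketch with placeholder IDs; substitute your own VPC, subnet, and security group IDs:

# terraform.tfvars (placeholder values)
project_name               = "myapp"
vpc_id                     = "vpc-0123456789abcdef0"
subnet_ids                 = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]
allowed_security_group_ids = ["sg-0123456789abcdef0"]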
EFS Configuration
# main.tf
provider "aws" {
  region = var.aws_region
}

# EFS File System
resource "aws_efs_file_system" "main" {
  creation_token = "${var.project_name}-efs"
  encrypted      = true

  lifecycle_policy {
    transition_to_ia = "AFTER_30_DAYS"
  }

  tags = {
    Name = "${var.project_name}-efs"
  }
}

# Mount Targets
resource "aws_efs_mount_target" "main" {
  count           = length(var.subnet_ids)
  file_system_id  = aws_efs_file_system.main.id
  subnet_id       = var.subnet_ids[count.index]
  security_groups = [aws_security_group.efs.id]
}

# Security Group
resource "aws_security_group" "efs" {
  name        = "${var.project_name}-efs-sg"
  description = "Allow EFS inbound traffic"
  vpc_id      = var.vpc_id

  ingress {
    description     = "NFS from VPC"
    from_port       = 2049
    to_port         = 2049
    protocol        = "tcp"
    security_groups = var.allowed_security_group_ids
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "${var.project_name}-efs-sg"
  }
}

# Backup Policy (Optional)
resource "aws_efs_backup_policy" "policy" {
  file_system_id = aws_efs_file_system.main.id

  backup_policy {
    status = "ENABLED"
  }
}

# Access Point (Optional)
resource "aws_efs_access_point" "test" {
  file_system_id = aws_efs_file_system.main.id

  posix_user {
    gid = 1000
    uid = 1000
  }

  root_directory {
    path = "/data"

    creation_info {
      owner_gid   = 1000
      owner_uid   = 1000
      permissions = "755"
    }
  }

  tags = {
    Name = "${var.project_name}-access-point"
  }
}
Variables Configuration
# variables.tf
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-west-2"
}

variable "project_name" {
  description = "Project name"
  type        = string
}

variable "vpc_id" {
  description = "VPC ID"
  type        = string
}

variable "subnet_ids" {
  description = "Subnet IDs for mount targets"
  type        = list(string)
}

variable "allowed_security_group_ids" {
  description = "Security group IDs allowed to access EFS"
  type        = list(string)
}
Outputs
# outputs.tf
output "efs_id" {
  description = "EFS File System ID"
  value       = aws_efs_file_system.main.id
}

output "efs_dns_name" {
  description = "EFS DNS name"
  value       = aws_efs_file_system.main.dns_name
}

output "mount_target_ids" {
  description = "Mount target IDs"
  value       = aws_efs_mount_target.main[*].id
}

output "access_point_id" {
  description = "EFS Access Point ID"
  value       = aws_efs_access_point.test.id
}
Mounting EFS on EC2 Instances
Here’s an example user data script for EC2 instances:
resource "aws_instance" "example" {
# ... other configuration ...
user_data = <<-EOF
#!/bin/bash
yum install -y amazon-efs-utils
mkdir -p /mnt/efs
mount -t efs ${aws_efs_file_system.main.id}:/ /mnt/efs
echo "${aws_efs_file_system.main.id}:/ /mnt/efs efs defaults,_netdev 0 0" >> /etc/fstab
EOF
}
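If you created the optional access point, the amazon-efs-utils mount helper can also mount through it and enforce encryption in transit with the tls option. A sketch with placeholder IDs; substitute the efs_id and access_point_id outputs from above:

# Mount via the access point with TLS (run on the instance)
sudo mount -t efs -o tls,accesspoint=fsap-0123456789abcdef0 fs-0123456789abcdef0:/ /mnt/efs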
Best Practices
- Security
  - Always enable encryption at rest
  - Use security groups to control access
  - Implement proper IAM policies (see the example file system policy after this list)
  - Use access points for application-specific entry points
- Performance
  - Use General Purpose performance mode for most workloads
  - Consider Max I/O mode for highly parallel workloads that can tolerate higher latency
  - Place mount targets in each AZ for high availability
- Cost Optimization
  - Enable lifecycle management
  - Use appropriate throughput modes
  - Monitor storage usage
- Backup
  - Enable automatic backups
  - Set appropriate backup retention periods
  - Test backup restoration procedures
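As referenced in the security bullet above, here is a minimal sketch of a file system policy that denies unencrypted (non-TLS) access, using the aws_efs_file_system_policy resource; tighten the statement to match your own IAM requirements:

resource "aws_efs_file_system_policy" "main" {
  file_system_id = aws_efs_file_system.main.id

  # Deny any client that is not using TLS (encryption in transit)
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyUnencryptedTransport"
      Effect    = "Deny"
      Principal = { AWS = "*" }
      Action    = "*"
      Resource  = aws_efs_file_system.main.arn
      Condition = {
        Bool = { "aws:SecureTransport" = "false" }
      }
    }]
  })
}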
Lifecycle Management
resource "aws_efs_file_system" "main" {
# ... other configuration ...
lifecycle_policy {
transition_to_ia = "AFTER_30_DAYS"
}
lifecycle_policy {
transition_to_primary_storage_class = "AFTER_1_ACCESS"
}
}
Monitoring Configuration
resource "aws_cloudwatch_metric_alarm" "efs_burst_credit_balance" {
alarm_name = "${var.project_name}-efs-burst-credits"
comparison_operator = "LessThanThreshold"
evaluation_periods = "1"
metric_name = "BurstCreditBalance"
namespace = "AWS/EFS"
period = "300"
statistic = "Average"
threshold = "1000000000000"
alarm_description = "EFS Burst Credit Balance is too low"
alarm_actions = [var.sns_topic_arn]
dimensions = {
FileSystemId = aws_efs_file_system.main.id
}
}
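Note that the alarm references var.sns_topic_arn, which is not part of the variables.tf shown earlier; declare it (or hard-code your topic ARN) before applying:

variable "sns_topic_arn" {
  description = "ARN of the SNS topic notified when the alarm fires"
  type        = string
}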
Deployment Steps
1. Initialize Terraform:
   terraform init
2. Plan the deployment:
   terraform plan
3. Apply the configuration:
   terraform apply
Clean Up
Remove all resources when done:
terraform destroy
Common Use Cases
- Shared File Storage for Container Workloads
resource "aws_efs_access_point" "containers" {
file_system_id = aws_efs_file_system.main.id
root_directory {
path = "/container-data"
creation_info {
owner_gid = 0
owner_uid = 0
permissions = "755"
}
}
}
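This access point can be wired directly into container orchestration. A minimal sketch of the volume block in a hypothetical aws_ecs_task_definition (the other required task definition attributes are omitted):

resource "aws_ecs_task_definition" "example" {
  # ... family, container_definitions, and other required attributes ...

  volume {
    name = "efs-data"

    efs_volume_configuration {
      file_system_id     = aws_efs_file_system.main.id
      transit_encryption = "ENABLED" # required when using an access point

      authorization_config {
        access_point_id = aws_efs_access_point.containers.id
        iam             = "ENABLED"
      }
    }
  }
}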
- WordPress File Storage (uid/gid 33 corresponds to the www-data user on Debian-based systems)
resource "aws_efs_access_point" "wordpress" {
file_system_id = aws_efs_file_system.main.id
root_directory {
path = "/wordpress"
creation_info {
owner_gid = 33
owner_uid = 33
permissions = "755"
}
}
}
Conclusion
This setup provides a solid foundation for deploying EFS using Terraform. Remember to:
- Consider your performance requirements
- Implement proper security measures
- Monitor usage and costs
- Test backup restoration regularly
- Version control your Terraform configurations
The complete code can be customized based on your specific requirements and use cases.