Automating AWS S3 Bucket Creation with Terraform
Learn how to create and configure S3 buckets using Terraform, including versioning, encryption, and access policies
Amazon S3 (Simple Storage Service) is a highly scalable object storage service. This guide shows you how to automate S3 bucket creation and configuration using Terraform, including best practices for security and performance.
Prerequisites
- AWS CLI installed and configured
- Terraform installed (version 1.0.0 or later)
- Basic understanding of AWS S3 concepts
- Text editor of your choice
Project Structure
s3-terraform/
├── main.tf
├── variables.tf
├── outputs.tf
└── terraform.tfvars
Setting Up the Provider and Resources
Create main.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

# S3 Bucket
resource "aws_s3_bucket" "main" {
  bucket = var.bucket_name

  tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}

# Bucket Versioning
resource "aws_s3_bucket_versioning" "main" {
  bucket = aws_s3_bucket.main.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Server-side encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "main" {
  bucket = aws_s3_bucket.main.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# Public access block
resource "aws_s3_bucket_public_access_block" "main" {
  bucket = aws_s3_bucket.main.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
# Bucket lifecycle rule
resource "aws_s3_bucket_lifecycle_configuration" "main" {
  bucket = aws_s3_bucket.main.id

  rule {
    id     = "transition-to-ia"
    status = "Enabled"

    # Empty filter applies the rule to every object in the bucket
    filter {}

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    expiration {
      days = 90
    }
  }
}
# Bucket policy to enforce HTTPS
resource "aws_s3_bucket_policy" "main" {
  bucket = aws_s3_bucket.main.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "EnforceHTTPS"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.main.arn,
          "${aws_s3_bucket.main.arn}/*"
        ]
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      }
    ]
  })
}

# CORS Configuration (if needed)
resource "aws_s3_bucket_cors_configuration" "main" {
  count  = var.enable_cors ? 1 : 0
  bucket = aws_s3_bucket.main.id

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET", "PUT", "POST"]
    allowed_origins = var.cors_allowed_origins
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
}
# Bucket notification for Lambda (if needed)
resource "aws_s3_bucket_notification" "bucket_notification" {
  count  = var.enable_lambda_notification ? 1 : 0
  bucket = aws_s3_bucket.main.id

  lambda_function {
    lambda_function_arn = var.lambda_function_arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "uploads/"
    filter_suffix       = ".jpg"
  }
}
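S3 can only invoke the function if the Lambda resource policy allows it. A minimal sketch of that permission, assuming the hypothetical resource name allow_s3 and that var.lambda_function_arn identifies your function:

# Allow S3 to invoke the notification Lambda
resource "aws_lambda_permission" "allow_s3" {
  count         = var.enable_lambda_notification ? 1 : 0
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = var.lambda_function_arn # accepts a function name or ARN
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.main.arn
}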
Variables Configuration
Create variables.tf:
variable "aws_region" {
description = "AWS region"
type = string
default = "us-west-2"
}
variable "bucket_name" {
description = "Name of the S3 bucket"
type = string
}
variable "environment" {
description = "Environment name"
type = string
default = "dev"
}
variable "project_name" {
description = "Name of the project"
type = string
}
variable "enable_cors" {
description = "Enable CORS configuration"
type = bool
default = false
}
variable "cors_allowed_origins" {
description = "List of allowed origins for CORS"
type = list(string)
default = ["*"]
}
variable "enable_lambda_notification" {
description = "Enable Lambda notifications"
type = bool
default = false
}
variable "lambda_function_arn" {
description = "ARN of the Lambda function for notifications"
type = string
default = ""
}
Output Configuration
Create outputs.tf:
output "bucket_id" {
description = "The name of the bucket"
value = aws_s3_bucket.main.id
}
output "bucket_arn" {
description = "The ARN of the bucket"
value = aws_s3_bucket.main.arn
}
output "bucket_domain_name" {
description = "The bucket domain name"
value = aws_s3_bucket.main.bucket_domain_name
}
Best Practices
- Security
  - Enable versioning for data protection
  - Implement server-side encryption
  - Block public access by default
  - Use bucket policies to enforce HTTPS
- Cost Optimization
  - Configure lifecycle rules
  - Use appropriate storage classes
  - Monitor usage patterns
- Performance
  - Enable transfer acceleration if needed (see the sketch below)
  - Configure CORS appropriately
  - Use an appropriate region to reduce latency
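If you do enable transfer acceleration, the provider exposes a dedicated resource for it. A minimal sketch (note that acceleration is not supported on bucket names containing dots):

# Enable S3 Transfer Acceleration on the bucket
resource "aws_s3_bucket_accelerate_configuration" "main" {
  bucket = aws_s3_bucket.main.id
  status = "Enabled" # or "Suspended"
}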
Common S3 Configurations
1. Static Website Hosting
resource "aws_s3_bucket_website_configuration" "main" {
bucket = aws_s3_bucket.main.id
index_document {
suffix = "index.html"
}
error_document {
key = "error.html"
}
}
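To locate the site after apply, you can output the website endpoint; a sketch, using the hypothetical output name website_endpoint. Keep in mind that serving a public site also requires loosening the public access block configured earlier:

output "website_endpoint" {
  description = "S3 static website endpoint"
  value       = aws_s3_bucket_website_configuration.main.website_endpoint
}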
2. Replication Configuration
resource "aws_s3_bucket_replication_configuration" "main" {
bucket = aws_s3_bucket.main.id
role = aws_iam_role.replication.arn
rule {
id = "replica"
status = "Enabled"
destination {
bucket = aws_s3_bucket.replica.arn
}
}
}
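The rule above references aws_iam_role.replication and aws_s3_bucket.replica, which are not defined in this guide. A minimal sketch of the role's trust policy, assuming those names; the role additionally needs a permissions policy covering s3:GetReplicationConfiguration and s3:ListBucket on the source bucket and s3:ReplicateObject, s3:ReplicateDelete, and s3:ReplicateTags on the destination:

resource "aws_iam_role" "replication" {
  name = "s3-replication-role" # hypothetical name

  # Allow the S3 service to assume this role for replication
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Service = "s3.amazonaws.com"
        }
        Action = "sts:AssumeRole"
      }
    ]
  })
}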
3. Logging Configuration
resource "aws_s3_bucket_logging" "main" {
bucket = aws_s3_bucket.main.id
target_bucket = aws_s3_bucket.logs.id
target_prefix = "log/"
}
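The target bucket aws_s3_bucket.logs is assumed to exist already; a minimal sketch, using a hypothetical naming convention:

resource "aws_s3_bucket" "logs" {
  bucket = "${var.bucket_name}-logs" # hypothetical name; must also be globally unique
}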
Deployment Steps
- Initialize Terraform:
terraform init
- Create terraform.tfvars:
aws_region   = "us-west-2"
bucket_name  = "my-unique-bucket-name"
project_name = "my-project"
environment  = "dev"
- Review the plan:
terraform plan
- Apply the configuration:
terraform apply
Security Considerations
- Access Control
  - Use IAM roles and policies
  - Implement least privilege access
  - Regular security audits
- Encryption
  - Enable server-side encryption
  - Consider KMS for sensitive data (see the sketch below)
  - Enforce encryption in transit
- Monitoring
  - Enable access logging
  - Set up CloudWatch alerts
  - Regular compliance checks
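If you opt for KMS, swap the AES256 configuration shown earlier for a customer-managed key. A minimal sketch; this replaces the existing aws_s3_bucket_server_side_encryption_configuration resource rather than sitting alongside it:

resource "aws_kms_key" "s3" {
  description         = "KMS key for S3 bucket encryption"
  enable_key_rotation = true
}

resource "aws_s3_bucket_server_side_encryption_configuration" "main" {
  bucket = aws_s3_bucket.main.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.s3.arn
    }
  }
}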
Cleanup
Remove the S3 bucket and associated resources:
terraform destroy
Note: A bucket must be empty before it can be destroyed; with versioning enabled, that includes every object version and delete marker. Alternatively, use force_destroy, as sketched below.
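If you prefer Terraform to empty the bucket for you, the force_destroy argument does exactly that. A sketch of the bucket resource with it enabled; use it with care, since it deletes data irreversibly:

resource "aws_s3_bucket" "main" {
  bucket        = var.bucket_name
  force_destroy = true # delete all objects, including versions, on destroy

  tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}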
Troubleshooting
- Bucket Creation Issues
  - Check name uniqueness (see the validation sketch after this list)
  - Verify permissions
  - Review region restrictions
- Policy Conflicts
  - Check policy syntax
  - Verify principal formatting
  - Review condition operators
- Lifecycle Rule Issues
  - Validate transition periods
  - Check storage class compatibility
  - Verify rule priorities
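Many naming errors can be caught at plan time with a validation block on the bucket_name variable; a sketch that would replace the earlier declaration (global uniqueness can still only be verified at apply time):

variable "bucket_name" {
  description = "Name of the S3 bucket"
  type        = string

  validation {
    # S3 naming rules: 3-63 characters, lowercase letters, numbers, dots, hyphens
    condition     = can(regex("^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$", var.bucket_name))
    error_message = "Bucket names must be 3-63 characters: lowercase letters, numbers, dots, and hyphens."
  }
}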
Conclusion
You’ve learned how to create and manage S3 buckets using Terraform. This approach ensures:
- Consistent bucket configurations
- Secure access controls
- Cost-effective storage management
- Automated deployment
Remember to:
- Follow security best practices
- Implement proper monitoring
- Optimize for cost and performance
- Maintain proper documentation