Setting up AWS SQS with Terraform
A comprehensive guide to deploying Amazon Simple Queue Service (SQS) using Terraform Infrastructure as Code
Amazon Simple Queue Service (SQS) is a fully managed message queuing service. This guide shows how to set up SQS using Terraform.
Prerequisites
- AWS CLI configured
- Terraform installed
- Basic understanding of message queues
- Producer and consumer applications ready
Project Structure
aws-sqs-terraform/
├── main.tf
├── variables.tf
├── outputs.tf
└── terraform.tfvars
Basic SQS Configuration
# main.tf
provider "aws" {
  region = var.aws_region
}
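# Optionally pin the AWS provider version; the "~> 5.0" constraint below is
# only an example and should be adjusted to the version you test against.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}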
# Standard Queue
resource "aws_sqs_queue" "standard" {
  name                       = "${var.project_name}-queue"
  visibility_timeout_seconds = 30
  message_retention_seconds  = 345600 # 4 days
  max_message_size           = 262144 # 256 KB
  delay_seconds              = 0
  receive_wait_time_seconds  = 0

  tags = {
    Environment = var.environment
  }
}

# Queue Policy
resource "aws_sqs_queue_policy" "standard" {
  queue_url = aws_sqs_queue.standard.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          AWS = "*"
        }
        Action   = "sqs:*"
        Resource = aws_sqs_queue.standard.arn
        Condition = {
          ArnEquals = {
            "aws:SourceArn" = var.source_arn
          }
        }
      }
    ]
  })
}
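The project structure above also lists an outputs.tf. A minimal sketch that exposes the standard queue's URL and ARN (for SQS, the resource's id attribute is the queue URL) might look like this:
# outputs.tf
output "standard_queue_url" {
  description = "URL of the standard queue"
  value       = aws_sqs_queue.standard.id
}

output "standard_queue_arn" {
  description = "ARN of the standard queue"
  value       = aws_sqs_queue.standard.arn
}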
FIFO and Dead Letter Queue Configuration
# FIFO Queue
resource "aws_sqs_queue" "fifo" {
  name                        = "${var.project_name}-queue.fifo"
  fifo_queue                  = true
  content_based_deduplication = true
  visibility_timeout_seconds  = 30
  message_retention_seconds   = 345600
  max_message_size            = 262144
  delay_seconds               = 0
  receive_wait_time_seconds   = 0

  tags = {
    Environment = var.environment
  }
}

# Dead Letter Queue
resource "aws_sqs_queue" "dlq" {
  name = "${var.project_name}-dlq"

  tags = {
    Environment = var.environment
  }
}

# Main Queue with DLQ
resource "aws_sqs_queue" "with_dlq" {
  name = "${var.project_name}-with-dlq"

  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.dlq.arn
    maxReceiveCount     = 3
  })

  tags = {
    Environment = var.environment
  }
}
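Note that a FIFO queue can only use another FIFO queue as its dead letter queue. A minimal sketch of that pairing, with illustrative resource names:
# FIFO Dead Letter Queue (a FIFO queue requires a FIFO DLQ)
resource "aws_sqs_queue" "fifo_dlq" {
  name       = "${var.project_name}-dlq.fifo"
  fifo_queue = true
}

resource "aws_sqs_queue" "fifo_with_dlq" {
  name       = "${var.project_name}-with-dlq.fifo"
  fifo_queue = true

  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.fifo_dlq.arn
    maxReceiveCount     = 3
  })
}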
Cross-Account Access
# Queue Policy for Cross-Account Access
resource "aws_sqs_queue_policy" "cross_account" {
  queue_url = aws_sqs_queue.standard.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          AWS = var.allowed_account_arns
        }
        Action = [
          "sqs:SendMessage",
          "sqs:ReceiveMessage"
        ]
        Resource = aws_sqs_queue.standard.arn
      }
    ]
  })
}
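Because a queue holds a single resource policy, in practice you would merge this statement with the earlier queue policy rather than attach two aws_sqs_queue_policy resources to the same queue. The queue policy also only covers the resource side: principals in the other account still need an IAM identity policy that allows the same actions. A minimal sketch, applied in the consuming account; var.queue_arn and var.consumer_role_name are hypothetical variables introduced here for illustration:
# IAM policy in the consuming account (queue_arn and role name are assumptions)
data "aws_iam_policy_document" "sqs_consumer" {
  statement {
    effect = "Allow"
    actions = [
      "sqs:SendMessage",
      "sqs:ReceiveMessage",
      "sqs:DeleteMessage",
      "sqs:GetQueueAttributes",
    ]
    resources = [var.queue_arn]
  }
}

resource "aws_iam_role_policy" "sqs_consumer" {
  name   = "sqs-consumer-access"
  role   = var.consumer_role_name
  policy = data.aws_iam_policy_document.sqs_consumer.json
}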
Variables Configuration
# variables.tf
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-west-2"
}

variable "project_name" {
  description = "Project name"
  type        = string
}

variable "environment" {
  description = "Environment name"
  type        = string
  default     = "dev"
}

variable "source_arn" {
  description = "ARN of the source service"
  type        = string
}

variable "allowed_account_arns" {
  description = "List of AWS account ARNs allowed to access the queue"
  type        = list(string)
  default     = []
}

# Variables referenced by the monitoring and use-case examples below
variable "sns_topic_arn" {
  description = "SNS topic ARN for CloudWatch alarm notifications"
  type        = string
}

variable "lambda_function_arn" {
  description = "ARN of the Lambda function that consumes the processing queue"
  type        = string
}

variable "subscriber_names" {
  description = "Names of the subscriber queues for the fan-out pattern"
  type        = list(string)
  default     = []
}
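A matching terraform.tfvars could look like the following; the values are placeholders to replace with your own project name, region, and source ARN:
# terraform.tfvars (example values only)
aws_region   = "us-west-2"
project_name = "orders"
environment  = "dev"
source_arn   = "arn:aws:sns:us-west-2:123456789012:orders-topic"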
Best Practices
- Queue Management
  - Use meaningful queue names
  - Implement proper access controls
  - Configure appropriate timeout values
  - Use FIFO queues when message order matters
- Security
  - Implement least privilege access
  - Use encryption for sensitive data
  - Regularly audit queue policies
  - Monitor failed message deliveries
- Reliability
  - Implement proper error handling
  - Use DLQ for failed messages
  - Monitor queue depth
  - Implement proper retry policies
- Cost Optimization
  - Monitor message volume
  - Clean up unused queues
  - Use long polling when appropriate
  - Consider message batching
Server-Side Encryption
# KMS Key for Queue Encryption
resource "aws_kms_key" "queue" {
  description             = "KMS key for SQS queue encryption"
  deletion_window_in_days = 7
  enable_key_rotation     = true

  tags = {
    Environment = var.environment
  }
}

# Encrypted Queue
resource "aws_sqs_queue" "encrypted" {
  name                              = "${var.project_name}-encrypted"
  kms_master_key_id                 = aws_kms_key.queue.key_id
  kms_data_key_reuse_period_seconds = 300

  tags = {
    Environment = var.environment
  }
}
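Encryption at rest can be complemented by refusing unencrypted connections. A minimal sketch of a queue policy that denies any request not made over TLS; if the queue needs other policy statements, this one would be merged into the same policy document:
# Deny any request to the encrypted queue that is not made over TLS
resource "aws_sqs_queue_policy" "encrypted" {
  queue_url = aws_sqs_queue.encrypted.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyInsecureTransport"
        Effect    = "Deny"
        Principal = "*"
        Action    = "sqs:*"
        Resource  = aws_sqs_queue.encrypted.arn
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      }
    ]
  })
}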
Monitoring Configuration
# CloudWatch Alarms
resource "aws_cloudwatch_metric_alarm" "queue_depth" {
  alarm_name          = "${var.project_name}-queue-depth"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "ApproximateNumberOfMessagesVisible"
  namespace           = "AWS/SQS"
  period              = 300
  statistic           = "Average"
  threshold           = 1000
  alarm_description   = "This metric monitors queue depth"
  alarm_actions       = [var.sns_topic_arn]

  dimensions = {
    QueueName = aws_sqs_queue.standard.name
  }
}

resource "aws_cloudwatch_metric_alarm" "dlq_messages" {
  alarm_name          = "${var.project_name}-dlq-messages"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "ApproximateNumberOfMessagesVisible"
  namespace           = "AWS/SQS"
  period              = 300
  statistic           = "Sum"
  threshold           = 0
  alarm_description   = "This metric monitors DLQ messages"
  alarm_actions       = [var.sns_topic_arn]

  dimensions = {
    QueueName = aws_sqs_queue.dlq.name
  }
}
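Queue depth alone can hide a stalled consumer, so it also helps to alarm on how long the oldest message has been waiting. A sketch using the ApproximateAgeOfOldestMessage metric; the 15-minute threshold is an assumption to tune for your workload:
# Alarm when the oldest message has been waiting longer than 15 minutes
resource "aws_cloudwatch_metric_alarm" "message_age" {
  alarm_name          = "${var.project_name}-message-age"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "ApproximateAgeOfOldestMessage"
  namespace           = "AWS/SQS"
  period              = 300
  statistic           = "Maximum"
  threshold           = 900 # seconds; adjust to your processing SLA
  alarm_description   = "Oldest message in the queue is older than expected"
  alarm_actions       = [var.sns_topic_arn]

  dimensions = {
    QueueName = aws_sqs_queue.standard.name
  }
}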
Deployment Steps
- Initialize Terraform:
terraform init
- Plan the deployment:
terraform plan
- Apply the configuration:
terraform apply
Clean Up
Remove all resources when done:
terraform destroy
Common Use Cases
- Message Processing Pipeline
resource "aws_sqs_queue" "processing" {
name = "${var.project_name}-processing"
visibility_timeout_seconds = 60
receive_wait_time_seconds = 20 # Enable long polling
redrive_policy = jsonencode({
deadLetterTargetArn = aws_sqs_queue.dlq.arn
maxReceiveCount = 3
})
}
resource "aws_lambda_event_source_mapping" "processing" {
event_source_arn = aws_sqs_queue.processing.arn
function_name = var.lambda_function_arn
batch_size = 10
}
- Fan-Out Pattern
resource "aws_sns_topic" "main" {
name = "${var.project_name}-topic"
}
resource "aws_sqs_queue" "subscribers" {
count = length(var.subscriber_names)
name = "${var.project_name}-${var.subscriber_names[count.index]}"
}
resource "aws_sns_topic_subscription" "subscribers" {
count = length(aws_sqs_queue.subscribers)
topic_arn = aws_sns_topic.main.arn
protocol = "sqs"
endpoint = aws_sqs_queue.subscribers[count.index].arn
}
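For the fan-out pattern to actually deliver messages, each subscriber queue also needs a queue policy that allows the SNS topic to send to it. A minimal sketch, assuming the topic and queues defined above:
# Allow the SNS topic to deliver messages to each subscriber queue
resource "aws_sqs_queue_policy" "subscribers" {
  count     = length(aws_sqs_queue.subscribers)
  queue_url = aws_sqs_queue.subscribers[count.index].id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = { Service = "sns.amazonaws.com" }
        Action    = "sqs:SendMessage"
        Resource  = aws_sqs_queue.subscribers[count.index].arn
        Condition = {
          ArnEquals = {
            "aws:SourceArn" = aws_sns_topic.main.arn
          }
        }
      }
    ]
  })
}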
Conclusion
This setup provides a comprehensive foundation for deploying SQS using Terraform. Remember to:
- Plan your messaging architecture carefully
- Implement proper security measures
- Monitor queue metrics and DLQ
- Keep your configurations versioned
- Test thoroughly before production deployment
The complete code can be customized based on your specific requirements and use cases.