
Terraform

HashiCorp's Infrastructure as Code tool that enables defining, provisioning, and managing multi-cloud infrastructure through declarative HCL files.

#evergreen #iac #terraform #cloud #devops

Terraform is the most widely adopted IaC tool on the market. It uses a declarative language (HCL) to define infrastructure and a state model to track which resources exist and how to update them.

How it works

Write (.tf files) → Plan (preview changes) → Apply (execute changes)

Lifecycle

terraform init      # Download providers and modules
terraform validate  # Verify syntax
terraform plan      # See what will change
terraform apply     # Apply changes
terraform destroy   # Delete all infrastructure

Fundamental concepts

Providers

Plugins that connect Terraform to service APIs:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
 
provider "aws" {
  region = "us-east-1"
}

Over 4,000 providers: AWS, Azure, GCP, Kubernetes, GitHub, Datadog, Cloudflare, etc.
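A configuration can also hold several instances of the same provider via aliases, e.g. for multi-region deployments. A minimal sketch (the bucket name is hypothetical):

```hcl
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

# A resource selects a non-default provider instance explicitly:
resource "aws_s3_bucket" "replica" {
  provider = aws.west
  bucket   = "my-app-data-replica" # hypothetical name
}
```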

Resources

The basic unit of configuration: each resource block describes one infrastructure object:

resource "aws_s3_bucket" "data" {
  bucket = "my-app-data-${var.environment}"
 
  tags = {
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}
 
resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id
  versioning_configuration {
    status = "Enabled"
  }
}
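Because the versioning resource references aws_s3_bucket.data.id, Terraform infers the creation order from the dependency graph automatically. For orderings that no attribute expresses, there is depends_on; a hypothetical sketch (names, ARN, and file path invented for illustration):

```hcl
resource "aws_iam_role_policy" "writer" {
  name   = "data-writer"                # hypothetical
  role   = "app-role"                   # hypothetical role name
  policy = file("policies/writer.json") # hypothetical path
}

resource "aws_lambda_function" "ingest" {
  function_name = "ingest"
  role          = "arn:aws:iam::123456789012:role/app-role" # hypothetical
  runtime       = "nodejs20.x"
  handler       = "index.handler"
  filename      = "ingest.zip"

  # No attribute here references the policy, so declare the ordering:
  depends_on = [aws_iam_role_policy.writer]
}
```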

Variables

# variables.tf
variable "environment" {
  description = "Deployment environment"
  type        = string
  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Must be dev, staging, or prod."
  }
}
 
variable "instance_count" {
  description = "Number of instances"
  type        = number
  default     = 2
}
# terraform.tfvars
environment    = "prod"
instance_count = 3
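Beyond string and number, variables support structured types such as list and map. A sketch with hypothetical names and values:

```hcl
variable "instance_types" {
  description = "Instance type per environment"
  type        = map(string)
  default = {
    dev  = "t3.micro"
    prod = "t3.large"
  }
}

variable "allowed_cidrs" {
  description = "CIDR blocks allowed to reach the API"
  type        = list(string)
  default     = ["10.0.0.0/16"]
}

# Looked up elsewhere in the configuration, e.g.:
# instance_type = var.instance_types[var.environment]
```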

Outputs

output "bucket_arn" {
  description = "ARN of the S3 bucket"
  value       = aws_s3_bucket.data.arn
}
 
output "api_url" {
  description = "API Gateway URL"
  value       = aws_apigatewayv2_api.main.api_endpoint
  sensitive   = false
}
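Outputs are also how one configuration exposes values to another: the terraform_remote_state data source reads the outputs of a different state file. A sketch assuming a hypothetical networking state with a private_subnet_id output:

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "company-terraform-state" # hypothetical
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# Consumed via the outputs attribute, e.g.:
# subnet_id = data.terraform_remote_state.network.outputs.private_subnet_id
```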

Data sources

Read attributes of resources that exist outside Terraform's management:

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]
 
  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}
 
resource "aws_instance" "web" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
}

State

The state file is the source of truth about which resources exist:

# Remote backend (mandatory for teams)
terraform {
  backend "s3" {
    bucket         = "company-terraform-state"
    key            = "services/api/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"    # Locking
    encrypt        = true                  # Encryption at rest
  }
}

State commands

terraform state list                    # List resources
terraform state show aws_instance.web   # Resource detail
terraform state mv aws_instance.old aws_instance.new  # Rename
terraform state rm aws_instance.legacy  # Stop managing
terraform import aws_instance.web i-1234567890  # Import existing
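Since Terraform 1.5, imports can also be declared in configuration with an import block, so the adoption of an existing resource appears in the plan instead of being a one-off CLI action. A sketch (instance ID and AMI are hypothetical):

```hcl
import {
  to = aws_instance.web
  id = "i-1234567890abcdef0" # hypothetical instance ID
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # hypothetical
  instance_type = "t3.micro"
}
```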

Modules

Reusable infrastructure packages:

# Custom module
module "api" {
  source = "./modules/lambda-api"
 
  function_name = "my-api"
  runtime       = "nodejs20.x"
  memory_size   = 256
  environment   = var.environment
}
 
# Registry module
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.0"
 
  name = "production"
  cidr = "10.0.0.0/16"
}
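A module's outputs are referenced as module.NAME.OUTPUT. For example, wiring the registry VPC module above into other resources (vpc_id and private_subnets are outputs documented by that module):

```hcl
resource "aws_security_group" "api" {
  name   = "api"
  vpc_id = module.vpc.vpc_id # output exposed by the vpc module
}

output "private_subnets" {
  value = module.vpc.private_subnets
}
```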

Module structure

modules/lambda-api/
├── main.tf          # Resources
├── variables.tf     # Inputs
├── outputs.tf       # Outputs
├── versions.tf      # Provider requirements
└── README.md        # Documentation

Advanced patterns

Workspaces

Multiple environments with the same code:

terraform workspace new staging
terraform workspace new production
terraform workspace select staging
terraform apply -var-file="staging.tfvars"
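Inside the configuration, the current workspace name is available as terraform.workspace, which lets the same code vary per environment. A sketch with a hypothetical AMI:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # hypothetical
  instance_type = terraform.workspace == "production" ? "t3.large" : "t3.micro"

  tags = {
    Workspace = terraform.workspace
  }
}
```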

for_each and count

# Create multiple resources
resource "aws_iam_user" "team" {
  for_each = toset(["alice", "bob", "carol"])
  name     = each.key
}
 
# Conditional
resource "aws_cloudwatch_log_group" "api" {
  count = var.enable_logging ? 1 : 0
  name  = "/api/${var.environment}"
}
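Instances created with for_each are addressed by key, and count-based resources by index. A sketch referencing the resources above:

```hcl
output "alice_arn" {
  value = aws_iam_user.team["alice"].arn
}

output "all_user_arns" {
  value = [for u in aws_iam_user.team : u.arn]
}

output "log_group_name" {
  # count produces a list; one() handles both the 0- and 1-instance cases
  value = one(aws_cloudwatch_log_group.api[*].name)
}
```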

Moved blocks (safe refactoring)

moved {
  from = aws_instance.server
  to   = aws_instance.web
}

Terraform vs alternatives

| Aspect      | Terraform            | Pulumi         | AWS CDK       | CloudFormation |
|-------------|----------------------|----------------|---------------|----------------|
| Language    | HCL                  | TS, Python, Go | TS, Python    | YAML/JSON      |
| Multi-cloud | Yes                  | Yes            | No (AWS only) | No (AWS only)  |
| State       | Self-managed / Cloud | Managed / self | AWS-managed   | AWS-managed    |
| Ecosystem   | 4,000+ providers     | 100+ providers | AWS only      | AWS only       |
| License     | BSL 1.1              | Apache 2.0     | Apache 2.0    | Proprietary    |
| OSS fork    | OpenTofu             | n/a            | n/a           | n/a            |

Security

# Never hardcode secrets
variable "db_password" {
  type      = string
  sensitive = true   # Won't appear in logs or plan
}
 
# Use data sources for secrets
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "prod/db/password"
}
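The fetched secret is then consumed via secret_string; Terraform treats values from this data source as sensitive in output. A sketch assuming the secret stores the raw password (all other attribute values hypothetical):

```hcl
resource "aws_db_instance" "main" {
  identifier        = "app-db" # hypothetical
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "app"
  password          = data.aws_secretsmanager_secret_version.db.secret_string

  # Note: the value still lands in the state file, so the backend
  # must be encrypted and access-controlled.
}
```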

Anti-patterns

  • Local state — sharing state via Slack/email. Use remote backend with locking.
  • Mega-monolith — all infra in a single main.tf. Split by domain.
  • Unpinned versions — providers and modules without fixed versions break builds.
  • Apply without plan — always review the plan before applying.
  • Secrets in .tf — use sensitive variables, secret managers, or Vault.
  • Ignoring terraform fmt — consistent formatting makes code review easier.
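The pinning from the "Unpinned versions" bullet typically lives in a versions.tf; a sketch with illustrative version constraints:

```hcl
terraform {
  required_version = ">= 1.6.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # any 5.x, never 6.x
    }
  }
}
```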

Why it matters

Terraform has become the de facto standard for infrastructure as code. Mastering it is not optional for teams operating in the cloud — it is the difference between reproducible, auditable infrastructure and manual configuration that no one can reconstruct when it fails.
