
DOMAIN:INFRASTRUCTURE:TERRAFORM_PATTERNS

OWNER: arjan
UPDATED: 2026-03-24
SCOPE: all Terraform-managed infrastructure (UpCloud, TransIP DNS, BunnyCDN)
AGENTS: arjan (primary), stef (TransIP DNS provider), karel (BunnyCDN provider), gerco (state backend)


TERRAFORM:OVERVIEW

PURPOSE: Infrastructure as Code for all GE cloud resources
VERSION: Terraform >= 1.5.0
STATE: remote backend, NEVER local (compliance requirement)
PROVIDERS: UpCloud (primary compute), TransIP (DNS), BunnyCDN (edge)
LOCATIONS: de-fra1 (primary), nl-ams1 (DR) — EU only, always

RULE: no manual console changes — Terraform is the only way to provision
RULE: every resource MUST be tagged (client, project, environment, managed_by, created_by, created_at)
RULE: plan approval does NOT equal execution approval — separate gates
RULE: terraform apply requires explicit Dirk-Jan approval for production resources


TERRAFORM:PROVIDER_CONFIG

UPCLOUD_PROVIDER

terraform {
  required_version = ">= 1.5.0"

  required_providers {
    upcloud = {
      source  = "UpCloudLtd/upcloud"
      version = ">= 5.0.0"
    }
  }
}

provider "upcloud" {
  # Credentials from Vault — NEVER hardcode
  # Vault path: admin-ui/api-keys/upcloud
  username = var.upcloud_username
  password = var.upcloud_password
}

CREDENTIAL_FLOW:
1. piotr stores UpCloud API credentials in Vault
2. arjan reads credentials at plan/apply time
3. credentials NEVER appear in Terraform state or logs
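A minimal sketch of the matching variable declarations; marking them sensitive keeps values out of plan output. The flow assumes the values arrive as TF_VAR_* environment variables populated from Vault at run time:

variable "upcloud_username" {
  type      = string
  sensitive = true  # supplied via TF_VAR_upcloud_username from Vault; never committed
}

variable "upcloud_password" {
  type      = string
  sensitive = true  # supplied via TF_VAR_upcloud_password from Vault; never committed
}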

TRANSIP_PROVIDER (Stef's DNS)

terraform {
  required_providers {
    transip = {
      source  = "aequitas/transip"
      version = ">= 0.7.0"
    }
  }
}

provider "transip" {
  # Vault path: admin-ui/api-keys/transip
  account_name     = var.transip_account
  private_key_path = var.transip_key_path
}

BUNNYNET_PROVIDER (Karel's CDN)

terraform {
  required_providers {
    bunnynet = {
      source  = "bunnynet/bunnynet"
      version = ">= 0.4.0"
    }
  }
}

provider "bunnynet" {
  # Vault path: admin-ui/api-keys/bunny
  api_key = var.bunny_api_key
}

TERRAFORM:UPCLOUD_RESOURCES

SERVER (upcloud_server)

USE_WHEN: dedicated compute for client workloads (non-k8s)

resource "upcloud_server" "client_app" {
  hostname = "${var.client}-${var.project}-${var.environment}"
  zone     = "de-fra1"  # EU only — data sovereignty
  plan     = "2xCPU-4GB"

  template {
    storage = "Ubuntu Server 22.04 LTS"
    size    = 50  # GB
  }

  network_interface {
    type = "private"
    network = upcloud_network.client_network.id
  }

  network_interface {
    type = "public"
    # Only if direct public access needed
  }

  labels = {
    client      = var.client
    project     = var.project
    environment = var.environment
    managed_by  = "terraform"
    created_by  = "arjan"
    created_at  = timestamp()  # CAUTION: re-evaluates on every run, causing perpetual diffs
  }
}
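The timestamp() label above is re-evaluated on every plan, so each run proposes a label change. One workaround (a sketch, not the mandated pattern) is to record the value once and ignore later drift:

resource "upcloud_server" "client_app" {
  # ... arguments as above ...

  lifecycle {
    # created_at is set at creation; ignore subsequent timestamp() churn
    ignore_changes = [labels["created_at"]]
  }
}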

MANAGED_KUBERNETES (upcloud_managed_kubernetes_cluster)

USE_WHEN: client needs dedicated k8s cluster (Zone 2 staging, Zone 3 production)

resource "upcloud_managed_kubernetes_cluster" "staging" {
  name                = "${var.client}-${var.project}-staging"
  zone                = "de-fra1"
  network_cidr        = "172.16.0.0/16"
  control_plane_ip_filter = ["0.0.0.0/0"]  # restrict in production
  version             = "1.29"

  labels = {
    client      = var.client
    project     = var.project
    environment = "staging"
    managed_by  = "terraform"
    created_by  = "arjan"
  }
}

resource "upcloud_kubernetes_node_group" "workers" {
  cluster    = upcloud_managed_kubernetes_cluster.staging.id
  name       = "workers"
  node_count = 3
  plan       = "2xCPU-4GB"
  storage_size = 50

  labels = {
    role = "worker"
  }
}

HANDOFF: arjan provisions cluster -> hands kubeconfig to thijmen (Zone 2) or rutger (Zone 3)
RULE: arjan provisions clusters, thijmen/rutger deploy workloads — never cross boundaries
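The kubeconfig handoff can be captured as a sensitive output. This sketch assumes the provider's upcloud_kubernetes_cluster data source exposes a kubeconfig attribute; check the provider docs for the exact names:

data "upcloud_kubernetes_cluster" "staging" {
  id = upcloud_managed_kubernetes_cluster.staging.id
}

output "staging_kubeconfig" {
  value     = data.upcloud_kubernetes_cluster.staging.kubeconfig
  sensitive = true  # hand off out-of-band, never via logs
}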

MANAGED_DATABASE (upcloud_managed_database_postgresql)

USE_WHEN: client PostgreSQL databases (SSOT for client data)

resource "upcloud_managed_database_postgresql" "client_db" {
  name  = "${var.client}-${var.project}-${var.environment}"
  plan  = "1x1xCPU-2GB-25GB"
  zone  = "de-fra1"
  title = "${var.client} ${var.project} PostgreSQL"

  properties {
    version                             = "16"
    admin_username                      = "admin"
    automatic_utility_network_ip_filter = true
    backup_hour                         = 2   # 2 AM CET backup window
    backup_minute                       = 0
    ip_filter                           = []  # restrict to specific IPs
    pg_stat_statements                  = true
    shared_buffers_percentage           = 25
    timezone                            = "Europe/Amsterdam"
    work_mem                            = 4   # MB
  }

  labels = {
    client      = var.client
    project     = var.project
    environment = var.environment
    managed_by  = "terraform"
    created_by  = "arjan"
  }
}

HANDOFF: arjan provisions DB instance -> boris/yoanna create schemas and manage data
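Connection details for that handoff can be exposed as a sensitive output. This sketch assumes the resource exports a service_uri attribute, as UpCloud managed-database resources generally do:

output "client_db_uri" {
  value     = upcloud_managed_database_postgresql.client_db.service_uri
  sensitive = true  # contains credentials; deliver to boris/yoanna via Vault
}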

NETWORK (upcloud_network)

resource "upcloud_network" "client_network" {
  name = "${var.client}-${var.project}-${var.environment}"
  zone = "de-fra1"

  ip_network {
    address            = "10.0.${var.network_octet}.0/24"
    dhcp               = true
    dhcp_default_route = false
    dhcp_dns           = ["94.237.127.9", "94.237.40.9"]  # UpCloud DNS
    family             = "IPv4"
    gateway            = ""
  }

  labels = {
    client      = var.client
    managed_by  = "terraform"
  }
}
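The manual octet interpolation above can also be written with cidrsubnet(), which derives the /24 from a declared supernet and errors if the index overflows. A sketch, assuming a 10.0.0.0/16 supernet:

locals {
  # cidrsubnet("10.0.0.0/16", 8, 42) yields "10.0.42.0/24"
  client_cidr = cidrsubnet("10.0.0.0/16", 8, var.network_octet)
}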

OBJECT_STORAGE (upcloud_object_storage)

USE_WHEN: backup storage, static assets, offsite backup copy

resource "upcloud_object_storage" "backups" {
  name        = "${var.client}-${var.project}-backups"
  size        = 250  # GB
  zone        = "nl-ams1"  # DR zone (different from primary de-fra1)
  access_key  = var.s3_access_key
  secret_key  = var.s3_secret_key
  description = "Backup storage for ${var.client} ${var.project}"
}

FIREWALL_RULES (upcloud_firewall_rules)

resource "upcloud_firewall_rules" "server_firewall" {
  server_id = upcloud_server.client_app.id

  firewall_rule {
    action                 = "accept"
    comment                = "Allow HTTPS from anywhere"
    destination_port_start = "443"
    destination_port_end   = "443"
    direction              = "in"
    family                 = "IPv4"
    protocol               = "tcp"
  }

  firewall_rule {
    action                 = "accept"
    comment                = "Allow SSH from VPN only"
    destination_port_start = "22"
    destination_port_end   = "22"
    direction              = "in"
    family                 = "IPv4"
    protocol               = "tcp"
    source_address_start   = "10.0.0.1"
    source_address_end     = "10.0.0.255"
  }

  firewall_rule {
    action    = "drop"
    comment   = "Default deny all other inbound"
    direction = "in"
    family    = "IPv4"
  }
}

TERRAFORM:STATE_MANAGEMENT

REMOTE_BACKEND

RULE: NEVER use local state — always remote backend
REASON: state contains sensitive data (resource IDs, connection strings), must be locked and versioned

terraform {
  backend "pg" {
    # PostgreSQL backend — GE standard (consistent with SSOT)
    # Backend blocks cannot interpolate variables; supply the connection
    # string via partial configuration at init time:
    #   terraform init -backend-config="conn_str=postgres://terraform:PASSWORD@HOST/terraform_state"
  }
}

ALTERNATIVE: UpCloud Object Storage as S3-compatible backend

terraform {
  backend "s3" {
    bucket   = "ge-terraform-state"
    # Backend blocks cannot interpolate variables; use a static key per
    # root module, e.g. "acme-corp/web-portal/staging/terraform.tfstate"
    key      = "CLIENT/PROJECT/ENVIRONMENT/terraform.tfstate"
    region   = "nl-ams1"
    endpoint = "https://{account}.nl-ams1.upcloudobjects.com"

    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true  # non-AWS region name
    force_path_style            = true
  }
}

STATE_LOCKING

PURPOSE: prevent concurrent terraform apply from corrupting state
METHOD: PostgreSQL advisory locks (pg backend); the S3 backend requires DynamoDB for locking, which S3-compatible object storage typically does not provide
RULE: never force-unlock unless you are certain no other process is running

FORCE_UNLOCK (stale locks only):

TOOL: terraform
RUN: terraform force-unlock {lock-id}
WARNING: only use when the lock is stale (crashed process); verify no other process is running first

TERRAFORM:MODULE_PATTERNS

MODULE_STRUCTURE

ge-ops/master/terraform/modules/
  upcloud-server/
    main.tf
    variables.tf
    outputs.tf
    versions.tf
  upcloud-k8s-cluster/
    main.tf
    variables.tf
    outputs.tf
  upcloud-managed-db/
    main.tf
    variables.tf
    outputs.tf
  upcloud-network/
    main.tf
    variables.tf
    outputs.tf

MODULE_USAGE

module "client_server" {
  source = "../../ge-ops/master/terraform/modules/upcloud-server"

  client      = "acme-corp"
  project     = "web-portal"
  environment = "staging"
  zone        = "de-fra1"
  plan        = "2xCPU-4GB"
  storage_size = 50
}

RULES:
- Modules MUST expose all mandatory tags as variables
- Modules MUST set EU-only location defaults (de-fra1)
- Modules MUST output connection details for handoff
- Modules MUST NOT contain secrets — always variables with Vault lookup
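A variables.tf/outputs.tf sketch consistent with those rules; the resource address and output names are illustrative, not the actual module contract:

# variables.tf: mandatory tags exposed as inputs
variable "client"      { type = string }
variable "project"     { type = string }
variable "environment" { type = string }

variable "zone" {
  type    = string
  default = "de-fra1"  # EU-only default per module rules
}

# outputs.tf: connection details for handoff
output "server_hostname" {
  value = upcloud_server.this.hostname
}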


TERRAFORM:DRIFT_DETECTION

SCHEDULED_DETECTION

TRIGGER: daily 6am recurring task (arjan)
METHOD: terraform plan in check mode

TOOL: terraform
RUN: terraform plan -detailed-exitcode
EXIT_CODE_0: no changes — infrastructure matches state
EXIT_CODE_1: error
EXIT_CODE_2: changes detected — DRIFT

IF_DRIFT_DETECTED:
1. LOG drift to wiki/docs/development/reports/infra-drift/
2. CLASSIFY: accidental (fix immediately) vs intentional (update Terraform to match)
3. IF accidental: terraform apply to correct drift
4. IF intentional manual change: update Terraform code to match reality, then apply
5. ALERT: notify victoria if production drift (potential unauthorized change)

ANTI_PATTERN: ignoring drift — leads to state/reality divergence that causes outages
ANTI_PATTERN: manual fixes in UpCloud console — Terraform will undo on next apply


TERRAFORM:EU_DATA_SOVEREIGNTY

ENFORCEMENT

RULE: ALL resources MUST be in EU locations
ALLOWED_ZONES: de-fra1 (Frankfurt), nl-ams1 (Amsterdam), fi-hel1 (Helsinki), pl-waw1 (Warsaw)
BLOCKED: us-*, sg-*, au-*, uk-lon1 (post-Brexit, depends on adequacy decision)

ENFORCEMENT_IN_CODE:

variable "zone" {
  type        = string
  description = "UpCloud zone — must be EU"
  default     = "de-fra1"

  validation {
    condition     = contains(["de-fra1", "nl-ams1", "fi-hel1", "pl-waw1"], var.zone)
    error_message = "Zone must be EU-only for data sovereignty compliance."
  }
}
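The same validation pattern can enforce the mandatory tag set; a sketch assuming labels are passed to modules as a single map:

variable "labels" {
  type        = map(string)
  description = "Mandatory resource tags"

  validation {
    condition = alltrue([
      for k in ["client", "project", "environment", "managed_by", "created_by"] :
      contains(keys(var.labels), k)
    ])
    error_message = "Labels must include client, project, environment, managed_by and created_by."
  }
}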

AUDIT: amber checks quarterly that all resources are in approved zones
EVIDENCE: terraform state list + zone tags → compliance report


TERRAFORM:CI_CD_WORKFLOW

PLAN_APPLY_PIPELINE

1. Developer/agent creates Terraform code
2. terraform fmt -check (formatting)
3. terraform validate (syntax)
4. terraform plan -out=plan.tfplan
5. Cost estimate reviewed:
   → €0-20/month: auto-approve plan
   → €20-50/month: faye/sytske approval
   → €50-100/month: faye/sytske approval + human notification
   → >€100/month: STOP, human approval required
6. Plan output reviewed by arjan
7. terraform apply plan.tfplan (with explicit approval)
8. Outputs captured, credentials stored in Vault
9. Handoff document written

COST_ESTIMATION

TOOL: terraform
RUN: terraform plan -out=plan.tfplan
RUN: # Parse plan for resource types, cross-reference UpCloud pricing

COST_THRESHOLDS (from arjan's workflow):

| Monthly Cost | Approval Level |
|---|---|
| €0-20 | Auto-approve (arjan) |
| €20-50 | PM approval (faye/sytske) |
| €50-100 | PM + human notification |
| >€100 | Human approval required (Dirk-Jan) |


TERRAFORM:ANTI_PATTERNS

BEFORE_EVERY_TERRAFORM_ACTION:
1. Am I using local state? (NEVER — remote backend only)
2. Am I hardcoding secrets? (NEVER — Vault lookup)
3. Am I provisioning outside EU? (NEVER — data sovereignty)
4. Am I skipping tags? (NEVER — all resources tagged)
5. Am I running apply without plan output? (NEVER — always plan -out first)
6. Am I treating plan approval as apply approval? (NEVER — separate gates)
7. Am I making manual console changes? (NEVER — Terraform only)
8. Am I provisioning agent infrastructure? (That is gerco's domain, not Terraform)


TERRAFORM:CROSS_REFERENCES

KUBERNETES_OPERATIONS: domains/infrastructure/kubernetes-operations.md — k8s cluster management post-provisioning
DEPLOYMENT_STRATEGIES: domains/infrastructure/deployment-strategies.md — infrastructure for blue-green/canary
DNS_MANAGEMENT: domains/networking/dns-management.md — TransIP Terraform provider (stef)
CDN_EDGE: domains/networking/cdn-edge.md — BunnyCDN Terraform provider (karel)
BACKUP: domains/infrastructure/backup-disaster-recovery.md — backup storage provisioning
COMPLIANCE: domains/compliance-frameworks/index.md — ISO 27001 infrastructure controls