Terraform + UpCloud — Patterns¶
OWNER: arjan
ALSO_USED_BY: gerco, thijmen, rutger
LAST_VERIFIED: 2026-03-26
GE_STACK_VERSION: terraform ~> 1.14, upcloud provider ~> 5.0
Overview¶
Terraform patterns used in GE: module structure, state management,
workspace-per-zone strategy, naming conventions, and tagging.
All patterns are optimised for the three-zone architecture.
Module Structure¶
terraform/
├── environments/
│   ├── dev/                     # Zone 1 root module
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   ├── terraform.tfvars
│   │   └── backend.tf
│   ├── staging/                 # Zone 2 root module
│   │   └── ...
│   └── production/              # Zone 3 root module
│       └── ...
├── modules/
│   ├── kubernetes-cluster/      # UKS cluster + node groups
│   ├── networking/              # VPC, subnets, firewalls
│   ├── database/                # Managed databases
│   ├── server/                  # Cloud servers
│   └── load-balancer/           # Managed load balancers
└── shared/
    └── remote-state/            # Backend configuration
CHECK: every root module has main.tf, variables.tf, outputs.tf, backend.tf
CHECK: shared logic lives in modules/, never duplicated across environments
IF: adding a new resource type
THEN: create a module in modules/ first
THEN: call it from environment root modules
ANTI_PATTERN: putting all resources in a single main.tf
FIX: separate into modules by infrastructure concern
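A minimal sketch of the call pattern, where an environment root module wires up a shared module. The module name and input variables here are illustrative, not the actual GE module interface:

```hcl
# environments/dev/main.tf -- root modules only compose shared modules.
# "database" and its inputs are hypothetical examples.
module "database" {
  source = "../../modules/database"

  name = "ge-dev-postgres-primary"
  zone = "nl-ams1"
  plan = var.database_plan
}
```

The same `module` block is repeated in staging/ and production/ with different variable values, so the resource logic itself lives in exactly one place.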
State Management¶
GE uses a remote state backend. State is never stored locally or committed to git.
Note that backend blocks cannot contain variables or expressions, so
${terraform.workspace} is not valid there; the S3 backend namespaces
non-default workspace state automatically under workspace_key_prefix.
terraform {
  backend "s3" {
    bucket               = "ge-terraform-state"
    key                  = "terraform.tfstate"
    workspace_key_prefix = "zones" # state lands under zones/<workspace>/
    region               = "eu-central-1"
    encrypt              = true
  }
}
CHECK: backend block present in every root module
CHECK: state is encrypted at rest
CHECK: state bucket has versioning enabled for rollback
IF: state file is corrupted
THEN: restore from bucket versioning
THEN: never manually edit state files
ANTI_PATTERN: using local backend (terraform.tfstate on disk)
FIX: configure remote backend — local state is a single point of failure
Workspace-Per-Zone Strategy¶
Each GE zone maps to a Terraform workspace.
Variables differ per workspace via terraform.tfvars or workspace-conditional locals.
locals {
  zone_config = {
    dev = {
      server_plan = "DEV-1xCPU-2GB"
      node_count  = 2
      region      = "nl-ams1"
    }
    staging = {
      server_plan = "HICPU-2xCPU-4GB"
      node_count  = 3
      region      = "nl-ams1"
    }
    production = {
      server_plan = "HICPU-4xCPU-8GB"
      node_count  = 5
      region      = "nl-ams1"
    }
  }

  config = local.zone_config[terraform.workspace]
}
IF: applying changes
THEN: verify workspace matches intended zone
RUN: terraform workspace show
ANTI_PATTERN: applying production config in dev workspace
FIX: always check workspace before terraform apply
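The workspace check can also be enforced in code. A sketch using a Terraform 1.5+ `check` block (block name and message are illustrative; note that a failed `check` emits a warning during plan/apply rather than a hard error):

```hcl
# Warns on plan/apply if the current workspace is not a known GE zone.
check "workspace_is_known_zone" {
  assert {
    condition     = contains(["dev", "staging", "production"], terraform.workspace)
    error_message = "Workspace '${terraform.workspace}' is not a known GE zone."
  }
}
```

For a hard failure instead of a warning, the same condition can be placed in a `lifecycle { precondition }` on a critical resource.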
Naming Conventions¶
All UpCloud resources follow the naming pattern ge-<zone>-<component>[-<qualifier>].
Examples:
- ge-dev-k8s-cluster
- ge-staging-postgres-primary
- ge-production-lb-frontend
CHECK: every resource name starts with ge-
CHECK: zone is always included in the name
CHECK: names are lowercase with hyphens (no underscores, no camelCase)
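The convention can be centralised in a local so the prefix is never hand-typed. A sketch (the local name and the load-balancer resource are illustrative):

```hcl
locals {
  # ge-<zone> prefix; each resource appends its component and qualifier.
  name_prefix = "ge-${terraform.workspace}"
}

resource "upcloud_loadbalancer" "frontend" {
  name = "${local.name_prefix}-lb-frontend" # e.g. ge-production-lb-frontend
  # ... config ...
}
```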
Tagging / Labelling¶
All UpCloud resources MUST carry these tags:
resource "upcloud_server" "example" {
  # ... config ...

  labels = {
    "ge.zone"       = terraform.workspace
    "ge.managed-by" = "terraform"
    "ge.component"  = "api"
    "ge.owner"      = "arjan"
    "ge.team"       = "shared"
  }
}
| Tag | Required | Values |
|---|---|---|
| ge.zone | Yes | dev, staging, production |
| ge.managed-by | Yes | terraform |
| ge.component | Yes | api, database, networking, monitoring |
| ge.owner | Yes | Agent name |
| ge.team | Recommended | alfa, bravo, zulu, shared |
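To avoid repeating the required labels on every resource, they can be merged from a shared local. A sketch (the `common_labels` local is illustrative):

```hcl
locals {
  common_labels = {
    "ge.zone"       = terraform.workspace
    "ge.managed-by" = "terraform"
    "ge.owner"      = "arjan"
    "ge.team"       = "shared"
  }
}

resource "upcloud_server" "api" {
  # ... config ...

  # Resource-specific labels extend (and can override) the shared set.
  labels = merge(local.common_labels, {
    "ge.component" = "api"
  })
}
```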
Variable Patterns¶
IF: variable is sensitive (passwords, tokens)
THEN: mark it sensitive = true
THEN: source from Vault, not from .tfvars
IF: variable has a sensible default for dev
THEN: set default but override in staging/production .tfvars
variable "node_count" {
  description = "Number of Kubernetes worker nodes"
  type        = number
  default     = 2 # Dev default
}
CHECK: no default values for sensitive variables
CHECK: all variables have description and type
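A sketch of the sensitive-variable pattern; the Vault mount and path are hypothetical and assume the hashicorp/vault provider is configured:

```hcl
variable "db_password" {
  description = "PostgreSQL admin password, injected from Vault"
  type        = string
  sensitive   = true
  # Deliberately no default -- see the CHECK above.
}

# Sourcing the value from Vault rather than .tfvars (path is illustrative):
data "vault_kv_secret_v2" "db" {
  mount = "secret"
  name  = "ge/${terraform.workspace}/database"
}
```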
Output Patterns¶
Expose only what downstream modules or operators need:
Note that the upcloud_kubernetes_cluster resource does not export connection
details; read them through the matching data source instead.
data "upcloud_kubernetes_cluster" "main" {
  id = upcloud_kubernetes_cluster.main.id
}

output "cluster_endpoint" {
  description = "Kubernetes API server endpoint"
  value       = data.upcloud_kubernetes_cluster.main.host
}

output "kubeconfig" {
  description = "Kubeconfig for cluster access"
  value       = data.upcloud_kubernetes_cluster.main.kubeconfig
  sensitive   = true
}
CHECK: sensitive outputs are marked sensitive = true
CHECK: outputs have descriptive description fields
Lifecycle Rules¶
Prevent accidental destruction of stateful resources:
resource "upcloud_managed_database_postgresql" "main" {
  # ... config ...

  lifecycle {
    prevent_destroy = true
  }
}
CHECK: prevent_destroy = true on all databases
CHECK: prevent_destroy = true on all persistent storage
CHECK: prevent_destroy = true on Kubernetes clusters in staging/production
IF: genuinely need to destroy a protected resource
THEN: remove prevent_destroy in a separate commit with arjan's approval
THEN: apply the destroy
THEN: re-add prevent_destroy immediately
Data Sources¶
IF: referencing resources managed outside Terraform
THEN: use data sources, not hardcoded IDs
ANTI_PATTERN: hardcoding UpCloud zone IDs or UUIDs
FIX: use data sources to look up values dynamically
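A sketch, assuming the provider's upcloud_storage data source for template lookup (the template name pattern is illustrative):

```hcl
# Look up the OS template UUID instead of hardcoding it.
data "upcloud_storage" "ubuntu" {
  type        = "template"
  name_regex  = "^Ubuntu Server 24.04"
  most_recent = true
}

resource "upcloud_server" "api" {
  # ... config ...

  template {
    storage = data.upcloud_storage.ubuntu.id
    size    = 25
  }
}
```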
Moved Blocks¶
IF: refactoring resource addresses (renaming, moving to module)
THEN: use moved blocks to avoid destroy+recreate
CHECK: terraform plan shows no unexpected destroys after refactoring
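A sketch of the moved-block pattern for a rename into a module (both addresses are illustrative):

```hcl
# Records that the server formerly at upcloud_server.api now lives inside
# module.server, so plan shows a move instead of a destroy + recreate.
moved {
  from = upcloud_server.api
  to   = module.server.upcloud_server.api
}
```

Moved blocks can be deleted once every workspace has applied the refactor.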
Cross-References¶
READ_ALSO: wiki/docs/stack/terraform-upcloud/index.md
READ_ALSO: wiki/docs/stack/terraform-upcloud/upcloud-resources.md
READ_ALSO: wiki/docs/stack/terraform-upcloud/pitfalls.md
READ_ALSO: wiki/docs/stack/terraform-upcloud/checklist.md