DOMAIN:DATABASE — COMPLIANCE EVIDENCE GENERATION¶
OWNER: boris, yoanna
UPDATED: 2026-03-24
SCOPE: ISO 27001 and SOC 2 Type II evidence from PostgreSQL operations
COMPLIANCE:OVERVIEW¶
GE targets ISO 27001 and SOC 2 Type II certification.
Boris and Yoanna are responsible for generating database-layer compliance evidence.
Evidence must be automated — manual evidence collection does not scale and is error-prone.
Evidence is generated from actual database operations, not documentation alone.
Auditors want PROOF that controls exist AND work, not just that policies are written.
COMPLIANCE:ISO27001_A8_24_ENCRYPTION_AT_REST¶
CONTROL¶
A.8.24: Cryptography — encryption of data at rest
REQUIREMENT: sensitive data (PII, financial, credentials) must be encrypted when stored
UPCLOUD MANAGED DB¶
EVIDENCE: UpCloud encrypts all managed database storage at rest using AES-256
SOURCE: UpCloud compliance documentation + service description
VERIFY: request encryption-at-rest confirmation from UpCloud support (annually)
AUTOMATE: script that queries UpCloud API for database configuration and logs encryption status
COLUMN-LEVEL ENCRYPTION¶
FOR: highly sensitive fields (SSN, tax ID, payment tokens) — defense in depth beyond disk encryption
IMPLEMENTATION:
-- Using pgcrypto extension
CREATE EXTENSION IF NOT EXISTS pgcrypto;
-- Encrypt on insert
INSERT INTO sensitive_data (encrypted_field)
VALUES (pgp_sym_encrypt('sensitive value', current_setting('app.encryption_key')));
-- Decrypt on read
SELECT pgp_sym_decrypt(encrypted_field, current_setting('app.encryption_key'))
FROM sensitive_data;
NOTE: encryption key stored in Vault, injected via environment variable, set as session parameter
NOTE: column-level encryption prevents data exposure even if backup is stolen
ALERT: encrypted columns cannot be indexed for search — design queries accordingly
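A common workaround for the indexing limitation above is to store a deterministic keyed hash (HMAC) of the plaintext in a companion column (e.g. a hypothetical `ssn_lookup` column with a btree index) and search on that for equality lookups. A minimal sketch of deriving such a lookup key; `demo-hmac-key` and the sample SSN are placeholders, and a real HMAC key would come from Vault like the encryption key:

```shell
# Derive a deterministic HMAC lookup key for an encrypted field so that
# equality searches can hit an index on a companion hash column.
HMAC_KEY="demo-hmac-key"   # placeholder; real key lives in Vault
ssn="123-45-6789"          # illustrative value
lookup_key=$(printf '%s' "$ssn" | openssl dgst -sha256 -hmac "$HMAC_KEY" -r | cut -d' ' -f1)
echo "$lookup_key"         # 64 hex chars; store alongside the pgcrypto-encrypted value
```

Because the hash is deterministic, the application can recompute it at query time for `WHERE ssn_lookup = ...`; range or substring search still isn't possible.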
EVIDENCE GENERATION SCRIPT¶
-- Evidence: list all tables with encrypted columns
-- Run quarterly, save output as compliance evidence
SELECT table_schema, table_name, column_name, data_type
FROM information_schema.columns
WHERE data_type = 'bytea' -- pgp_sym_encrypt stores bytea; review output for non-encrypted bytea columns (false positives)
AND table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_schema, table_name;
#!/bin/bash
# scripts/compliance/evidence-encryption.sh
set -euo pipefail
DATE=$(date +%Y-%m-%d)
OUTPUT="evidence/encryption-at-rest-${DATE}.json"
mkdir -p evidence
psql "$DATABASE_URL" -t -A -c "
SELECT json_agg(row_to_json(t)) FROM (
SELECT table_schema, table_name, column_name, data_type
FROM information_schema.columns
WHERE data_type = 'bytea'
AND table_schema NOT IN ('pg_catalog', 'information_schema')
) t;
" > "$OUTPUT"
echo "Encryption evidence generated: $OUTPUT"
FREQUENCY: quarterly
STORE: evidence directory, version controlled, timestamped
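The "version controlled, timestamped" storage requirement can be sketched as a commit per evidence run. A minimal illustration using a throwaway repo; the repo path, file content, and committer identity are placeholders:

```shell
# Commit a generated evidence file with a timestamped message.
set -eu
REPO=$(mktemp -d)            # stand-in for the real evidence repository
cd "$REPO"
git init -q
mkdir -p evidence
DATE=$(date +%Y-%m-%d)
echo '{"control":"A.8.24","status":"verified"}' > "evidence/encryption-at-rest-${DATE}.json"
git add evidence
git -c user.email=compliance@example.com -c user.name="compliance-bot" \
  commit -q -m "compliance evidence ${DATE}"
git log -1 --format=%s       # prints the timestamped commit message
```

The commit history then doubles as a tamper-evident timeline for auditors.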
COMPLIANCE:ISO27001_A8_13_BACKUP_VERIFICATION¶
CONTROL¶
A.8.13: Information backup — backups must exist, be tested, and be recoverable
REQUIREMENT: regular backups with verified restore capability
BACKUP STRATEGY¶
LAYER_1: UpCloud automated daily backups (managed service, 7-day retention minimum)
LAYER_2: GE-managed WAL archiving for point-in-time recovery (PITR)
LAYER_3: Monthly pg_dump export to separate storage (disaster recovery, provider independence)
RETENTION: 7 days daily, 4 weeks weekly, 12 months monthly (align with client contract)
BACKUP VERIFICATION PROCESS¶
FREQUENCY: monthly (minimum), weekly (recommended)
PROCESS:
1. Restore backup to isolated test database
2. Verify row counts match expected (within backup window)
3. Verify schema integrity (all tables, indexes, constraints present)
4. Run application smoke tests against restored database
5. Log result as compliance evidence
AUTOMATED VERIFICATION SCRIPT¶
#!/bin/bash
# scripts/compliance/verify-backup.sh
set -euo pipefail
DATE=$(date +%Y-%m-%d)
EVIDENCE_FILE="evidence/backup-verification-${DATE}.json"
TEST_DB="backup_verify_${DATE//-/}"
mkdir -p evidence
# Ensure the test database is dropped even if a later step fails
trap 'dropdb --if-exists "$TEST_DB"' EXIT
echo "Starting backup verification for ${DATE}"
# 1. Restore latest backup to test database
createdb "$TEST_DB"
pg_restore --dbname="$TEST_DB" --no-owner latest_backup.dump 2>&1 | tee restore.log
# 2. Verify table counts
TABLES=$(psql "$TEST_DB" -t -A -c "
SELECT json_agg(row_to_json(t)) FROM (
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC
) t;
")
TABLES=${TABLES:-null}   # json_agg yields NULL (empty psql output) when no rows
# 3. Verify schema integrity
INDEXES=$(psql "$TEST_DB" -t -A -c "SELECT count(*) FROM pg_indexes WHERE schemaname = 'public';")
CONSTRAINTS=$(psql "$TEST_DB" -t -A -c "SELECT count(*) FROM information_schema.table_constraints WHERE constraint_schema = 'public';")
# 4. Generate evidence
cat > "$EVIDENCE_FILE" <<EOF
{
"date": "${DATE}",
"backup_source": "upcloud_daily",
"restore_success": true,
"tables": ${TABLES},
"index_count": ${INDEXES},
"constraint_count": ${CONSTRAINTS},
"verified_by": "automated_script",
"verification_time": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
EOF
# 5. Cleanup
dropdb "$TEST_DB"
echo "Backup verification complete. Evidence: $EVIDENCE_FILE"
EVIDENCE OUTPUT¶
{
"date": "2026-03-24",
"backup_source": "upcloud_daily",
"restore_success": true,
"tables": [
{"schemaname": "public", "relname": "orders", "n_live_tup": 125000},
{"schemaname": "public", "relname": "users", "n_live_tup": 5000}
],
"index_count": 47,
"constraint_count": 32,
"verified_by": "automated_script",
"verification_time": "2026-03-24T14:30:00Z"
}
STORE: evidence directory, version controlled, timestamped
ALERT: if restore fails, escalate to Boris/Yoanna + Arjan immediately — backup system may be broken
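Before archiving a run, the evidence file itself can be sanity-checked so a malformed or failed run never lands in the evidence repository. A sketch using jq; the sample file below is illustrative, not real evidence:

```shell
# Validate a backup-verification evidence file before accepting it.
set -eu
cat > /tmp/backup-evidence-sample.json <<'EOF'
{"date":"2026-03-24","restore_success":true,"index_count":47,"constraint_count":32}
EOF
# jq -e exits non-zero if the expression is false/null, failing the check
jq -e '.restore_success == true and .index_count > 0 and .constraint_count > 0' \
  /tmp/backup-evidence-sample.json > /dev/null && echo "evidence OK"
```

Wiring this check into the verification script (or CI) turns a silently-bad evidence file into a hard failure that pages Boris/Yoanna.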
COMPLIANCE:SOC2_CC6_1_ACCESS_CONTROL¶
CONTROL¶
CC6.1: Logical and physical access controls — access restricted to authorized individuals
REQUIREMENT: database access granted on least-privilege basis, all access auditable
DATABASE ROLES¶
ROLE: app_readonly — SELECT only. For reporting, dashboards, analytics.
ROLE: app_readwrite — SELECT, INSERT, UPDATE, DELETE. For application runtime.
ROLE: app_admin — ALL on application schema. For migrations, emergency operations.
ROLE: superuser — PostgreSQL superuser. Infrastructure team ONLY (Arjan). Never application.
-- Create roles
CREATE ROLE app_readonly;
GRANT CONNECT ON DATABASE appdb TO app_readonly;
GRANT USAGE ON SCHEMA public TO app_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO app_readonly;
CREATE ROLE app_readwrite;
GRANT CONNECT ON DATABASE appdb TO app_readwrite;
GRANT USAGE ON SCHEMA public TO app_readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_readwrite;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO app_readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE ON SEQUENCES TO app_readwrite;
-- Application user inherits from appropriate role
CREATE USER app_user LOGIN PASSWORD '...' IN ROLE app_readwrite;
ACCESS AUDIT EVIDENCE¶
#!/bin/bash
# scripts/compliance/evidence-access-control.sh
set -euo pipefail
DATE=$(date +%Y-%m-%d)
OUTPUT="evidence/access-control-${DATE}.json"
mkdir -p evidence
psql "$DATABASE_URL" -t -A -c "
SELECT json_agg(row_to_json(t)) FROM (
SELECT
r.rolname AS role_name,
r.rolsuper AS is_superuser,
r.rolcreatedb AS can_create_db,
r.rolcreaterole AS can_create_role,
r.rolcanlogin AS can_login,
r.rolconnlimit AS connection_limit,
ARRAY(
SELECT b.rolname FROM pg_auth_members m
JOIN pg_roles b ON m.roleid = b.oid
WHERE m.member = r.oid
) AS member_of
FROM pg_roles r
WHERE r.rolname NOT LIKE 'pg_%'
ORDER BY r.rolname
) t;
" > "$OUTPUT"
echo "Access control evidence generated: $OUTPUT"
FREQUENCY: monthly
REVIEW: Boris/Yoanna verify no unauthorized role changes
ALERT: any new superuser role = immediate investigation
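The "new superuser" check can be automated by diffing the current access-control evidence against the previous month's file. A sketch with jq; the two sample files and the `rogue_admin` role are illustrative:

```shell
# Flag superuser roles present in the current evidence but not the previous one.
set -eu
cat > /tmp/roles-prev.json <<'EOF'
[{"role_name":"app_readwrite","is_superuser":false}]
EOF
cat > /tmp/roles-curr.json <<'EOF'
[{"role_name":"app_readwrite","is_superuser":false},
 {"role_name":"rogue_admin","is_superuser":true}]
EOF
NEW_SUPERUSERS=$(jq -r --slurpfile prev /tmp/roles-prev.json \
  '.[] | select(.is_superuser)
       | select([.role_name] | inside($prev[0] | map(.role_name)) | not)
       | .role_name' \
  /tmp/roles-curr.json)
echo "New superusers: ${NEW_SUPERUSERS:-none}"
```

A non-empty result should trigger the immediate-investigation alert above rather than waiting for the monthly review.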
CONNECTION LOGGING¶
ENABLE in postgresql.conf (or UpCloud dashboard):
log_connections = on
log_disconnections = on
EVIDENCE: export connection logs monthly as compliance evidence
NOTE: UpCloud managed DB may provide connection logs through their dashboard — verify with Arjan
COMPLIANCE:SOC2_CC8_1_CHANGE_MANAGEMENT¶
CONTROL¶
CC8.1: Change management — changes to infrastructure and software are controlled
REQUIREMENT: database schema changes tracked, reviewed, approved, and auditable
MIGRATION CHANGE TRAIL¶
EVIDENCE_SOURCE_1: Git history of drizzle/migrations/ directory — who authored, who reviewed, when merged
EVIDENCE_SOURCE_2: CI/CD pipeline logs — migration execution timestamp, success/failure
EVIDENCE_SOURCE_3: __drizzle_migrations table — applied migrations with timestamps
EVIDENCE_SOURCE_4: Pull request reviews — DBA approval before production migration
AUTOMATED EVIDENCE¶
#!/bin/bash
# scripts/compliance/evidence-change-management.sh
set -euo pipefail
DATE=$(date +%Y-%m-%d)
OUTPUT="evidence/change-management-${DATE}.json"
mkdir -p evidence
# Get applied migrations from database
MIGRATIONS=$(psql "$DATABASE_URL" -t -A -c "
SELECT json_agg(row_to_json(t)) FROM (
SELECT id, hash, created_at
FROM __drizzle_migrations
ORDER BY created_at DESC
LIMIT 50
) t;
")
MIGRATIONS=${MIGRATIONS:-null}   # json_agg yields NULL (empty psql output) when no rows
# Get git history for migration files
GIT_LOG=$(cd /path/to/project && git log --format='{"hash":"%H","author":"%an","date":"%ai","message":"%s"}' --since="30 days ago" -- drizzle/migrations/ | paste -sd, -)   # join lines with commas so the JSON array below is valid
cat > "$OUTPUT" <<EOF
{
"date": "${DATE}",
"applied_migrations": ${MIGRATIONS},
"git_changes_last_30_days": [${GIT_LOG}],
"review_policy": "DBA approval required before production migration",
"generated_by": "automated_script"
}
EOF
echo "Change management evidence generated: $OUTPUT"
SCHEMA DIFF EVIDENCE¶
#!/bin/bash
# scripts/compliance/schema-snapshot.sh (generate schema snapshot for change tracking)
set -euo pipefail
DATE=$(date +%Y-%m-%d)
mkdir -p evidence
pg_dump --schema-only --no-owner --no-privileges "$DATABASE_URL" > "evidence/schema-snapshot-${DATE}.sql"
FREQUENCY: after every production migration
COMPARE: diff against previous snapshot to show exactly what changed
STORE: version controlled, timestamped
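The COMPARE step can be sketched as a plain `diff` of consecutive snapshots; the two sample files below stand in for real pg_dump output:

```shell
# Diff two schema snapshots to show exactly what changed between migrations.
set -eu
printf 'CREATE TABLE users (id int);\n' > /tmp/schema-2026-02-24.sql
printf 'CREATE TABLE users (id int);\nCREATE TABLE orders (id int);\n' > /tmp/schema-2026-03-24.sql
# diff exits 1 when files differ, so guard it under set -e
diff -u /tmp/schema-2026-02-24.sql /tmp/schema-2026-03-24.sql > /tmp/schema.diff || true
grep '^+CREATE' /tmp/schema.diff   # added objects show up as + lines
```

Attaching the unified diff to the migration's evidence record gives auditors a readable before/after without re-reading the migration SQL.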
COMPLIANCE:QUERY_AUDIT_LOGGING¶
FOR SENSITIVE DATA ACCESS¶
-- Enable statement logging (log_statement is a global setting, not per-table)
ALTER SYSTEM SET log_statement = 'mod'; -- logs all DDL plus INSERT, UPDATE, DELETE, TRUNCATE, COPY FROM
SELECT pg_reload_conf(); -- apply the change without a restart
-- For full audit: log_statement = 'all' (WARNING: high volume, expensive storage)
PG_AUDIT EXTENSION (RECOMMENDED)¶
-- If available on UpCloud managed DB
CREATE EXTENSION IF NOT EXISTS pgaudit;
-- Log all DDL and sensitive DML
ALTER SYSTEM SET pgaudit.log = 'ddl, write';
ALTER SYSTEM SET pgaudit.log_relation = 'on';
SELECT pg_reload_conf(); -- apply without restart
NOTE: pgaudit provides structured, parseable audit logs; superior to basic statement logging
NOTE: verify pgaudit availability on the UpCloud managed DB tier before relying on it
ALERT: full query logging generates enormous log volume — use targeted auditing
COMPLIANCE:EVIDENCE_SCHEDULE¶
| Evidence | Frequency | Script | Reviewer |
|---|---|---|---|
| Encryption at rest | Quarterly | evidence-encryption.sh | Boris/Yoanna |
| Backup verification | Monthly | verify-backup.sh | Boris/Yoanna + Arjan |
| Access control | Monthly | evidence-access-control.sh | Boris/Yoanna |
| Change management | Per migration | evidence-change-management.sh | Boris/Yoanna |
| Schema snapshot | Per migration | schema-snapshot.sh | Boris/Yoanna |
| Connection logs | Monthly | UpCloud export | Arjan |
| Query audit logs | Monthly | pgaudit export | Boris/Yoanna |
AUTOMATION¶
IDEAL: CronJob in k3s that runs evidence scripts on schedule, stores in versioned evidence repository
FALLBACK: manual execution by Boris/Yoanna with calendar reminders
STORAGE: evidence/ directory in project repo, or dedicated compliance repository
RETENTION: 3 years minimum (ISO 27001 requirement)
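The 3-year retention floor can be enforced mechanically: list evidence files past the threshold so they are moved to archive storage rather than deleted. A sketch; the temp directory and dates are illustrative:

```shell
# Find evidence files older than ~3 years (1095 days) for archival.
set -eu
EV=$(mktemp -d)   # stand-in for the real evidence/ directory
touch -d '2020-01-15' "$EV/encryption-at-rest-2020-01-15.json"
touch "$EV/encryption-at-rest-$(date +%Y-%m-%d).json"
# -mtime +1095 matches files last modified more than ~3 years ago
find "$EV" -name '*.json' -mtime +1095 -print
```

Note the action on matches should be an archive move, not deletion: 3 years is the minimum, and evidence may still be needed for the next audit cycle.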
COMPLIANCE:AUDITOR_QUESTIONS_AND_ANSWERS¶
Q: How is data encrypted at rest?
A: UpCloud managed PostgreSQL uses AES-256 disk encryption. Column-level encryption (pgcrypto) for highly sensitive fields. Evidence: encryption audit script output.
Q: How do you verify backups are recoverable?
A: Monthly automated restore to test database with row count verification, schema integrity check, and application smoke test. Evidence: backup verification script output.
Q: Who has access to the production database?
A: Four roles with least-privilege: readonly, readwrite, admin, superuser. Application uses readwrite. Superuser restricted to infrastructure team. Evidence: access control audit script output.
Q: How are schema changes controlled?
A: All changes via version-controlled migration files, reviewed by DBA before production execution, applied through CI/CD pipeline. Evidence: git history + migration table + PR reviews.
Q: How long are database logs retained?
A: Connection logs and query audit logs retained for 12 months minimum, 3 years in archived storage. Evidence: log retention configuration + archive verification.