
DOMAIN:SECURITY:THREAT_MODELING

OWNER: victoria
ALSO_USED_BY: pol, hugo, piotr, faye, sytske
UPDATED: 2026-03-24
SCOPE: every new client project, every new API endpoint, every architecture change
PREREQUISITE: domains/security/index.md (SECURITY:THREAT_MODELING section)


CORE_PRINCIPLE

RULE: threat model BEFORE writing code — not after
RULE: threat model is a LIVING DOCUMENT — update on every architecture change
RULE: every data flow crossing a trust boundary MUST be analyzed
RULE: threat modeling is a TEAM activity — devs + security + PM together

IF no threat model exists for a project THEN flag as A04:2021 Insecure Design (OWASP Top 10)
IF threat model last updated >6 months ago THEN schedule review
IF new external integration added THEN update threat model immediately


STRIDE_METHODOLOGY

STANDARD: STRIDE — Microsoft Threat Modeling Framework
SOURCE: Shostack, "Threat Modeling: Designing for Security" (2014)

STRIDE_CATEGORIES

| category | threat | property violated | question to ask |
|---|---|---|---|
| Spoofing | attacker pretends to be someone else | Authentication | can someone impersonate a user or service? |
| Tampering | attacker modifies data in transit or at rest | Integrity | can someone change data they shouldn't? |
| Repudiation | attacker denies performing an action | Non-repudiation | can we prove who did what and when? |
| Information Disclosure | attacker reads data they shouldn't | Confidentiality | can someone access unauthorized data? |
| Denial of Service | attacker makes system unavailable | Availability | can someone crash or overload the system? |
| Elevation of Privilege | attacker gains higher access | Authorization | can someone do things they shouldn't be allowed to? |

STRIDE_PER_ELEMENT

RULE: apply STRIDE to EACH element in the data flow diagram

| element type | most relevant threats | example |
|---|---|---|
| external entity | Spoofing, Repudiation | attacker impersonates user via stolen session |
| data flow | Tampering, Information Disclosure, Denial of Service | MITM on unencrypted API call |
| data store | Tampering, Information Disclosure, Repudiation | SQL injection modifying DB, missing audit log |
| process | all six | business logic bypass, privilege escalation |
| trust boundary | all six | every crossing is a potential attack surface |

APPLYING_STRIDE_TO_GE_STACK

NEXT_JS_FRONTEND

  • Spoofing: session hijacking via XSS, cookie theft
  • Tampering: client-side state manipulation, form field injection
  • Information Disclosure: source maps in production, verbose error pages, exposed API keys in client bundle
  • DoS: client-side ReDoS, infinite re-render loops
  • EoP: admin routes accessible without server-side auth check

CHECK: are Next.js API routes protected with middleware auth?
CHECK: is next.config.js productionBrowserSourceMaps set to false?
CHECK: are environment variables prefixed correctly (NEXT_PUBLIC_ only for truly public values)?
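
The source-map and fingerprinting checks above can be pinned in next.config.js. A minimal hardening sketch (illustrative only — merge with the project's existing config; the extra response headers are a suggestion, not a stated GE requirement):

```js
// next.config.js — hardening sketch
/** @type {import('next').NextConfig} */
const nextConfig = {
  productionBrowserSourceMaps: false, // the default, but pin it explicitly
  poweredByHeader: false,             // drop the X-Powered-By fingerprint
  async headers() {
    return [
      {
        source: '/(.*)',
        headers: [
          { key: 'X-Content-Type-Options', value: 'nosniff' },
          { key: 'X-Frame-Options', value: 'DENY' },
          { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
        ],
      },
    ];
  },
};

module.exports = nextConfig;
```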

HONO_API

  • Spoofing: missing auth middleware on routes, JWT algorithm confusion
  • Tampering: request body manipulation, parameter pollution
  • Information Disclosure: stack traces in error responses, overly verbose logging
  • DoS: unbounded request body size, missing rate limiting
  • EoP: broken function-level auth (admin endpoints reachable by regular user)

CHECK: does every Hono route group have auth middleware?
CHECK: is request body size limited? (c.req.raw.body size check or middleware)
CHECK: are Zod schemas used for ALL request validation?
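
The JWT algorithm-confusion threat above is blunted by an explicit algorithm allowlist checked before any signature verification. A self-contained sketch (the RS256-only allowlist is an assumption; in production, pass an `algorithms` option to your JWT library rather than hand-rolling this):

```typescript
// Reject any token whose header `alg` is not explicitly allowed,
// BEFORE attempting signature verification.
const ALLOWED_ALGS = new Set(["RS256"]); // assumption: tokens are issued as RS256

function b64urlDecode(s: string): string {
  // convert base64url to base64, then decode
  return atob(s.replace(/-/g, "+").replace(/_/g, "/"));
}

function hasAllowedAlg(token: string): boolean {
  const parts = token.split(".");
  if (parts.length !== 3) return false;
  try {
    const header = JSON.parse(b64urlDecode(parts[0]));
    return typeof header.alg === "string" && ALLOWED_ALGS.has(header.alg);
  } catch {
    return false; // malformed header: reject
  }
}

// A token downgraded to alg "none" is rejected:
const noneHeader = btoa(JSON.stringify({ alg: "none", typ: "JWT" }));
console.log(hasAllowedAlg(`${noneHeader}.e30.`)); // false

const rsHeader = btoa(JSON.stringify({ alg: "RS256", typ: "JWT" }));
console.log(hasAllowedAlg(`${rsHeader}.e30.sig`)); // true
```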

POSTGRESQL

  • Spoofing: connection without SSL, missing client certificate verification
  • Tampering: SQL injection, missing RLS bypassing tenant isolation
  • Repudiation: no audit trail on data changes
  • Information Disclosure: excessive column access, missing column-level encryption for PII
  • DoS: unbounded queries, missing connection pool limits
  • EoP: overly permissive database roles

CHECK: is RLS enabled for ALL tenant-scoped tables?
CHECK: does the application connect with a least-privilege role (not superuser)?
CHECK: are slow query logs enabled to detect injection attempts?
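
The RLS and least-privilege checks above can be expressed directly in DDL. A sketch assuming a tenant_id column and an application that sets app.tenant_id per transaction (table and role names are placeholders):

```sql
-- Tenant isolation via RLS (assumes the app runs SET LOCAL app.tenant_id = '...')
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
ALTER TABLE orders FORCE ROW LEVEL SECURITY;  -- applies even to the table owner

CREATE POLICY tenant_isolation ON orders
  USING (tenant_id = current_setting('app.tenant_id')::uuid);

-- Least-privilege application role: DML only, no DDL, never superuser
CREATE ROLE app_user LOGIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON orders TO app_user;
```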

K3S_CLUSTER

  • Spoofing: service impersonation without mTLS
  • Tampering: container image without signature verification
  • Information Disclosure: secrets mounted as env vars visible in /proc
  • DoS: pod without resource limits consuming all node resources
  • EoP: container running as root, hostPath mounts, privileged mode

CHECK: are NetworkPolicies in place for every namespace?
CHECK: are PodSecurityStandards enforced (restricted profile)?
CHECK: are all images from trusted registries only?
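
The EoP and DoS items above map to concrete pod settings. A sketch compatible with the PodSecurity "restricted" profile (names, registry, and limits are placeholders to adjust per workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  securityContext:
    runAsNonRoot: true          # no root in the container
    runAsUser: 10001
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: api
      image: registry.example.internal/api:1.2.3  # trusted registry only
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
      resources:                # prevents one pod from starving the node
        requests: { cpu: 100m, memory: 128Mi }
        limits: { cpu: "1", memory: 512Mi }
```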


DATA_FLOW_DIAGRAMS

RULE: draw DFD BEFORE analyzing threats — you cannot model what you cannot see
RULE: start with Level 0 (context diagram), then Level 1 (major processes)

DFD_ELEMENTS

| symbol | element | description |
|---|---|---|
| rectangle | external entity | users, third-party services, external systems |
| rounded rectangle | process | application component that transforms data |
| parallel lines | data store | database, file system, cache, queue |
| arrow | data flow | data moving between elements |
| dashed line | trust boundary | security perimeter change |

TRUST_BOUNDARIES_FOR_GE_PROJECTS

TRUST BOUNDARY MAP (typical GE client project):

[Internet] ---- TRUST BOUNDARY 1: CDN/WAF (BunnyCDN) ----
  |
[Browser] ---- TRUST BOUNDARY 2: Next.js edge/server ----
  |
[Next.js SSR] ---- TRUST BOUNDARY 3: API layer ----
  |
[Hono API] ---- TRUST BOUNDARY 4: Database ----
  |
[PostgreSQL with RLS]
  |
[Hono API] ---- TRUST BOUNDARY 5: External services ----
  |
[Keycloak / Vault / Third-party APIs]
  |
[k3s cluster] ---- TRUST BOUNDARY 6: Host OS ----
  |
[Node OS / k3s runtime]

LEVEL_0_DFD_TEMPLATE

FOR EVERY GE CLIENT PROJECT, draw:

1. Client browser (external entity)
2. CDN/WAF (trust boundary)
3. Next.js application (process)
4. API backend (process)
5. Database (data store)
6. Auth provider (external entity — Keycloak)
7. Secrets manager (external entity — Vault)
8. External integrations (external entities — payment, email, etc.)
9. ALL data flows between them with labels
10. ALL trust boundaries marked
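
The Level 0 template above can be sketched in Mermaid for a version-controlled diagram in the wiki (element names are placeholders; trust boundaries can be marked with subgraph blocks):

```mermaid
flowchart LR
  browser[Client browser] -->|HTTPS| cdn[CDN/WAF]
  cdn --> next[Next.js app]
  next -->|JSON over TLS| api[Hono API]
  api --> db[(PostgreSQL)]
  api --> kc[Keycloak]
  api --> vault[Vault]
  api --> ext[External integrations]
```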

LEVEL_1_DFD_TEMPLATE

EXPAND each process from Level 0:

Next.js application:
  - Static page rendering
  - Server-side rendering (getServerSideProps / server components)
  - API routes (if used)
  - Middleware (auth checks, redirects)

API backend (Hono):
  - Auth middleware
  - Request validation layer
  - Business logic handlers
  - Database access layer
  - External service clients

RISK_SCORING

STANDARD: OWASP Risk Rating Methodology
FORMULA: Risk = Likelihood x Impact

LIKELIHOOD_FACTORS

| factor | low (1) | medium (2) | high (3) |
|---|---|---|---|
| skill required | advanced exploitation | moderate skill | script kiddie |
| motive | unlikely target | possible target | high-value target |
| opportunity | requires insider access | requires network access | public internet |
| population | small attacker group | moderate | large (automated bots) |
| discoverability | requires deep analysis | requires some probing | obvious/public |

LIKELIHOOD_SCORE = average of all factors (1-3)

IMPACT_FACTORS

| factor | low (1) | medium (2) | high (3) |
|---|---|---|---|
| confidentiality | non-sensitive data | PII of few users | mass PII breach |
| integrity | minor data corruption | business data altered | financial/health data altered |
| availability | degraded performance | partial outage | full outage |
| financial | < EUR 1,000 | EUR 1,000-50,000 | > EUR 50,000 |
| compliance | no regulatory impact | minor violation | GDPR Art. 33 notification required |
| reputation | unnoticed | local coverage | national press |

IMPACT_SCORE = highest single factor (1-3)

RISK_MATRIX

              IMPACT
              Low(1)    Medium(2)   High(3)
LIKELIHOOD
High(3)       MEDIUM    HIGH        CRITICAL
Medium(2)     LOW       MEDIUM      HIGH
Low(1)        INFO      LOW         MEDIUM
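
The scoring rules above can be expressed as a small helper: likelihood = average of the factors, impact = highest single factor, risk = matrix lookup. A sketch (rounding the likelihood average to the nearest band is an assumption; the methodology above leaves it unspecified):

```typescript
type Level = 1 | 2 | 3;

// LIKELIHOOD_SCORE = average of all factors, clamped to 1-3
function likelihoodScore(factors: number[]): Level {
  const avg = factors.reduce((a, b) => a + b, 0) / factors.length;
  return Math.min(3, Math.max(1, Math.round(avg))) as Level;
}

// IMPACT_SCORE = highest single factor
function impactScore(factors: number[]): Level {
  return Math.max(...factors) as Level;
}

// RISK_MATRIX, indexed as MATRIX[likelihood][impact]
const MATRIX: Record<Level, Record<Level, string>> = {
  3: { 1: "MEDIUM", 2: "HIGH", 3: "CRITICAL" },
  2: { 1: "LOW", 2: "MEDIUM", 3: "HIGH" },
  1: { 1: "INFO", 2: "LOW", 3: "MEDIUM" },
};

function riskLevel(likelihood: Level, impact: Level): string {
  return MATRIX[likelihood][impact];
}

// Example: T-01 from the template — likelihood 2, impact 3
console.log(riskLevel(2, 3)); // HIGH
console.log(riskLevel(likelihoodScore([1, 2, 2, 3, 2]), impactScore([1, 3, 2]))); // HIGH
```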

RISK_RESPONSE

| risk level | response | SLA |
|---|---|---|
| CRITICAL | fix before deployment, escalate to human | 24 hours |
| HIGH | fix before next release | 7 days |
| MEDIUM | plan fix in upcoming sprint | 30 days |
| LOW | add to backlog | 90 days |
| INFO | document, no action required | track only |

GE_THREAT_MODEL_TEMPLATE

TEMPLATE_HEADER

# Threat Model: {client_name} - {project_name}

**Version:** 1.0
**Date:** {date}
**Author:** {agent_name}
**Reviewed by:** victoria, pol
**Status:** DRAFT | REVIEWED | APPROVED

## Project Context
- **Client:** {client_name}
- **Description:** {one paragraph}
- **Stack:** Next.js + Hono + PostgreSQL (GE standard)
- **Auth:** NextAuth + Keycloak
- **Hosting:** k3s cluster
- **External integrations:** {list}
- **Data sensitivity:** {LOW | MEDIUM | HIGH | CRITICAL}
- **Regulatory:** GDPR, {others}

TEMPLATE_BODY

## Data Flow Diagram
{Level 0 DFD — paste or link to diagram}

## Trust Boundaries
| ID | boundary | description | controls |
|---|---|---|---|
| TB-01 | Internet → CDN | public traffic enters | BunnyCDN WAF, rate limiting |
| TB-02 | CDN → Next.js | filtered traffic | TLS, CSP headers |
| TB-03 | Next.js → Hono API | internal API calls | JWT validation, CORS |
| TB-04 | Hono API → PostgreSQL | data persistence | RLS, parameterized queries, SSL |
| TB-05 | Hono API → Keycloak | auth delegation | OIDC, TLS |
| TB-06 | Hono API → Vault | secrets retrieval | AppRole auth, TLS |

## Assets
| ID | asset | sensitivity | owner |
|---|---|---|---|
| A-01 | user credentials | CRITICAL | auth system |
| A-02 | user PII | HIGH | database |
| A-03 | session tokens | HIGH | auth system |
| A-04 | business data | MEDIUM-HIGH | database |
| A-05 | API keys/secrets | CRITICAL | Vault |

## Threats (STRIDE)
| ID | category | threat | element | likelihood | impact | risk | mitigation |
|---|---|---|---|---|---|---|---|
| T-01 | Spoofing | session hijacking via XSS | browser→Next.js | 2 | 3 | HIGH | CSP, httpOnly cookies, SameSite |
| T-02 | Tampering | SQL injection | API→PostgreSQL | 1 | 3 | MEDIUM | Drizzle ORM, parameterized queries |
| T-03 | Info Disclosure | PII in error responses | API→browser | 2 | 2 | MEDIUM | global error handler, sanitized responses |
| {continue for all identified threats} |

## Mitigations
| ID | threat_ids | mitigation | status | owner |
|---|---|---|---|---|
| M-01 | T-01 | implement CSP header with nonce | PLANNED | {dev agent} |
| {continue} |

## Residual Risks
{risks that cannot be fully mitigated — document and accept}

## Review Schedule
- Next review: {date + 3 months or next major change}

WHEN_TO_THREAT_MODEL

MANDATORY_TRIGGERS

| trigger | scope | who leads |
|---|---|---|
| new client project onboarding | full model | victoria |
| new API endpoint | incremental — STRIDE on new endpoint | dev agent + victoria review |
| new external integration | incremental — trust boundary analysis | dev agent + victoria review |
| architecture change | update existing model | victoria |
| new auth flow | full auth threat model | hugo |
| new data type with PII | data flow + privacy analysis | victoria + compliance |
| pre-launch security review | validate model matches implementation | pol |
| post-incident | update model with discovered threats | victoria |

INCREMENTAL_THREAT_MODELING

RULE: not every change needs a full model — use incremental approach for small changes

IF new API endpoint THEN:
  1. identify data flows (input, output, stores accessed)
  2. apply STRIDE to the endpoint
  3. score each threat (likelihood x impact)
  4. document mitigations
  5. add to existing threat model document
  TIME: 15-30 minutes

IF new external integration THEN:
  1. add external entity to DFD
  2. identify new trust boundary
  3. apply STRIDE to all new data flows
  4. review auth mechanism for integration
  5. update threat model document
  TIME: 30-60 minutes

IF architecture change THEN:
  1. update DFD completely
  2. identify changed trust boundaries
  3. re-apply STRIDE to all affected elements
  4. verify all existing mitigations are still valid
  5. get victoria review
  TIME: 1-2 hours

TOOLS_FOR_THREAT_MODELING

DIAGRAMMING

TOOL: draw.io / diagrams.net — free, DFD templates available
TOOL: Mermaid in Markdown — for version-controlled diagrams in wiki
TOOL: Microsoft Threat Modeling Tool — generates STRIDE threats from DFD automatically (Windows only)
TOOL: OWASP Threat Dragon — open-source, browser-based, generates report

AUTOMATION

TOOL: threagile — threat model as code (YAML input, risk report output)
RUN: threagile -model threat-model.yaml -output report/
BENEFIT: version-controlled, CI-integrated threat models
TOOL: pytm (Python Threat Modeling) — programmatic DFD + threat generation
RUN: python3 tm.py --dfd | dot -Tpng -o dfd.png

GE_CONVENTION

RULE: store threat models at wiki/docs/clients/{client}/security/threat-model.md
RULE: link from project README to threat model
RULE: threat model MUST be reviewed by victoria before deployment approval
RULE: incremental updates go in same file with version history at top


COMMON_MISTAKES

ANTI_PATTERN: threat modeling only the happy path — ignoring error flows, admin paths, background jobs
FIX: model ALL data flows including error responses, cron jobs, webhooks, admin panels

ANTI_PATTERN: listing threats without mitigations
FIX: every threat MUST have a mitigation or explicit risk acceptance

ANTI_PATTERN: threat model as one-time document filed and forgotten
FIX: link threat model to CI — generate reminder on architecture changes

ANTI_PATTERN: only modeling external threats — ignoring insider threats
FIX: include malicious insider in threat actors (compromised agent, rogue admin)

ANTI_PATTERN: treating all threats as equal priority
FIX: use risk scoring — focus effort on HIGH/CRITICAL, accept LOW/INFO

ANTI_PATTERN: modeling at wrong granularity — too high (misses detail) or too low (analysis paralysis)
FIX: Level 0 for overview, Level 1 for each major component, stop there unless HIGH risk area


PRIVACY_THREAT_MODELING

METHODOLOGY: LINDDUN (supplements STRIDE with privacy focus)
STANDARD: GDPR Art. 25 — Data Protection by Design and by Default
STANDARD: GDPR Art. 35 — Data Protection Impact Assessment (DPIA)

LINDDUN_CATEGORIES

| category | threat | question |
|---|---|---|
| Linkability | connecting two items of interest | can actions be linked to the same person across contexts? |
| Identifiability | linking data to an individual | can a pseudonymous user be identified from their data? |
| Non-repudiation | inability to deny an action | does the system retain evidence that removes a user's plausible deniability? |
| Detectability | ability to detect the existence of data | can an attacker detect that data exists without reading it? |
| Disclosure | exposure of personal data | can PII be accessed by an unauthorized party? |
| Unawareness | user not informed of processing | does the user know what we do with their data? |
| Non-compliance | violating privacy regulation | are we GDPR compliant? |

IF project handles PII THEN apply LINDDUN alongside STRIDE
IF DPIA required (GDPR Art. 35) THEN LINDDUN analysis is mandatory input


SELF_CHECK

BEFORE_COMPLETING_THREAT_MODEL:
- [ ] DFD drawn with all elements, flows, and trust boundaries?
- [ ] STRIDE applied to every element?
- [ ] risk scored for every identified threat?
- [ ] mitigations documented for HIGH and CRITICAL threats?
- [ ] residual risks explicitly documented and accepted?
- [ ] privacy threats assessed (LINDDUN) if PII involved?
- [ ] review scheduled for next architecture change?
- [ ] stored at correct wiki path?


READ_ALSO: domains/security/index.md, domains/security/owasp-testing.md, domains/security/authentication-patterns.md