DOMAIN: AUDIT_PROCEDURES¶
OWNER: amber
UPDATED: 2026-03-24
SCOPE: internal audit methodology per ISO 27001 clause 9.2
SERVES: amber (auditor), julian (compliance officer)
ALSO_USED_BY: dirk-jan (management review), all agents (audit subjects)
OVERVIEW¶
ISO 27001 clause 9.2 requires internal audits to be conducted at planned intervals.
PURPOSE: determine whether the ISMS conforms to requirements and is effectively implemented and maintained.
AUDITOR: Amber (Participation Auditor) — shared team, independent of all audited processes.
INDEPENDENCE: Amber does not implement controls — only evaluates them. This satisfies ISO 27001 auditor independence requirement.
AUDIT_PLANNING¶
TAG:ISO27001_9.2 TAG:AUDIT_PLANNING
ANNUAL_AUDIT_PROGRAM¶
CYCLE: calendar year (January-December)
FREQUENCY: quarterly internal audits, covering the full ISMS over 12 months
APPROACH: risk-based — higher risk areas audited more frequently
Q1_AUDIT (January-March):
- CC6 (access controls) — critical, high change frequency
- A.8.2 (privileged access) — verify no privilege creep
- A.8.5 (authentication) — WebAuthn, Vault tokens
- Access review verification — confirm quarterly access reviews performed
RATIONALE: access controls are highest-risk area; set baseline for year
Q2_AUDIT (April-June):
- CC8 (change management) — anti-LLM pipeline effectiveness
- A.8.25 (SDLC) — secure development lifecycle
- A.8.28 (secure coding) — Semgrep, code review quality
- A.5.3 (segregation of duties) — verify pipeline enforcement
RATIONALE: development process is GE's core activity; verify controls operating
Q3_AUDIT (July-September):
- CC7 (system operations) — monitoring, incident response
- A.5.24-A.5.27 (incident management) — procedures, records, learning
- GDPR compliance — DPA status, DSR handling, breach register
- Data protection controls — masking, deletion, retention
RATIONALE: mid-year check on operational and privacy controls
Q4_AUDIT (October-December):
- CC1 (control environment) — constitution, organizational structure
- CC3 (risk assessment) — risk register currency
- CC4 (monitoring) — control effectiveness trends
- Management review preparation — aggregate findings for annual review
RATIONALE: year-end review, feed into management review and planning
AUDIT_SCOPE_DETERMINATION¶
PER_AUDIT:
1. Define specific controls/processes in scope
2. Define audit period (evidence from which timeframe)
3. Define systems and agents to be examined
4. Identify previous audit findings to follow up
5. Define sampling methodology (see SAMPLING below)
SCOPE_DOCUMENT_TEMPLATE:
AUDIT: [Q#-YEAR] [FOCUS AREA]
PERIOD: [start date] to [end date]
CONTROLS_IN_SCOPE: [list of control IDs]
AGENTS_IN_SCOPE: [list of agents to examine]
SYSTEMS_IN_SCOPE: [list of systems/tools]
PREVIOUS_FINDINGS: [list of open findings to follow up]
SAMPLING_APPROACH: [statistical/judgmental/combination]
ESTIMATED_DURATION: [days]
AGENT: amber (planning), julian (scope validation)
EVIDENCE_COLLECTION¶
TAG:EVIDENCE_COLLECTION
EVIDENCE_TYPES¶
TYPE_1_DOCUMENTARY:
- Policy documents (constitution, CODEBASE-STANDARDS.md, wiki domain pages)
- Procedure documents (incident response, change management, access management)
- Records (incident logs, change records, access reviews, training records)
- Configurations (RBAC manifests, network policies, Vault policies, alert rules)
SOURCE: wiki brain, PostgreSQL, git history, k8s manifests
TYPE_2_SYSTEM_GENERATED:
- Automated scan results (Trivy, Semgrep, kube-bench)
- Monitoring data (Grafana dashboards, Loki queries, health dumps)
- Pipeline execution logs (DAG records, PTY captures)
- Cost gate enforcement logs
SOURCE: PostgreSQL (SSOT), monitoring systems, CI/CD pipeline
TYPE_3_TESTIMONIAL:
- Agent session transcripts (PTY capture)
- Discussion records (multi-agent consensus decisions)
- Escalation records (human notification files)
SOURCE: PTY capture files, PostgreSQL discussions table, notification files
TYPE_4_OBSERVATIONAL:
- Process walkthroughs (observe pipeline execution end-to-end)
- System configuration inspection (verify running state matches documented state)
- Access verification (attempt access as unauthorized agent)
SOURCE: live system observation, test execution
EVIDENCE_COLLECTION_METHODS¶
METHOD_AUTOMATED:
- Query PostgreSQL for structured evidence (incidents, changes, access records)
- Parse git log for change management evidence
- Retrieve scan results from CI/CD pipeline
- Extract monitoring data from Grafana/Loki APIs
- Check k8s resource state against documented manifests
TOOL: scripts/audit-evidence-collect.sh (to be developed)
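Until scripts/audit-evidence-collect.sh exists, the period check applied to collected records can be sketched as follows (a minimal sketch assuming a hypothetical record shape; the real PostgreSQL schema may differ):

```python
from datetime import date

# Hypothetical record shape; adapt the field names to the actual schema.
def filter_audit_evidence(records, period_start, period_end):
    """Keep only records whose timestamp falls inside the audit period.

    Records outside the period are excluded, matching the
    EVIDENCE_INTEGRITY rule that evidence is collected within
    the audit period.
    """
    return [
        r for r in records
        if period_start <= r["collected_on"] <= period_end
    ]

records = [
    {"id": "CHG-101", "collected_on": date(2026, 2, 10)},
    {"id": "CHG-102", "collected_on": date(2025, 12, 30)},  # outside Q1
    {"id": "INC-007", "collected_on": date(2026, 3, 1)},
]
in_scope = filter_audit_evidence(records, date(2026, 1, 1), date(2026, 3, 31))
print([r["id"] for r in in_scope])  # Q1 period keeps CHG-101 and INC-007
```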
METHOD_MANUAL:
- Review policy documents for currency and completeness
- Inspect configuration files for correctness
- Interviews (not applicable to AI agents — use session transcript review instead)
- Walkthrough of procedures (trace a work item through full pipeline)
EVIDENCE_INTEGRITY¶
REQUIREMENT: evidence must be reliable, relevant, and sufficient
INTEGRITY_CHECKS:
- Timestamps verified (not backdated)
- Records from SSOT (PostgreSQL, not filesystem copies)
- Git history immutable (signed commits where applicable)
- Evidence collected within audit period
- No evidence modified after collection (read-only access during audit)
AGENT: amber (collection), boris (database evidence queries)
SAMPLING_STRATEGIES¶
TAG:SAMPLING
STATISTICAL_SAMPLING¶
WHEN_USED: large populations of similar items (e.g., PRs, access reviews, incidents)
METHOD: random selection with defined confidence level and tolerable error rate
SAMPLE_SIZE_GUIDANCE:
- Population ≤ 50: examine all (100%)
- Population 51-100: sample 25-30 items
- Population 101-500: sample 30-50 items
- Population > 500: sample 50-60 items (or use statistical tables)
SELECTION: random number generator, documented seed for reproducibility
DOCUMENTATION: record population size, sample size, selection method, confidence level
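The guidance above can be sketched as a reproducible selection routine (a minimal sketch; the size thresholds are the table's own guidance, not a statistical derivation):

```python
import random

def sample_size(population_size):
    """Sample-size bands from the SAMPLE_SIZE_GUIDANCE table."""
    if population_size <= 50:
        return population_size        # examine all
    if population_size <= 100:
        return 30
    if population_size <= 500:
        return 50
    return 60

def draw_sample(item_ids, seed):
    """Random selection with a documented seed for reproducibility."""
    n = sample_size(len(item_ids))
    rng = random.Random(seed)         # record this seed in the workpapers
    return sorted(rng.sample(item_ids, n))

prs = [f"PR-{i:04d}" for i in range(1, 201)]      # population of 200
sample = draw_sample(prs, seed=20260115)
assert len(sample) == 50
assert draw_sample(prs, seed=20260115) == sample  # same seed, same sample
```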
JUDGMENTAL_SAMPLING¶
WHEN_USED: small populations, risk-targeted examination, follow-up on specific concerns
CRITERIA:
- High-value transactions (large client projects)
- Changes to critical systems (production infrastructure)
- Access changes for privileged accounts
- Incidents classified SEV1 or SEV2
- Items from periods of known system stress
COMBINATION_APPROACH (recommended)¶
STANDARD: statistical sample for baseline + judgmental overlay for risk areas
EXAMPLE_CC8_AUDIT:
1. Statistical: random sample of 30 PRs from audit period
2. Judgmental: ALL PRs touching security-critical code (authentication, authorization, cryptography)
3. Judgmental: ALL emergency/hotfix changes
4. Follow-up: ALL PRs from agents with previous findings
DOCUMENTATION: clearly separate statistical and judgmental samples in audit workpapers
AGENT: amber (sampling design and execution)
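The CC8 example above can be sketched as a partitioning step, assuming hypothetical PR record fields (security_critical, emergency, author_has_open_findings):

```python
import random

# Hypothetical PR record fields; adapt to the real change records.
def build_cc8_sample(prs, seed, n_statistical=30):
    """Statistical baseline plus judgmental overlay, returned
    separately so the workpapers can document each sample type."""
    judgmental = {
        p["id"] for p in prs
        if p["security_critical"] or p["emergency"]
        or p["author_has_open_findings"]
    }
    remainder = [p["id"] for p in prs if p["id"] not in judgmental]
    rng = random.Random(seed)  # seed recorded in the workpapers
    statistical = sorted(
        rng.sample(remainder, min(n_statistical, len(remainder)))
    )
    return {"statistical": statistical, "judgmental": sorted(judgmental)}
```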
FINDING_CLASSIFICATION¶
TAG:FINDING_CLASSIFICATION
MAJOR_NONCONFORMITY¶
DEFINITION: absence or total breakdown of a control required by the ISMS
CRITERIA:
- Required control not implemented at all
- Control consistently not operating as designed (systematic failure)
- Breach of legal/regulatory requirement
- Risk of significant harm to information security
EXAMPLES:
- No access reviews performed in 12 months
- Production deployments with zero code review (pipeline bypassed)
- Personal data breach not recorded in breach register
- Privileged access granted without authorization process
- No incident response procedures documented or tested
RESPONSE_REQUIRED: corrective action plan within 30 days
VERIFICATION: auditor verifies correction before next audit
ESCALATION: immediately reported to Julian and Dirk-Jan
IMPACT_ON_CERTIFICATION: must be resolved before ISO 27001 certification (or recertification)
MINOR_NONCONFORMITY¶
DEFINITION: control exists but has isolated failures or weaknesses
CRITERIA:
- Control generally operates but with occasional lapses
- Documentation exists but is outdated or incomplete
- Process followed but not consistently documented
- Single instance of control failure (not systematic)
EXAMPLES:
- Access review performed but 2 of 15 agents not covered
- Incident record missing one required field
- Policy document not reviewed within annual cycle
- One PR merged without complete review checklist
RESPONSE_REQUIRED: corrective action plan within 60 days
VERIFICATION: verified at next scheduled audit
ESCALATION: reported to Julian
IMPACT_ON_CERTIFICATION: acceptable if corrective action demonstrated
OBSERVATION (OPPORTUNITY FOR IMPROVEMENT)¶
DEFINITION: not a nonconformity but could be improved
CRITERIA:
- Control effective but could be more efficient
- Good practice not yet adopted
- Emerging risk not yet addressed
- Process works but lacks formalization
EXAMPLES:
- Evidence collection could be further automated
- Monitoring thresholds could be more granular
- Documentation could include more cross-references
- Training content could be updated with recent examples
RESPONSE: optional improvement, tracked but not mandatory
ESCALATION: included in audit report for management awareness
IMPACT_ON_CERTIFICATION: none
FINDING_TEMPLATE¶
FINDING_ID: [AUDIT-Q#-YEAR-NNN]
CLASSIFICATION: [major | minor | observation]
CONTROL_REF: [ISO 27001 / SOC 2 control reference]
DESCRIPTION: [factual description of what was found]
EVIDENCE: [specific evidence supporting the finding]
RISK: [what could go wrong if not addressed]
RECOMMENDATION: [suggested corrective action]
AGENT_AFFECTED: [which agent(s) or process]
DUE_DATE: [corrective action deadline]
OWNER: [who is responsible for correction]
STATUS: [open | in-progress | resolved | verified]
AGENT: amber (classification), julian (review and acceptance)
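The classification criteria above can be expressed as a first-pass helper (a sketch only; the final classification remains auditor judgment):

```python
def classify_finding(control_missing, systematic_failure, legal_breach,
                     isolated_lapse):
    """First-pass classification from the criteria above.

    A sketch, not a substitute for judgment: amber classifies,
    julian reviews and accepts.
    """
    if control_missing or systematic_failure or legal_breach:
        return "major"       # absence or total breakdown of a control
    if isolated_lapse:
        return "minor"       # control exists, isolated failure
    return "observation"     # opportunity for improvement
```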
CORRECTIVE_ACTION_TRACKING¶
TAG:CORRECTIVE_ACTION TAG:CAR
CORRECTIVE_ACTION_REQUEST (CAR)¶
TRIGGER: major or minor nonconformity identified during audit
PROCESS:
1. Amber issues CAR with finding details and deadline
2. Responsible agent/owner acknowledges CAR within 5 business days
3. Owner performs root cause analysis (not just symptom fix)
4. Owner proposes corrective action (address root cause)
5. Julian reviews and approves proposed corrective action
6. Owner implements corrective action
7. Owner provides evidence of implementation
8. Amber verifies corrective action effective
9. CAR closed with verification record
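The nine-step process maps onto the status values used in the corrective_actions tracking table (open, acknowledged, in-progress, implemented, verified, closed); a minimal sketch of the allowed transitions:

```python
# Each status may advance only to the next step in the CAR process;
# skipping steps (e.g. closing before verification) is rejected.
CAR_TRANSITIONS = {
    "open": {"acknowledged"},
    "acknowledged": {"in-progress"},
    "in-progress": {"implemented"},
    "implemented": {"verified"},
    "verified": {"closed"},
    "closed": set(),
}

def advance_car(current, new):
    """Validate a CAR status change against the process order."""
    if new not in CAR_TRANSITIONS.get(current, set()):
        raise ValueError(f"invalid CAR transition: {current} -> {new}")
    return new
```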
ROOT_CAUSE_ANALYSIS¶
REQUIREMENT: corrective action must address the ROOT CAUSE, not just the finding
METHODS:
- 5 Whys: trace finding back to underlying cause
- Fishbone/Ishikawa: categorize potential causes (process, people, technology, policy)
ANTI_PATTERN: "fix the specific instance" without addressing why it happened
EXAMPLE:
- Finding: PR merged without code review
- Symptom fix: review the specific PR retroactively
- Root cause fix: add branch protection rule preventing merge without approval
- Root cause fix: add monitoring alert when branch protection is bypassed
TRACKING¶
STORAGE: PostgreSQL corrective_actions table
FIELDS:
- car_id, finding_id, audit_id
- description, root_cause, proposed_action
- owner_agent, due_date
- implementation_evidence, verified_by, verified_at
- status (open, acknowledged, in-progress, implemented, verified, closed)
DASHBOARD: admin-ui compliance dashboard (corrective action status, aging, trends)
ESCALATION: overdue CARs escalated weekly (amber -> julian -> dirk-jan at 30 days overdue)
AGENT: amber (tracking), julian (oversight), responsible agent (implementation)
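The escalation rule above can be sketched as a small routing helper (a minimal sketch of the stated thresholds):

```python
from datetime import date

def escalation_target(due_date, today):
    """Who receives the weekly escalation for an overdue CAR.

    Follows the rule above: amber escalates overdue CARs to julian
    weekly, and to dirk-jan once a CAR is 30 days overdue.
    """
    days_overdue = (today - due_date).days
    if days_overdue <= 0:
        return None                       # not overdue, no escalation
    return "dirk-jan" if days_overdue >= 30 else "julian"
```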
MANAGEMENT_REVIEW_INPUT¶
TAG:ISO27001_9.3 TAG:MANAGEMENT_REVIEW
ISO_27001_CLAUSE_9.3_REQUIREMENTS¶
FREQUENCY: at least annually (GE: quarterly summary, annual comprehensive)
ATTENDEES: Dirk-Jan (management), Julian (compliance), Amber (audit)
REQUIRED_INPUT (clause 9.3.2)¶
INPUT_A — STATUS_OF_PREVIOUS_REVIEWS:
- Previous management review actions: completed, in-progress, overdue
- Trend: are actions being completed on time?
PRODUCED_BY: amber (action tracking)
INPUT_B — CHANGES_IN_EXTERNAL_ISSUES:
- New regulations (EU AI Act high-risk Aug 2026, CRA Sep 2026, NIS2 Q2 2026)
- New threat landscape developments
- Industry changes affecting security requirements
- Client requirement changes
PRODUCED_BY: julian (regulatory monitoring), annegreet (threat intelligence)
INPUT_C — CHANGES_IN_INTERNAL_ISSUES:
- New agents onboarded or decommissioned
- Infrastructure changes (k3s, PostgreSQL, Redis)
- Process changes (pipeline modifications, new tools)
- Organizational changes (new teams, role changes)
PRODUCED_BY: julian (internal change log)
INPUT_D — INFORMATION_SECURITY_PERFORMANCE:
- Nonconformities and corrective actions (count, trends, aging)
- Monitoring and measurement results (SLA compliance, scan results)
- Audit results (findings per category, closure rates)
- Fulfillment of information security objectives
PRODUCED_BY: amber (audit results), ron (monitoring data)
INPUT_E — FEEDBACK:
- Client feedback on security posture
- Agent session learnings related to security
- Incident post-mortem findings
- Discussion outcomes affecting security
PRODUCED_BY: eltjo (learning analysis), mira (incident feedback)
INPUT_F — RISK_ASSESSMENT_RESULTS:
- Current risk register status
- New risks identified since last review
- Risk treatment plan progress
- Residual risk levels
PRODUCED_BY: julian (risk register)
INPUT_G — OPPORTUNITIES_FOR_IMPROVEMENT:
- Observations from audits (not nonconformities)
- Automation opportunities for evidence collection
- Process efficiency improvements
- New security tools or techniques
PRODUCED_BY: amber (observations), annegreet (knowledge curation)
REQUIRED_OUTPUT (clause 9.3.3)¶
OUTPUT_A: decisions on continual improvement opportunities
OUTPUT_B: any need for changes to the ISMS
OUTPUT_C: resource requirements (budget, agents, tools)
MANAGEMENT_REVIEW_TEMPLATE¶
# Management Review [Q#-YEAR]
Date: [date]
Attendees: [names]
## A. Previous Review Actions
| Action | Owner | Status | Notes |
|--------|-------|--------|-------|
## B. External Changes
[regulatory, threat, industry, client]
## C. Internal Changes
[agents, infrastructure, processes, organization]
## D. Performance
- Nonconformities: [major: #, minor: #, obs: #]
- Corrective actions: [open: #, closed: #, overdue: #]
- SLA compliance: [%]
- Scan results: [critical: #, high: #]
- Incident count: [SEV1: #, SEV2: #, SEV3: #, SEV4: #]
## E. Feedback
[client, agent, incident, discussion summaries]
## F. Risk Assessment
[new risks, changed risks, risk treatment progress]
## G. Improvement Opportunities
[observations, automation, tools]
## Decisions
1. [decision with rationale]
## Action Items
| Action | Owner | Deadline |
|--------|-------|----------|
## Resource Requirements
[budget, agents, tools needed]
STORAGE: PostgreSQL management_reviews table, wiki page for human-readable version
AGENT: amber (preparation), julian (co-preparation), dirk-jan (approval)
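The counts in section D of the template can be aggregated from finding and CAR records; a minimal sketch assuming hypothetical field names (classification, status):

```python
from collections import Counter

def performance_summary(findings, cars):
    """Aggregate counts for section D of the management review."""
    by_class = Counter(f["classification"] for f in findings)
    car_status = Counter(c["status"] for c in cars)
    return {
        "nonconformities": {
            "major": by_class.get("major", 0),
            "minor": by_class.get("minor", 0),
            "obs": by_class.get("observation", 0),
        },
        "corrective_actions": {
            "open": car_status.get("open", 0),
            "closed": car_status.get("closed", 0),
        },
    }
```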
AUDIT_REPORT_FORMAT¶
TAG:AUDIT_REPORT
STRUCTURE¶
# Internal Audit Report [Q#-YEAR]
AUDIT_ID: IA-[YEAR]-Q[#]
AUDITOR: amber
PERIOD: [start] to [end]
REPORT_DATE: [date]
CLASSIFICATION: INTERNAL — CONFIDENTIAL
## 1. Executive Summary
[2-3 paragraphs: scope, key findings, overall assessment]
OPINION: [conforming | conforming with findings | significant concerns]
## 2. Scope
[controls, systems, agents, period, sampling approach]
## 3. Findings
### Finding 1: [AUDIT-Q#-YEAR-001]
[finding template as above]
### Finding 2: [AUDIT-Q#-YEAR-002]
...
## 4. Follow-Up on Previous Findings
| Previous Finding | Status | Evidence | Assessment |
|-----------------|--------|----------|------------|
## 5. Observations (Opportunities for Improvement)
[numbered list with rationale]
## 6. Conclusion
[overall ISMS health assessment, key risks, recommendations]
## 7. Distribution
[who receives this report]
DISTRIBUTION: julian, dirk-jan, and responsible agents for specific findings
RETENTION: 5 years minimum
STORAGE: PostgreSQL audit_reports table (SSOT), wiki for management review access
AUDIT_INDEPENDENCE¶
TAG:INDEPENDENCE
REQUIREMENT: auditors shall not audit their own work (ISO 27001 clause 9.2(e))
GE_IMPLEMENTATION:
- Amber's role is EXCLUSIVELY audit — Amber does not implement controls
- Amber does not participate in development, operations, or incident response
- Amber does not author policies (Julian does)
- Amber does not configure systems (Rutger does)
- Amber does not manage access (Julian/Rutger do)
VERIFICATION: agent role definitions in AGENT-REGISTRY.json confirm no overlap
EXCEPTION: if Amber must audit the audit process itself, Julian performs that specific audit
CONTINUAL_IMPROVEMENT¶
TAG:CONTINUAL_IMPROVEMENT TAG:ISO27001_10
PRINCIPLE: audit is not a point-in-time exercise — it feeds continual improvement
MECHANISM:
1. Audit findings produce CARs (corrective actions)
2. CARs drive control improvements
3. Improved controls produce better evidence
4. Better evidence enables more efficient audits
5. Management review allocates resources for improvement
6. Wiki brain captures audit learnings for all agents
METRICS_TRACKED:
- Finding trend by category (increasing/decreasing/stable)
- CAR closure rate and aging
- Repeat findings (same control, multiple audits)
- Time from finding to verification
- Evidence collection automation percentage
AGENT: amber (metrics), julian (improvement program)
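Two of the tracked metrics can be sketched as follows (hypothetical record fields status, control, audit; adapt to the real tables):

```python
def car_closure_rate(cars):
    """Share of CARs fully closed, one of the METRICS_TRACKED above."""
    if not cars:
        return 0.0
    return sum(1 for c in cars if c["status"] == "closed") / len(cars)

def repeat_findings(findings):
    """Controls cited in more than one audit (repeat findings)."""
    audits_per_control = {}
    for f in findings:
        audits_per_control.setdefault(f["control"], set()).add(f["audit"])
    return sorted(c for c, a in audits_per_control.items() if len(a) > 1)
```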
READ_ALSO: evidence-automation.md, iso27001-controls.md, soc2-criteria.md, domains/incident-response/index.md