Kubernetes — Networking¶
OWNER: gerco (dev), thijmen (staging), rutger (production)
ALSO_USED_BY: arjan, alex, karel, stef
LAST_VERIFIED: 2026-03-26
GE_STACK_VERSION: k3s v1.34.x (Zone 1), UpCloud Managed K8s (Zones 2+3)
Overview¶
Kubernetes networking in GE covers service discovery, ingress routing,
network policies, and DNS resolution. The critical difference between
Zone 1 (k3s) and Zones 2+3 (UpCloud) is the ingress controller and
CNI plugin. This page documents patterns and known issues.
Service Discovery¶
Pods communicate via Kubernetes Service DNS names.
Format: {service}.{namespace}.svc.cluster.local
| Service | DNS Name | Port |
|---|---|---|
| Admin UI | admin-ui.ge-system.svc.cluster.local | 3000 |
| Wiki | wiki.ge-wiki.svc.cluster.local | 8000 |
| Redis | redis.ge-data.svc.cluster.local | 6381 |
| PostgreSQL | postgres.ge-data.svc.cluster.local | 5432 |
| Orchestrator | ge-orchestrator.ge-agents.svc.cluster.local | 8080 |
CHECK: services reference each other by DNS name, never by IP
CHECK: Redis port is 6381 — read from config/ports.yaml
ANTI_PATTERN: hardcoding ClusterIP addresses in config
FIX: use DNS names — ClusterIPs change on Service recreation
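A sketch of what DNS-based wiring looks like in a Deployment. The env var names, image, and app name are illustrative assumptions, not GE conventions — only the DNS name and port come from the table above:

```yaml
# Illustrative only — names and image are assumptions, not GE conventions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  namespace: ge-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: example-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: example-app
    spec:
      containers:
        - name: app
          image: example-app:latest
          env:
            # DNS name + port per config/ports.yaml — never a ClusterIP
            - name: DATABASE_HOST
              value: postgres.ge-data.svc.cluster.local
            - name: DATABASE_PORT
              value: "5432"
```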
Ingress — Zone 1 (Traefik)¶
k3s bundles Traefik as the default Ingress controller.
Traefik listens on ports 80 (HTTP) and 443 (HTTPS) on the host.
IF: creating a new externally-accessible service in Zone 1
THEN: create an Ingress resource with Traefik annotations
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ge-{service}
  namespace: ge-{namespace}
  annotations:
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  rules:
    - host: "{service}.ge.local"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ge-{service}
                port:
                  name: http
CHECK: Traefik IngressRoute CRDs are available
RUN: kubectl get crd | grep traefik
Ingress — Zones 2+3 (UpCloud)¶
UpCloud Managed Kubernetes uses UpCloud Managed Load Balancer
for ingress. TLS termination happens at the load balancer.
IF: creating Ingress in Zone 2 or 3
THEN: use UpCloud Load Balancer annotations
THEN: TLS certificates managed via cert-manager or UpCloud
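As a sketch, a Zone 2+3 Ingress using cert-manager for TLS might look like the following. The ingressClassName, issuer name, and hostname are assumptions — check the cluster's actual ingress class with `kubectl get ingressclass` and use the issuer actually configured:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ge-{service}
  namespace: ge-{namespace}
  annotations:
    # Issuer name is hypothetical — use the issuer configured in the cluster
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  # Class name is an assumption — verify with: kubectl get ingressclass
  ingressClassName: upcloud
  tls:
    - hosts:
        - "{service}.example.com"   # hostname illustrative
      secretName: ge-{service}-tls
  rules:
    - host: "{service}.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ge-{service}
                port:
                  name: http
```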
NetworkPolicy¶
GE enforces network segmentation between namespaces.
Default deny all ingress, then whitelist explicitly.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: ge-{namespace}
spec:
  podSelector: {}
  policyTypes:
    - Ingress
Then allow specific traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-admin-ui-to-postgres
  namespace: ge-data
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ge-system
          podSelector:
            matchLabels:
              app.kubernetes.io/name: admin-ui
      ports:
        - protocol: TCP
          port: 5432
CHECK: every namespace has a default-deny-ingress policy
CHECK: cross-namespace access is explicitly whitelisted
IF: orchestrator needs to talk to postgres
THEN: check whether the orchestrator-to-postgres policy already exists in ge-data — it usually does; do not create duplicates
THEN: only if it is missing, create a NetworkPolicy in ge-data allowing ingress from the ge-agents namespace
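If the policy turns out to be missing, it would follow the same shape as the admin-ui example above. The policy name here is illustrative; this sketch allows the whole ge-agents namespace, as the guidance above describes:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-orchestrator-to-postgres   # name is illustrative
  namespace: ge-data
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Namespace-level allow, per the rule above
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ge-agents
      ports:
        - protocol: TCP
          port: 5432
```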
DNS Resolution¶
CoreDNS handles all cluster DNS in both k3s (Zone 1) and UpCloud Managed Kubernetes (Zones 2+3).
IF: DNS resolution fails inside a pod
THEN: check CoreDNS pods
RUN: kubectl get pods -n kube-system -l k8s-app=kube-dns
IF: external DNS resolution works but internal fails
THEN: check the ndots setting in pod /etc/resolv.conf
RUN: kubectl exec -it {pod} -- cat /etc/resolv.conf
The default ndots: 5 means any name with fewer than five dots is first tried
against each search domain in /etc/resolv.conf before being resolved as an
absolute name, so short names generate several extra lookups. This is
expected behaviour.
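Where the extra search-domain lookups genuinely hurt (e.g. a pod doing many external lookups), ndots can be lowered per pod via the standard Kubernetes dnsConfig field — whether GE wants this is a per-service judgment call, not a default:

```yaml
# Pod spec fragment — only needed if search-domain amplification is a measured problem.
spec:
  dnsPolicy: ClusterFirst
  dnsConfig:
    options:
      - name: ndots
        value: "2"   # names with >= 2 dots are tried as absolute first
```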
ANTI_PATTERN: setting dnsPolicy: Default on pods
FIX: use dnsPolicy: ClusterFirst (the default) for internal service discovery
ClusterIP Gotchas¶
CRITICAL: in Zone 1 (k3s), the Kubernetes API Service ClusterIP (10.43.0.1:443) is BROKEN from inside pods.
Connections are refused when pods try to reach the Kubernetes API via that address.
IF: pod needs to talk to the Kubernetes API
THEN: mount kubeconfig and kubectl as hostPath volumes
THEN: do NOT rely on in-cluster config with the ClusterIP
volumes:
  - name: kubeconfig
    hostPath:
      path: /etc/rancher/k3s/k3s.yaml
      type: File
  - name: kubectl
    hostPath:
      path: /usr/local/bin/kubectl
      type: File
This is a Zone 1 (k3s) specific issue. Zones 2+3 (UpCloud) have
working in-cluster Kubernetes API access.
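A sketch of the container side of this workaround — the container name and the choice to mount at the same paths as the host are illustrative assumptions:

```yaml
# Container fragment pairing with the hostPath volumes above.
containers:
  - name: app   # name illustrative
    env:
      # Point kubectl at the host's k3s kubeconfig instead of in-cluster config
      - name: KUBECONFIG
        value: /etc/rancher/k3s/k3s.yaml
    volumeMounts:
      - name: kubeconfig
        mountPath: /etc/rancher/k3s/k3s.yaml
        readOnly: true
      - name: kubectl
        mountPath: /usr/local/bin/kubectl
        readOnly: true
```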
Port Allocation¶
GE manages ports centrally via config/ports.yaml.
CHECK: never hardcode port numbers in manifests
CHECK: read port assignments from config/ports.yaml
IF: a new service needs a port
THEN: add it to config/ports.yaml first
THEN: reference the config in manifests
ANTI_PATTERN: picking random ports and hoping they do not conflict
FIX: consult config/ports.yaml and register new allocations
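The real schema of config/ports.yaml lives in the repo and is authoritative; purely as an illustration of what a central port registry records, using the port assignments from the service table above:

```yaml
# Illustrative shape only — consult the real config/ports.yaml for the schema.
admin-ui: 3000
wiki: 8000
redis: 6381
postgres: 5432
ge-orchestrator: 8080
```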
hostNetwork¶
ANTI_PATTERN: using hostNetwork: true on pods — during a rolling update the old and new pod cannot both bind the same host port, so the rollout fails with port conflicts
IF: a service needs host-level network access
THEN: use NodePort Services or hostPath volume mounts instead
READ_ALSO: wiki/docs/stack/kubernetes/pitfalls.md
Cross-Namespace Communication¶
Pods in different namespaces communicate via fully qualified DNS:
{service}.{namespace}.svc.cluster.local
IF: Service A in ge-system needs to reach Service B in ge-agents
THEN: use service-b.ge-agents.svc.cluster.local
THEN: ensure NetworkPolicy in ge-agents allows ingress from ge-system
External Access Patterns¶
| Pattern | Zone 1 | Zones 2+3 |
|---|---|---|
| Web traffic | Traefik Ingress | UpCloud LB + bunny.net CDN |
| LAN access | NodePort | N/A |
| API endpoints | Traefik Ingress | UpCloud LB |
| Monitoring | NodePort | UpCloud LB (internal) |
READ_ALSO: wiki/docs/stack/bunnynet/index.md
Cross-References¶
READ_ALSO: wiki/docs/stack/kubernetes/index.md
READ_ALSO: wiki/docs/stack/kubernetes/manifests.md
READ_ALSO: wiki/docs/stack/kubernetes/security.md
READ_ALSO: wiki/docs/stack/kubernetes/pitfalls.md
READ_ALSO: wiki/docs/stack/kubernetes/checklist.md