Recipes
Common architecture patterns as remaid diagrams. Copy, paste, adapt.
Each example is self-contained — paste it into the editor and it will render. The source under each diagram is the full file.
Cache-aside read
The classic miss-fill pattern. The packet visits the cache, falls through to the database on a miss, populates the cache, and returns.
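The same read path in application code, as a minimal Python sketch. The `CacheAside` class is illustrative, and plain dicts stand in for a Redis client and a database:

```python
# Cache-aside sketch. `cache` and `db` are hypothetical stand-ins
# for a Redis client and a database query layer.
class CacheAside:
    def __init__(self, cache: dict, db: dict):
        self.cache = cache  # e.g. Redis; a dict here for illustration
        self.db = db        # e.g. Postgres; a dict here for illustration

    def get(self, key):
        value = self.cache.get(key)   # GET key
        if value is not None:
            return value              # hit: served straight from cache
        value = self.db.get(key)      # miss: fall through to the database
        if value is not None:
            self.cache[key] = value   # SET: populate the cache for next time
        return value

store = CacheAside(cache={}, db={"user:1": "Ada"})
store.get("user:1")   # miss: fills the cache from the database
store.get("user:1")   # hit: served from the cache
```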
diagram "Cache-aside read" {
  node app { kind: service, label: "App" }
  node cache { kind: cache, label: "Redis" }
  node db { kind: database, label: "Postgres" }
  edge get: app -> cache { label: "GET key" }
  edge miss: cache -> db { label: "fallback" }
  edge fill: db -> cache { label: "SET" }
  edge ret: cache -> app
  scene s {
    loopEvery: 4s
    packet-flow get { speed: 400ms }
    packet-flow miss { speed: 600ms, after: get }
    packet-flow fill { speed: 600ms, after: miss }
    packet-flow ret { speed: 400ms, after: fill }
  }
}

Queue fan-out
One producer, two consumers, fanned out via a message queue.
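The fan-out step itself amounts to copying each event onto every consumer's queue. A minimal in-process Python sketch; the `FanOutExchange` class is a hypothetical stand-in for a real broker, not any broker's API:

```python
from queue import Queue

# Fan-out sketch: every published event is copied to each subscriber's queue,
# so both workers see every event.
class FanOutExchange:
    def __init__(self):
        self.subscribers: list[Queue] = []

    def subscribe(self) -> Queue:
        q = Queue()
        self.subscribers.append(q)
        return q

    def publish(self, event):
        for q in self.subscribers:   # one copy per consumer
            q.put(event)

mq = FanOutExchange()
worker_a = mq.subscribe()
worker_b = mq.subscribe()
mq.publish({"type": "signup", "user": 1})
```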
diagram "Queue fan-out" {
  node producer { kind: service, label: "Producer" }
  node mq { kind: queue, label: "events" }
  node workerA { kind: service, label: "Worker A" }
  node workerB { kind: service, label: "Worker B" }
  edge pub: producer -> mq
  edge a: mq -> workerA
  edge b: mq -> workerB
  scene s {
    loopEvery: 2s
    packet-flow pub { speed: 500ms }
    replication-sync a, b { speed: 600ms, after: pub }
  }
}

Round-robin load balancing
Three back-end services take turns. The trick is interleaving flows: one packet on the ingress edge, then one on the chosen backend edge, repeated once per backend.
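The rotation the scene hand-animates is a one-liner in application code. A Python sketch of the selector; the backend names mirror the diagram, and nothing here is tied to a real load balancer:

```python
from itertools import cycle

# Round-robin selection: each request takes the next backend in a fixed rotation.
backends = cycle(["API-1", "API-2", "API-3"])

def route(request):
    # The request itself doesn't influence the choice; we just rotate.
    return next(backends)

picks = [route(f"req-{i}") for i in range(6)]
# picks == ["API-1", "API-2", "API-3", "API-1", "API-2", "API-3"]
```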
diagram "Round-robin LB" {
  node client { kind: client }
  node lb { kind: service, label: "LB" }
  node a { kind: service, label: "API-1" }
  node b { kind: service, label: "API-2" }
  node c { kind: service, label: "API-3" }
  edge in: client -> lb
  edge to_a: lb -> a
  edge to_b: lb -> b
  edge to_c: lb -> c
  scene s {
    loopEvery: 1500ms
    packet-flow in { speed: 250ms }
    packet-flow to_a { speed: 250ms, after: in }
    packet-flow in { speed: 250ms, after: to_a }
    packet-flow to_b { speed: 250ms, after: in }
    packet-flow in { speed: 250ms, after: to_b }
    packet-flow to_c { speed: 250ms, after: in }
  }
}

Writes to primary, reads from replicas
Combines replication-sync for the write path with time-offset reads (via delay) to show that the replicas catch up after the write lands.
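The routing rule behind the diagram, sketched in Python. The `Router` class and connection names are hypothetical stand-ins, and real routers do more (e.g. pinning a session to the primary briefly after a write, for read-your-writes):

```python
from itertools import cycle

# Read/write split sketch: writes go to the primary, reads rotate across
# replicas. Connections are stand-in strings, not real database handles.
class Router:
    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = cycle(replicas)

    def target(self, sql: str) -> str:
        # Naive classification by statement verb; real routers also pin a
        # session to the primary after a write until replication catches up.
        if sql.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            return self.primary
        return next(self.replicas)

r = Router("primary", ["replica-1", "replica-2"])
r.target("INSERT INTO users ...")   # -> "primary"
r.target("SELECT * FROM users")     # -> "replica-1"
```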
diagram "Reads from replicas" {
  direction: down
  node app { kind: service, label: "App" }
  node primary { kind: database, label: "Primary" }
  node r1 { kind: database, label: "Replica 1" }
  node r2 { kind: database, label: "Replica 2" }
  edge write: app -> primary { label: "INSERT" }
  edge sync1: primary -> r1 { label: "WAL" }
  edge sync2: primary -> r2 { label: "WAL" }
  edge read1: r1 -> app { label: "SELECT" }
  edge read2: r2 -> app { label: "SELECT" }
  scene s {
    loopEvery: 4s
    packet-flow write { speed: 600ms }
    replication-sync sync1, sync2 { speed: 700ms, after: write }
    packet-flow read1 { speed: 600ms, after: sync1, delay: 400ms }
    packet-flow read2 { speed: 600ms, after: sync2, delay: 700ms }
  }
}

Retry storm
Three packets pile up on a downed service. Each one ends with a pulse to indicate “arrived but failed.”
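What the diagram depicts is the naive retry loop below: a short fixed delay and no backoff, so every client hammers the dead service in lockstep. A Python sketch with illustrative names:

```python
import time

# Naive retry loop: fixed short delay, no backoff, no jitter. Against a downed
# service this just piles attempts up, which is the storm the diagram animates.
def call_with_retries(call, attempts=3, delay=0.2):
    last_error = None
    for _ in range(attempts):
        try:
            return call()
        except ConnectionError as err:   # "arrived but failed"
            last_error = err
            time.sleep(delay)            # fixed delay: clients retry in lockstep
    raise last_error

def down_api():
    raise ConnectionError("API (down)")

try:
    call_with_retries(down_api, attempts=3, delay=0.01)
except ConnectionError:
    pass  # all three attempts failed
```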
diagram "Retry storm" {
  node client { kind: client }
  node api { kind: service, label: "API (down)" }
  edge t1: client -> api
  edge t2: client -> api
  edge t3: client -> api
  scene s {
    loopEvery: 2.5s
    packet-flow t1 { speed: 500ms }
    pulse t1 { speed: 400ms, after: t1 }
    packet-flow t2 { speed: 500ms, after: t1, delay: 200ms }
    pulse t2 { speed: 400ms, after: t2 }
    packet-flow t3 { speed: 500ms, after: t2, delay: 200ms }
    pulse t3 { speed: 400ms, after: t3 }
  }
}

Database failover
Primary fails, standby promotes. The recovery path sends two follow-up packets on the standby edge to make it legible that the new primary is now serving traffic.
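The failover decision, sketched in Python. The `FailoverClient` class is hypothetical, and plain callables stand in for real database connections:

```python
# Failover sketch: the client tries the primary; on a connection failure it
# promotes the standby and routes the request (and all later ones) there.
class FailoverClient:
    def __init__(self, primary, standby):
        self.active = primary
        self.standby = standby

    def query(self, sql):
        try:
            return self.active(sql)
        except ConnectionError:
            # Failover: promote the standby and replay the request on it.
            self.active, self.standby = self.standby, None
            return self.active(sql)

def dead_primary(sql):
    raise ConnectionError("primary down")

def standby(sql):
    return f"standby handled: {sql}"

client = FailoverClient(dead_primary, standby)
client.query("SELECT 1")   # first call fails over; the standby serves it
```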
diagram "DB failover" {
  node api { kind: service }
  node primary { kind: database, label: "Primary" }
  node standby { kind: database, label: "Standby" }
  edge p: api -> primary
  edge s: api -> standby
  scene live {
    loopEvery: 4s
    packet-flow p { speed: 500ms }
    failover p, s { speed: 1.2s, after: p }
    packet-flow s { speed: 500ms, after: s }
    packet-flow s { speed: 500ms, after: s, delay: 600ms }
  }
}