diff --git a/.agents/skills/redis-use-case-ports/assets/audit-checklist.md b/.agents/skills/redis-use-case-ports/assets/audit-checklist.md
index 05b7cacbeb..69acc2c078 100644
--- a/.agents/skills/redis-use-case-ports/assets/audit-checklist.md
+++ b/.agents/skills/redis-use-case-ports/assets/audit-checklist.md
@@ -205,7 +205,54 @@ A naked `DEL` without the token check is a bug: if the lock expired and was re-a
---
-## 14. Subscribe-acknowledgement race in pub/sub-style helpers
+## 14. Empty-fields `HSET` guard in change-event consumers
+
+**What to scan for:** any code path that takes a "fields" payload from a change event / message / callback and forwards it to `HSET` (or the client-equivalent `hSet` / `hSetMultiple` / `HashSet` / `hMSet` / etc.). Typically this is a CDC consumer, sync worker, or write-through path.
+
+**Pass criterion:** before the `HSET` call, the code explicitly guards against `fields` being null, missing, or empty, and returns early on the malformed case (or routes it to a dead-letter queue, etc.). The guard must run before the pipeline / transaction is opened.
+
+**Sample audit prompt:**
+
+> Audit every code path in the 9 client implementations under `content/develop/use-cases/{{USE_CASE_NAME}}/` that forwards a fields payload from a change-event / callback / message to `HSET` (or the client equivalent). For each, confirm there is an explicit early-return guard for null / missing / empty fields **before** any pipeline or transaction is constructed. Flag any port without the guard with file path and line number.
+
+**Why on list:** Every Redis client tested in the prefetch-cache use case raises or panics on `HSET` with an empty fields mapping: redis-py `DataError`, node-redis throws, Predis "wrong number of arguments", redis-rs **panics** on `pipe().hset_multiple(&key, &[])`, Jedis errors, go-redis errors. A defensive `|| {}` fallback **looks** like it handles the empty case but doesn't — it turns a missing payload into an empty one, which still errors. Cursor bugbot caught this on the reference implementation. ([PR #3317 comment](https://github.com/redis/docs/pull/3317))
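
The guard shape, sketched with a redis-py-style client. This is an illustration, not code from the ports — the function name and `demo:cache:*` key prefix are made up here:

```python
def apply_change_event(r, event):
    """Hypothetical CDC-consumer write path showing the empty-fields guard.

    `r` is a redis-py-style client. redis-py raises DataError on
    hset(key, mapping={}); node-redis throws; redis-rs panics on the
    equivalent call — so reject the malformed event before any pipeline
    or transaction is constructed.
    """
    fields = event.get("fields")
    if not fields:  # None, missing, or {} — early return / dead-letter here
        return False
    pipe = r.pipeline(transaction=True)  # only opened after the guard passes
    key = f"demo:cache:{event['id']}"
    pipe.hset(key, mapping=fields)
    pipe.expire(key, 3600)
    pipe.execute()
    return True
```

Because the guard runs before `r.pipeline(...)`, the malformed cases can be exercised without a live connection.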
+
+---
+
+## 15. TTL sentinel preservation across libraries
+
+**What to scan for:** any `TTL` / `ttl_remaining` / `ttlRemaining` helper that wraps the client's TTL command. Particularly any code that converts the library's return type (often `time.Duration`, `TimeSpan?`, `Long`) into integer seconds.
+
+**Pass criterion:** the helper returns **`-2`** for a missing key and **`-1`** for a key with no TTL, as integer seconds (or the language's native integer type). Libraries encode these sentinels inconsistently:
+
+- **redis-py**: returns `int` directly with `-2` / `-1` preserved.
+- **go-redis**: returns `time.Duration` with `-2` / `-1` as **raw nanoseconds** (not seconds-scaled). A naive `int(d.Seconds())` truncates to `0`.
+- **StackExchange.Redis**: `KeyTimeToLive` returns `TimeSpan?` and collapses **both** missing-key and no-TTL into `null` — a null-coalesce loses the `-2` sentinel.
+- **node-redis / Jedis / Lettuce / Predis / redis-rb**: return integer-typed seconds with `-2` / `-1` preserved.
+
+The recommended cross-client idiom is to **bypass the library wrapper** and send the raw command (`client.Do(ctx, "TTL", key).Int64()` in Go, `IDatabase.Execute("TTL", key)` in .NET) so the integer reply comes through untouched.
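
The go-redis failure mode can be shown numerically. A Python sketch of the two conversions, assuming the nanosecond-encoded sentinels described in the bullet above:

```python
NS_PER_SEC = 1_000_000_000

def ttl_seconds_naive(duration_ns):
    """What a naive int(d.Seconds()) does in Go: -2 ns truncates to 0."""
    return int(duration_ns / NS_PER_SEC)

def ttl_seconds_safe(duration_ns):
    """Pass the -2 / -1 sentinels through untouched; scale only real TTLs."""
    if duration_ns in (-2, -1):
        return duration_ns
    return duration_ns // NS_PER_SEC
```

The safe version is only needed when a typed wrapper is in the way; sending the raw `TTL` command avoids the conversion entirely.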
+
+**Sample audit prompt:**
+
+> For each port's `TTLRemaining` (or equivalent) under `content/develop/use-cases/{{USE_CASE_NAME}}/`, confirm it returns `-2` for a missing key and `-1` for a key with no TTL. Test each by reading a non-existent ID and by running `PERSIST` on an existing cache key then reading it. Flag any port that returns `0`, `null`, or collapses the two sentinels into one value.
+
+**Why on list:** Caught in the prefetch-cache cross-port audit. go-redis and StackExchange.Redis both shipped with subtle bugs in their TTL conversion that the audit caught. ([PR #3317 audit B](https://github.com/redis/docs/pull/3317))
+
+---
+
+## 16. Locked-emit ordering for producer/consumer queues
+
+**What to scan for:** any mock primary store, in-memory writer, or producer that (a) mutates internal state under a lock and (b) appends a corresponding event to an out-of-process or out-of-thread queue/stream/channel. Typical methods: `add_record` / `update_field` / `delete_record`, `enqueue`, `publish_change`.
+
+**Pass criterion:** the queue append happens **inside the same locked section** as the state mutation, not after it. Without this, two concurrent mutations can complete in one order but enqueue their events in the opposite order, and a downstream consumer applies them out of order — the cache ends up divergent from the source. For cross-process producers (PHP, etc.), the equivalent is wrapping the mutation + `LPUSH` in a Lua script so the server enforces ordering.
+
+**Sample audit prompt:**
+
+> Audit every mutation method in each port's mock primary store (or equivalent producer) under `content/develop/use-cases/{{USE_CASE_NAME}}/`. For each, confirm the change event is appended to the queue / stream / channel **while the mutation lock is still held** (or, for cross-process ports, wrapped in a Lua script that combines the record write and the LPUSH server-side). Flag any port where the emit happens after the lock release.
+
+**Why on list:** Locked-emit ordering is what guarantees a CDC consumer can replay events deterministically. Caught and fixed in the prefetch-cache reference's `_emit_change_locked` pattern after Codex review; the prefetch-cache cross-port audit confirmed all 9 ports preserve the invariant, including PHP's Lua-script equivalent. ([PR #3317 audit C](https://github.com/redis/docs/pull/3317))
+
+## 17. Subscribe-acknowledgement race in pub/sub-style helpers
**What to scan for:** the constructor or registration path of any subscriber object (pub/sub Subscription, message-listener, channel consumer). Specifically, the code path between "request the SUBSCRIBE / PSUBSCRIBE" and "return the Subscription handle to the caller".
@@ -219,7 +266,7 @@ A naked `DEL` without the token check is a bug: if the lock expired and was re-a
---
-## 15. Concurrent-name reservation race in async helpers
+## 18. Concurrent-name reservation race in async helpers
**What to scan for:** any helper that does "check map for duplicate → release lock → do async work → acquire lock → insert". This shape is common in Rust (`std::sync::Mutex` is `!Send`, so can't be held across `await`) and any async language where the check and the insert are bracketed by an `await` that releases the lock implicitly.
@@ -233,7 +280,7 @@ A naked `DEL` without the token check is a bug: if the lock expired and was re-a
---
-## 16. Detached-worker PID capture
+## 19. Detached-worker PID capture
**What to scan for:** in any port that spawns subscriber/worker processes from a request handler (typically PHP under `php -S`, but any helper that uses `proc_open`, `subprocess.Popen`, `child_process.spawn`, `posix_spawn`, etc.), how is the worker's PID recorded? Look for `proc_get_status()['pid']` after `proc_open([...])`, or `pid` properties on subprocess handles.
@@ -247,7 +294,7 @@ A naked `DEL` without the token check is a bug: if the lock expired and was re-a
---
-## 17. Silent timeout fallthrough in readiness waits
+## 20. Silent timeout fallthrough in readiness waits
**What to scan for:** functions named `waitFor*`, `pollUntil*`, `awaitReady`, etc. that loop with a deadline. Especially ones that return `void` / `None` / `()` instead of a status.
@@ -261,7 +308,7 @@ A naked `DEL` without the token check is a bug: if the lock expired and was re-a
---
-## 18. Pub/sub introspection commands are server-wide
+## 21. Pub/sub introspection commands are server-wide
**What to scan for:** any test or smoke-test step that asserts an **absolute** value of `PUBSUB CHANNELS`, `PUBSUB NUMSUB`, or `PUBSUB NUMPAT`. Especially common in pub/sub-style use cases.
diff --git a/.agents/skills/redis-use-case-ports/assets/cross-diff-checklist.md b/.agents/skills/redis-use-case-ports/assets/cross-diff-checklist.md
index 7e0fd10038..3f5e48cf54 100644
--- a/.agents/skills/redis-use-case-ports/assets/cross-diff-checklist.md
+++ b/.agents/skills/redis-use-case-ports/assets/cross-diff-checklist.md
@@ -25,7 +25,7 @@ A sub-agent can run this in read-only mode. For each row, produce a 9-column com
| Write path | `HSET` (with all fields) + `EXPIRE`, ideally pipelined or in a single `HSET ... EXPIRE` MULTI. |
| Invalidate | `DEL` (not `EXPIRE 0`, not `UNLINK`). |
| Field update | `HSET key field value` + `EXPIRE` inside a conditional transaction or `Condition.KeyExists`. |
-| TTL inspection | `TTL` (not `PTTL`, not `OBJECT`). |
+| TTL inspection | `TTL` (not `PTTL`, not `OBJECT`). The wrapper must preserve the `-2` (missing key) and `-1` (no TTL) sentinels as integer seconds; if the client's typed wrapper collapses or rescales them (go-redis's `time.Duration` with nanosecond-encoded sentinels, StackExchange.Redis's `KeyTimeToLive` returning `null` for both cases), bypass it with the raw command (`Do("TTL", ...)` / `Execute("TTL", ...)`). See audit-checklist row 15. |
| Single-flight acquire | Lua script using `SET NX PX`. |
| Single-flight release | Lua script using `GET == token` check + `DEL`. |
| Counters (where stats are in Redis, e.g. PHP) | `HINCRBY`. |
@@ -187,10 +187,11 @@ The only per-client variation should be the **pill text** at the top of `
`
| `Get the source files` subsection | Every `_index.md` has a `### Get the source files` subsection as the first child of `## Running the demo`. It contains a `mkdir -demo && cd -demo`, a `BASE=https://raw.githubusercontent.com/redis/docs/main/...` variable, and one `curl -O $BASE/` per source file the port needs. |
| Files curled match files run | The set of files in the curl block matches what the existing run command (e.g. `python3 demo_server.py`, `dotnet run`, `php -S ... demo_server.php`) actually requires. No missing config files (`package.json`, `composer.json`, `*.csproj`, `go.mod`, `Cargo.toml`), no extras (`Cargo.lock` only if `cargo` expects it; build outputs never). |
| Rust folder layout | The curl block matches the port's on-disk layout: if files live under `src/`, the block does `mkdir -p .../src && cd ...` then `curl -o src/ $BASE/src/`; if files are flat at the project root (driven by explicit `path =` in `Cargo.toml`), `curl -O $BASE/` for all of them. |
+| Source-file count in prose matches curl block | Prose like *"The demo consists of N files"* in `### Get the source files` must match the actual number of `curl -O` lines in the block. Easy drift when a port adds an extra worker entry point (e.g. PHP's separate `sync_worker.php`) and the count is not updated. |
**Audit prompt:**
-> For each of the 9 client implementations of `content/develop/use-cases/{{USE_CASE_NAME}}/`, grep `_index.md` with `grep -nE "\]\(([^h)][^)]*\.[a-z]+)\)"` — the result must be empty (no relative file links). Then confirm `## Running the demo` is followed by `### Get the source files`, and that the curl block downloads the same files the run command needs. Flag any port where the curl-block file set diverges from the run-time requirements, or where a Rust port's `src/` layout doesn't match its on-disk reality.
+> For each of the 9 client implementations of `content/develop/use-cases/{{USE_CASE_NAME}}/`, grep `_index.md` with `grep -nE "\]\(([^h)][^)]*\.[a-z]+)\)"` — the result must be empty (no relative file links). Then confirm `## Running the demo` is followed by `### Get the source files`, and that the curl block downloads the same files the run command needs. Count the `curl -O` lines and confirm the prose intro ("The demo consists of N files") matches. Flag any port where the curl-block file set diverges from the run-time requirements, or where a Rust port's `src/` layout doesn't match its on-disk reality.
## File names per client
diff --git a/.agents/skills/redis-use-case-ports/assets/redis-conventions.md b/.agents/skills/redis-use-case-ports/assets/redis-conventions.md
index a2dffe2b06..4c602df21c 100644
--- a/.agents/skills/redis-use-case-ports/assets/redis-conventions.md
+++ b/.agents/skills/redis-use-case-ports/assets/redis-conventions.md
@@ -271,6 +271,9 @@ PHP runs each HTTP request in a fresh process under `php -S`. This means:
- **In-process state doesn't persist.** Cache stats, primary record state, primary read counters, and per-job-queue counters must live in Redis (under a `demo:*` keyspace, or a `:{name}:stats` hash).
- **Spawning sub-processes from a request handler must detach from the dev server's listen socket.** This bites both `pcntl_fork` (forked children inherit the accept socket) and `proc_open` (children inherit FDs unless explicitly redirected). The fix is **`setsid` on Linux**, and a shell-based new-session wrapper on macOS (which lacks `setsid(1)`). The detach also needs to redirect stdin/stdout/stderr to files; closing them alone isn't enough.
- **Predis 3.x's `hset()` is variadic, not associative.** The 1.x `$redis->hset($key, ['field' => 'value'])` form raises `wrong number of arguments for 'hset'` against a 3.x client/server. Use `$redis->hset($key, 'field', 'value', 'field2', 'value2', ...)` and write a small `flattenFields()` helper if you're storing a map.
+- **Predis `BRPOP` only accepts whole-second timeouts.** Sub-second polling intervals (e.g. a 50 ms `next_change` loop in the reference Python) need a workaround: use a 1 s `BRPOP` for change draining plus a separate fast pause-flag poll (e.g. 20 ms `usleep`) so pause/resume latency stays low even when the main `BRPOP` is parked.
+- **Cross-process pause/resume goes through Redis flags.** Where threaded ports use a `threading.Event` (or equivalent) inside one process, PHP needs the demo server and the long-running sync worker to coordinate across processes. The pattern is two keys: `demo:sync:paused` (writer to worker) and `demo:sync:idle` (worker acks parked state). The demo's `/clear` and `/reprefetch` handlers set `paused=1`, spin-wait for `idle=1` with a 10 ms poll and a 2 s timeout, do the cache write, then set `paused=0`. The worker checks `paused` on each loop iteration; if set, writes `idle=1` and spin-waits for it to clear. Established in the prefetch-cache PHP port.
+- **Mutation + change-event emit needs Lua-script atomicity** when the producer is also stateless (PHP). The reference Python uses an in-process `Lock` to make "mutate-then-emit" atomic; the PHP equivalent is wrapping the record write and the `LPUSH` (or `XADD`) onto the change feed in a single `EVAL`. Without this, two concurrent mutations on the same key can land in queue order opposite to their server-side commit order. (Audit-checklist row 16.)
- The brief should call out that the cross-process supervision approach is **PHP-specific** in the production-usage section.
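
The two-flag handshake above, sketched from the worker's side against a redis-py-style client (the PHP port does the same with Predis; key names as described in the bullet):

```python
import time

def worker_pause_point(r):
    """Run once per sync-loop iteration.

    If the writer has set demo:sync:paused, ack by setting demo:sync:idle,
    park until the writer clears the pause flag, then clear the ack.
    """
    if r.get("demo:sync:paused") != "1":
        return                            # not paused: keep draining changes
    r.set("demo:sync:idle", "1")          # ack: worker is parked
    while r.get("demo:sync:paused") == "1":
        time.sleep(0.01)                  # 10 ms poll, matching the writer side
    r.delete("demo:sync:idle")
```

The writer-side half mirrors this: set `paused=1`, poll for `idle=1` (2 s timeout), do the cache write, set `paused=0`.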
## .NET-specific notes
@@ -281,20 +284,25 @@ PHP runs each HTTP request in a fresh process under `php -S`. This means:
- **StackExchange.Redis intentionally does not expose blocking pops** (`BRPOPLPUSH` / `BLMOVE` with a timeout) because they would monopolise the multiplexer's single command pipeline. Use cases that need a blocking claim (job queue, etc.) should poll the non-blocking `IDatabase.ListRightPopLeftPush` on a short interval (50 ms is a reasonable default). Document this in the helper's "Claiming jobs" / "How it works" section.
- **`RedisChannel` no longer has an implicit `string` conversion in 2.7+.** `db.Publish(...)` needs `RedisChannel.Literal("channel:name")` or `RedisChannel.Pattern(...)` explicitly.
- StackExchange.Redis transparently caches Lua scripts: the first `ScriptEvaluate(script, keys, args)` sends `EVAL`, subsequent calls switch to `EVALSHA` automatically. No need to manage SHAs by hand.
+- **`IDatabase.KeyTimeToLive` collapses the `-2` (missing) and `-1` (no TTL) sentinels into a single `TimeSpan?` null.** For any `TTL` lookup that needs to distinguish them, send the raw command instead: `(long) db.Execute("TTL", key)` returns the integer the server actually replied with. (Audit-checklist row 15.)
+- **`IServer.Keys` (the typed SCAN enumerator) requires `AllowAdmin = true` on the `ConfigurationOptions`** — which also grants `FLUSHDB` / `CONFIG`, a real security concern in production. Where SCAN-style enumeration is needed (e.g. a `clear()` helper) prefer `db.Execute("SCAN", cursor, "MATCH", pattern, "COUNT", count)` so the demo doesn't pull in admin-privileged client config.
## Java-specific notes
- **Jedis**: use `JedisPool` and acquire a `Jedis` instance per call with try-with-resources. Each transaction gets its own connection; no in-process lock is needed.
- **Jedis 5.x's `brpoplpush` takes integer seconds.** Sub-second blocking-claim timeouts (e.g. 500 ms polling windows) round up to 1 s on the wire. The polling loop still observes its stop flag promptly enough; just be aware the per-iteration block is longer than the reference suggests.
- **Lettuce**: by default the demo shares one `StatefulRedisConnection` across HTTP handlers. Lettuce is thread-safe for individual commands but pipelined sequences and transactions are connection-scoped — concurrent pipelines or `MULTI`/`EXEC` blocks on one connection can interleave. Options when an enqueue / update needs two-or-more commands atomic-ish: (a) wrap in a `ReentrantLock`; (b) use `MULTI`/`EXEC` with the same lock; (c) merge into a Lua script (preferred — atomic server-side and lock-free, but requires writing the script). The production-usage section should explain you'd switch to `ConnectionPoolSupport.createGenericObjectPool(...)` in production and drop the lock.
+- **Lettuce sync API does not cooperate with `setAutoFlushCommands(false)`.** Each sync call internally awaits its `CompletableFuture`; with auto-flush off, those futures never complete because nothing flushes. Symptom: bulk-load deadlocks silently — no exception, just a hung process. Use the **async API** (`RedisAsyncCommands`) for any pipelined batch where you intend to flush at the end: queue commands without awaiting each one, then `connection.flushCommands()` and await the futures in bulk. Documented after the prefetch-cache Lettuce port hit it during testing.
- Lettuce's `BLMOVE` accepts a `double` timeout in seconds with sub-second precision (`bRPopLPush(timeout: double)`). Don't use the older `long`-overload — pre-6.x builds treated values < 1 as "block forever".
- Both Java demos depend on a small classpath. The `_index.md` should give an example `javac` + `java` command listing the jars by name.
+- **JDK version: pick text blocks (15+) or string concatenation (11+) and apply it across both Java ports of the same use case.** Text blocks (`"""..."""`) keep the inlined HTML readable; concatenation works on older JDKs. The cache-aside Java ports use concatenation with JDK 11+ prereq; the prefetch-cache Java ports use text blocks with JDK 17+ prereq. Either is fine — just don't mix within a use case, and set Prerequisites accordingly.
## Go-specific notes
- Use `package ` (e.g., `package cacheaside`) for all files, including the demo server. Expose the entry point as a `RunDemoServer()` function rather than `main()` directly.
- Ask the user to create a one-line `main.go` next to the files: `package main; import ""; func main() { .RunDemoServer() }`. This avoids the Go limitation that `package main` can't coexist with another package in the same directory.
- `go.mod` should declare `module ` and `require github.com/redis/go-redis/v9` at a recent stable version.
+- **go-redis encodes the `TTL` sentinels `-2` / `-1` as raw nanoseconds**, not seconds-scaled. `client.TTL(...).Result()` returns `time.Duration(-2)` (one nanosecond) for a missing key, and a naive `int(d.Seconds())` truncates it to `0`. For any `TTL` lookup, bypass the typed wrapper: `client.Do(ctx, "TTL", key).Int64()` returns the integer reply directly. Same idiom maps to the .NET `Execute("TTL", ...)` workaround. (Audit-checklist row 15.)
## Rust-specific notes
@@ -387,6 +395,27 @@ Every client has a `MockPrimaryStore` that:
- Is thread-safe (mutex around the records map, atomic on the counter).
- Lives entirely in-process — except in PHP, where it persists in Redis under `demo:primary:*` keys for cross-request survival.
+### Locked-emit ordering for producer/consumer use cases
+
+When the mock primary store doubles as the *producer* of a change feed that some downstream consumer (CDC worker, sync worker, replicator) drains — as in the prefetch-cache use case — every mutation method must emit its change event **while the mutation lock is still held**. The append-to-queue cannot happen after the lock is released, even though the queue itself is thread-safe.
+
+Without this, two concurrent `update_field` calls can mutate the records map in one order (T1 then T2 → primary state ends at T2's value) and then enqueue their events in the opposite order (T2 then T1 → consumer applies T1 last → cache ends at T1's value, divergent from primary).
+
+The reference Python pattern is an `_emit_change_locked(...)` helper called inside each `with self._lock:` block. The equivalent in other languages:
+
+| Language | Pattern |
+|---|---|
+| Python | `_emit_change_locked` inside `with self._lock:` |
+| Node.js | mutation + emit are synchronous within the same function; no `await` between them (single-threaded event loop guarantees serial execution) |
+| Go | `defer mu.Unlock()` + `emitChangeLocked` before the deferred unlock |
+| Java | `synchronized (lock) { ...mutate...; emitChangeLocked(...); }` |
+| C# | `lock (_lock) { ...mutate...; EmitChangeLocked(...); }` |
+| PHP | Lua scripts that combine the record write and the `LPUSH` server-side (no in-process lock to hold across requests) |
+| Ruby | `@lock.synchronize { ...mutate...; emit_change_locked(...); }` |
+| Rust | `emit_locked(...)` while the `MutexGuard` is still in scope (call before drop) |
+
+See audit-checklist row 16 for the audit prompt.
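
A minimal Python sketch of the invariant (method names mirror the reference pattern; the deque stands in for the real change queue):

```python
import threading
from collections import deque

class MockPrimaryStore:
    def __init__(self):
        self._lock = threading.Lock()
        self._records = {}
        self._changes = deque()  # stand-in for the change queue / stream

    def _emit_change_locked(self, op, entity_id, fields=None):
        # Caller must hold self._lock: the append shares the critical
        # section with the mutation, so queue order == mutation order.
        self._changes.append({"op": op, "id": entity_id, "fields": fields})

    def update_field(self, entity_id, field, value):
        with self._lock:
            record = self._records.setdefault(entity_id, {"id": entity_id})
            record[field] = value
            self._emit_change_locked("update", entity_id, {field: value})
        # Emitting here instead — after the `with` block — would reopen
        # the window where two threads enqueue in the opposite order.
```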
+
## Library versions to standardise (when this skill is updated)
Pin the recommended versions in the `_index.md` Prerequisites section. As of the cache-aside use case:
diff --git a/content/develop/use-cases/_index.md b/content/develop/use-cases/_index.md
index 1f377ce457..79e83e1a1a 100644
--- a/content/develop/use-cases/_index.md
+++ b/content/develop/use-cases/_index.md
@@ -22,4 +22,5 @@ This section provides practical examples and reference implementations for commo
* [Time series dashboard]({{< relref "/develop/use-cases/time-series-dashboard" >}}) - Build a rolling sensor graph demo with Redis time series data
* [Leaderboards]({{< relref "/develop/use-cases/leaderboard" >}}) - Build a ranked leaderboard with sorted sets and user metadata
* [Job queue]({{< relref "/develop/use-cases/job-queue" >}}) - Run a reliable background job queue with at-least-once delivery and visibility-timeout reclaim
+* [Prefetch cache]({{< relref "/develop/use-cases/prefetch-cache" >}}) - Pre-load reference data into Redis so every read is a cache hit, kept current by a CDC sync worker
* [Pub/sub messaging]({{< relref "/develop/use-cases/pub-sub" >}}) - Broadcast real-time events to many consumers with channel and pattern subscriptions
diff --git a/content/develop/use-cases/prefetch-cache/_index.md b/content/develop/use-cases/prefetch-cache/_index.md
new file mode 100644
index 0000000000..0f6012ec1a
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/_index.md
@@ -0,0 +1,105 @@
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+description: Pre-load reference data into Redis so every read is a cache hit.
+hideListLinks: true
+linkTitle: Prefetch cache
+title: Redis prefetch cache
+weight: 5
+---
+
+## When to use Redis prefetch cache
+
+Use a Redis prefetch cache when you need to pre-load reference or master data into cache before the first request arrives, so every read is a hit and no request ever falls through to the primary database.
+
+## Why the problem is hard
+
+Cache-aside guarantees cold-start misses: the first request for every key hits the primary, and between TTL expiry and the next read, every service re-fetches the same rows from a slow backend. At scale this creates latency spikes and sustained read pressure on the system of record — the load pattern is worst exactly when traffic is highest.
+
+Prefetch solves this by loading data proactively, but that brings its own constraints. The entire working set must fit in memory, and it must stay current as the source of truth changes. Building and maintaining the sync pipeline from the source database adds engineering cost and ongoing operational burden — once the cache is the only read path, any sync lag becomes a correctness problem rather than a freshness one.
+
+This pattern is distinct from cache-aside, where the cache populates reactively on miss and the primary is always available as a fall-back. With prefetch, the application assumes the cache is authoritative on the read path; on a miss, it does not fall back to the primary (and a sustained miss rate is treated as an incident). It is also distinct from write-through caching, where every write to the application writes both the cache and the primary in lock-step — prefetch decouples the write path from the cache and lets a separate sync pipeline catch up.
+
+## What you can expect from a Redis solution
+
+You can:
+
+- Achieve near-100% cache hit ratios for country codes, product categories, translations, configuration, and other reference tables.
+- Keep P95 read latency under 1 ms for lookup-heavy request paths at peak traffic (that is, 95% of requests complete in 1 ms or less).
+- Sync source database changes into cache within seconds using a managed CDC pipeline (such as Redis Data Integration), or a small consumer in front of Debezium, Kafka, or a Redis stream.
+- Offload all reference-data reads from the primary database, avoiding the cost of dedicated read replicas.
+- Pre-warm the cache on deploy or restart so cold starts never reach the backend.
+- Bound memory with a long safety-net TTL that expires entries if the sync pipeline ever stops, so a silent failure never serves stale data forever.
+
+## How Redis supports the solution
+
+In practice, the application loads the full working set into Redis once at startup using a pipelined bulk write, then a separate sync worker keeps Redis current as the source of truth changes. Every reference-data read goes to Redis only — there is no fall-back path to the primary on the request critical path.
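
The startup bulk load can be sketched in redis-py (key prefix and TTL here are illustrative; see the per-client guides below for the real implementations):

```python
def bulk_load(r, records, ttl_seconds=86_400):
    """Pipeline the whole working set into Redis in one round trip."""
    pipe = r.pipeline(transaction=False)
    for record in records:
        key = f"cache:{record['id']}"
        pipe.hset(key, mapping=record)
        pipe.expire(key, ttl_seconds)  # safety-net TTL, not the freshness mechanism
    pipe.execute()
```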
+
+Redis provides the following features that make it a good fit for prefetch caching:
+
+- [Hashes]({{< relref "/develop/data-types/hashes" >}})
+ ([`HSET`]({{< relref "/commands/hset" >}}),
+ [`HGETALL`]({{< relref "/commands/hgetall" >}})) and native
+ [JSON]({{< relref "/develop/data-types/json" >}}) documents
+ ([`JSON.SET`]({{< relref "/commands/json.set" >}}),
+ [`JSON.GET`]({{< relref "/commands/json.get" >}})) map directly to common
+ reference-data lookup patterns — id-keyed records with a fixed set of fields,
+ or richer nested documents accessed by JSONPath.
+- [Pipelined]({{< relref "/develop/clients/pools-and-muxing" >}})
+ [`HSET`]({{< relref "/commands/hset" >}}) or
+ [`MSET`]({{< relref "/commands/mset" >}}) batches make the initial bulk load
+ fast: a few thousand records load in a single round trip, so the application
+ starts serving from a fully-warm cache within seconds of boot.
+- [`EXPIRE`]({{< relref "/commands/expire" >}}) sets a long safety-net TTL on
+ each entry so memory stays bounded even if the sync pipeline silently stops —
+ not as the freshness mechanism, but as a guardrail.
+- [`SCAN`]({{< relref "/commands/scan" >}}) iterates the prefetched keyspace
+ without blocking the server, so the application can audit cache coverage,
+ list available IDs, or run a periodic reconciliation pass against the source.
+- [Streams]({{< relref "/develop/data-types/streams" >}})
+ ([`XADD`]({{< relref "/commands/xadd" >}}),
+ [`XREAD`]({{< relref "/commands/xread" >}})) provide a durable, replayable
+ change feed when the sync worker needs to resume from a known offset after
+ a restart — the canonical pattern for CDC consumers feeding Redis.
+- Sub-millisecond reads from memory, so reference-data lookups never appear on
+ a flame graph. If Redis is already in the stack for sessions, rate limiting,
+ or cache-aside, prefetch runs on the same instance at zero marginal cost.
+
+## Ecosystem
+
+The following libraries and frameworks support Redis-backed prefetch caching:
+
+- **Java**:
+ [Spring Cache abstraction (`@Cacheable` with Redis cache store)](https://docs.spring.io/spring-data/redis/reference/redis/redis-cache.html),
+ populated by a startup `CommandLineRunner` for the bulk load.
+- **Node.js**:
+ [Redis OM](https://github.com/redis/redis-om-node) for object-mapping
+ prefetched JSON documents.
+- **Change-data-capture (CDC)** pipelines that stream source-database changes
+ into Redis without custom application code:
+ [Redis Data Integration (RDI)]({{< relref "/integrate/redis-data-integration" >}})
+ for relational and NoSQL sources on Redis Enterprise / Redis Cloud;
+ [Debezium](https://debezium.io/) plus a lightweight Redis consumer for
+ open-source Redis.
+- **API gateways**:
+ [Kong](https://docs.konghq.com/hub/) plugins to route reference-data reads to
+ Redis directly, bypassing the backend service entirely.
+
+## Code examples to build your own Redis prefetch cache
+
+The following guides show how to build a simple Redis-backed prefetch cache in front of a primary store of reference data. Each guide includes a runnable interactive demo that pre-loads records on startup, runs a background sync worker that applies primary-store changes to Redis within milliseconds, and lets you watch the cache stay current as records are added, updated, and deleted on the source.
+
+* [redis-py (Python)]({{< relref "/develop/use-cases/prefetch-cache/redis-py" >}})
+* [node-redis (Node.js)]({{< relref "/develop/use-cases/prefetch-cache/nodejs" >}})
+* [go-redis (Go)]({{< relref "/develop/use-cases/prefetch-cache/go" >}})
+* [Jedis (Java)]({{< relref "/develop/use-cases/prefetch-cache/java-jedis" >}})
+* [Lettuce (Java)]({{< relref "/develop/use-cases/prefetch-cache/java-lettuce" >}})
+* [StackExchange.Redis (C#)]({{< relref "/develop/use-cases/prefetch-cache/dotnet" >}})
+* [Predis (PHP)]({{< relref "/develop/use-cases/prefetch-cache/php" >}})
+* [redis-rb (Ruby)]({{< relref "/develop/use-cases/prefetch-cache/ruby" >}})
+* [redis-rs (Rust)]({{< relref "/develop/use-cases/prefetch-cache/rust" >}})
diff --git a/content/develop/use-cases/prefetch-cache/dotnet/MockPrimaryStore.cs b/content/develop/use-cases/prefetch-cache/dotnet/MockPrimaryStore.cs
new file mode 100644
index 0000000000..910996579e
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/dotnet/MockPrimaryStore.cs
@@ -0,0 +1,197 @@
+using System.Collections.Concurrent;
+
+namespace PrefetchCacheDemo;
+
+/// <summary>
+/// Mock primary data store for the prefetch-cache demo.
+///
+/// This stands in for a source-of-truth database (Postgres, MySQL,
+/// Mongo, etc.) that holds reference data the application serves to
+/// users.
+///
+/// Every mutation appends a change event to an in-process queue, which
+/// the sync worker drains and applies to Redis. In a real system the
+/// queue is replaced by a CDC pipeline — Redis Data Integration,
+/// Debezium plus a lightweight consumer, or an equivalent tool that
+/// tails the source's binlog/WAL and pushes changes into Redis.
+///
+/// The store also exposes <see cref="ReadLatencyMs"/> so the demo can
+/// illustrate how much slower a direct primary read would be than a
+/// Redis hit.
+/// </summary>
+public class MockPrimaryStore
+{
+ public int ReadLatencyMs { get; }
+
+ private readonly object _lock = new();
+ private long _reads;
+ private readonly Dictionary<string, Dictionary<string, string>> _records;
+ private readonly BlockingCollection _changes = new(new ConcurrentQueue());
+
+ public MockPrimaryStore(int readLatencyMs = 80)
+ {
+ ReadLatencyMs = readLatencyMs;
+ _records = new Dictionary<string, Dictionary<string, string>>(StringComparer.Ordinal)
+ {
+ ["cat-001"] = new()
+ {
+ ["id"] = "cat-001",
+ ["name"] = "Beverages",
+ ["display_order"] = "1",
+ ["featured"] = "true",
+ ["parent_id"] = "",
+ },
+ ["cat-002"] = new()
+ {
+ ["id"] = "cat-002",
+ ["name"] = "Bakery",
+ ["display_order"] = "2",
+ ["featured"] = "true",
+ ["parent_id"] = "",
+ },
+ ["cat-003"] = new()
+ {
+ ["id"] = "cat-003",
+ ["name"] = "Pantry Staples",
+ ["display_order"] = "3",
+ ["featured"] = "false",
+ ["parent_id"] = "",
+ },
+ ["cat-004"] = new()
+ {
+ ["id"] = "cat-004",
+ ["name"] = "Frozen",
+ ["display_order"] = "4",
+ ["featured"] = "false",
+ ["parent_id"] = "",
+ },
+ ["cat-005"] = new()
+ {
+ ["id"] = "cat-005",
+ ["name"] = "Specialty Cheeses",
+ ["display_order"] = "5",
+ ["featured"] = "false",
+ ["parent_id"] = "cat-002",
+ },
+ };
+ }
+
+ public List<string> ListIds()
+ {
+ lock (_lock)
+ {
+ var ids = _records.Keys.ToList();
+ ids.Sort(StringComparer.Ordinal);
+ return ids;
+ }
+ }
+
+ /// <summary>Return every record. Used by the cache's bulk-load path on startup.</summary>
+ public List<Dictionary<string, string>> ListRecords()
+ {
+ Thread.Sleep(ReadLatencyMs);
+ lock (_lock)
+ {
+ Interlocked.Increment(ref _reads);
+ return _records.Values
+ .Select(r => new Dictionary<string, string>(r, StringComparer.Ordinal))
+ .ToList();
+ }
+ }
+
+ /// <summary>Single-record read. Not on the demo's normal read path.</summary>
+ public Dictionary<string, string>? Read(string entityId)
+ {
+ Thread.Sleep(ReadLatencyMs);
+ lock (_lock)
+ {
+ Interlocked.Increment(ref _reads);
+ return _records.TryGetValue(entityId, out var record)
+ ? new Dictionary<string, string>(record, StringComparer.Ordinal)
+ : null;
+ }
+ }
+
+ public bool AddRecord(Dictionary<string, string> record)
+ {
+ if (!record.TryGetValue("id", out var entityId) || string.IsNullOrEmpty(entityId?.Trim()))
+ {
+ return false;
+ }
+ entityId = entityId.Trim();
+ lock (_lock)
+ {
+ if (_records.ContainsKey(entityId))
+ {
+ return false;
+ }
+ _records[entityId] = new Dictionary<string, string>(record, StringComparer.Ordinal);
+ // Emit while the lock is held so the queue order matches the
+ // mutation order. Two concurrent callers cannot interleave
+ // mutation A -> mutation B -> emit B -> emit A.
+ EmitChangeLocked(ChangeOp.Upsert, entityId, new Dictionary<string, string>(record, StringComparer.Ordinal));
+ }
+ return true;
+ }
+
+ public bool UpdateField(string entityId, string field, string value)
+ {
+ lock (_lock)
+ {
+ if (!_records.TryGetValue(entityId, out var record))
+ {
+ return false;
+ }
+ record[field] = value;
+ EmitChangeLocked(
+ ChangeOp.Upsert,
+ entityId,
+ new Dictionary<string, string>(record, StringComparer.Ordinal));
+ }
+ return true;
+ }
+
+ public bool DeleteRecord(string entityId)
+ {
+ lock (_lock)
+ {
+ if (!_records.Remove(entityId))
+ {
+ return false;
+ }
+ EmitChangeLocked(ChangeOp.Delete, entityId, null);
+ }
+ return true;
+ }
+
+ /// <summary>Block up to <paramref name="timeout"/> for the next change event.</summary>
+ public ChangeEvent? NextChange(TimeSpan timeout)
+ {
+ if (_changes.TryTake(out var change, timeout))
+ {
+ return change;
+ }
+ return null;
+ }
+
+ public long Reads => Interlocked.Read(ref _reads);
+
+ public void ResetReads() => Interlocked.Exchange(ref _reads, 0);
+
+ /// <summary>
+ /// Append a change event to the feed. Caller must hold _lock.
+ ///
+ /// <see cref="BlockingCollection{T}.Add(T)"/> is itself thread-safe
+ /// and never tries to acquire _lock, so calling it while
+ /// holding the records lock cannot deadlock. Holding the lock here
+ /// is what guarantees that the queue order matches the order in
+ /// which the records dict was mutated.
+ /// </summary>
+ private void EmitChangeLocked(ChangeOp op, string entityId, Dictionary<string, string>? fields)
+ {
+ // Use millisecond-precision unix timestamp so the sync-lag
+ // metric is in the same shape as the Python reference.
+ var timestampMs = (double) DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
+ _changes.Add(new ChangeEvent(op, entityId, fields, timestampMs));
+ }
+}
diff --git a/content/develop/use-cases/prefetch-cache/dotnet/PrefetchCache.cs b/content/develop/use-cases/prefetch-cache/dotnet/PrefetchCache.cs
new file mode 100644
index 0000000000..571238686e
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/dotnet/PrefetchCache.cs
@@ -0,0 +1,307 @@
+using StackExchange.Redis;
+
+namespace PrefetchCacheDemo;
+
+/// <summary>
+/// Redis prefetch-cache helper.
+///
+/// Each cached entity is stored as a Redis hash under
+/// <c>{prefix}{id}</c> with a long safety-net TTL that bounds memory if
+/// the sync pipeline ever stops, but is not the freshness mechanism.
+/// Freshness comes from the <see cref="ApplyChange"/> path, which the
+/// sync worker calls every time a primary mutation arrives.
+///
+/// Reads run HGETALL against Redis only. A miss is not a
+/// fall-back trigger — the application treats it as an error or a
+/// deliberate <see cref="Invalidate"/> for testing. In production a
+/// sustained miss rate means the prefetch or the sync pipeline is
+/// broken, not that the primary should be re-queried on the request
+/// path.
+/// </summary>
+public class PrefetchCache
+{
+ private readonly IDatabase _db;
+ private readonly string _prefix;
+ private readonly int _ttlSeconds;
+
+ private readonly object _statsLock = new();
+ private long _hits;
+ private long _misses;
+ private long _prefetched;
+ private long _syncEventsApplied;
+ private double _syncLagMsTotal;
+ private long _syncLagSamples;
+
+ public PrefetchCache(IDatabase db, string prefix = "cache:category:", int ttlSeconds = 3600)
+ {
+ _db = db ?? throw new ArgumentNullException(nameof(db));
+ if (ttlSeconds < 1) throw new ArgumentException("ttlSeconds must be at least 1 second", nameof(ttlSeconds));
+ _prefix = string.IsNullOrEmpty(prefix) ? "cache:category:" : prefix;
+ _ttlSeconds = ttlSeconds;
+ }
+
+ public int TtlSeconds => _ttlSeconds;
+ public string Prefix => _prefix;
+
+ public sealed record ReadResult(Dictionary<string, string>? Record, bool Hit, double RedisLatencyMs);
+
+ /// <summary>
+ /// Pipeline DEL + HSET + EXPIRE for every record. Returns the count loaded.
+ ///
+ /// The batch is non-transactional: it is fast on startup (when
+ /// nothing is reading the cache) and on the live /reprefetch
+ /// path (when the demo pauses the sync worker around the call).
+ /// Calling BulkLoad on a cache that is actively being read
+ /// and written to can briefly expose a key that has been deleted
+ /// but not yet rewritten; pause the writers first or use a
+ /// transaction if that matters.
+ /// </summary>
+ public int BulkLoad(IEnumerable<Dictionary<string, string>> records)
+ {
+ var batch = _db.CreateBatch();
+ var tasks = new List<Task>();
+ var loaded = 0;
+ foreach (var record in records)
+ {
+ if (!record.TryGetValue("id", out var entityId) || string.IsNullOrEmpty(entityId))
+ {
+ continue;
+ }
+ var cacheKey = CacheKey(entityId);
+ tasks.Add(batch.KeyDeleteAsync(cacheKey));
+ tasks.Add(batch.HashSetAsync(
+ cacheKey,
+ record.Select(p => new HashEntry(p.Key, p.Value)).ToArray()));
+ tasks.Add(batch.KeyExpireAsync(cacheKey, TimeSpan.FromSeconds(_ttlSeconds)));
+ loaded++;
+ }
+ if (loaded > 0)
+ {
+ batch.Execute();
+ Task.WaitAll(tasks.ToArray());
+ }
+ lock (_statsLock)
+ {
+ _prefetched += loaded;
+ }
+ return loaded;
+ }
+
+ /// <summary>
+ /// Return (record, hit, redisLatencyMs) for an HGETALL against Redis.
+ ///
+ /// Prefetch-cache reads do not fall back to the primary. A miss is
+ /// a signal that the cache is incomplete, not a trigger to re-query
+ /// the source. The caller decides how to surface it.
+ /// </summary>
+ public ReadResult Get(string entityId)
+ {
+ var cacheKey = CacheKey(entityId);
+ var sw = System.Diagnostics.Stopwatch.StartNew();
+ var entries = _db.HashGetAll(cacheKey);
+ sw.Stop();
+ var redisLatencyMs = sw.Elapsed.TotalMilliseconds;
+
+ if (entries.Length > 0)
+ {
+ lock (_statsLock) { _hits++; }
+ return new ReadResult(ToDict(entries), Hit: true, redisLatencyMs);
+ }
+
+ lock (_statsLock) { _misses++; }
+ return new ReadResult(null, Hit: false, redisLatencyMs);
+ }
+
+ /// <summary>
+ /// Apply a primary change event to Redis.
+ ///
+ /// The sync worker calls this for every event the primary emits.
+ /// For an upsert, the helper rewrites the hash and refreshes the
+ /// safety-net TTL inside a transaction. For a delete, it removes
+ /// the cache key.
+ /// </summary>
+ public void ApplyChange(ChangeEvent change)
+ {
+ if (string.IsNullOrEmpty(change.Id)) return;
+ var cacheKey = CacheKey(change.Id);
+
+ if (change.Op == ChangeOp.Upsert)
+ {
+ if (change.Fields is null || change.Fields.Count == 0)
+ {
+ // Malformed upsert with no fields. Skip rather than
+ // crash the sync worker: HSET with an empty array
+ // throws, and there's nothing to write anyway. A real
+ // CDC consumer would route this to a dead-letter queue
+ // and alert; the demo just drops it.
+ return;
+ }
+ // StackExchange.Redis CreateTransaction queues the commands
+ // client-side and sends them to the server as a single MULTI/EXEC
+ // block. No conditions (WATCH) are attached here, so the three
+ // commands are dispatched atomically in one round trip.
+ var tx = _db.CreateTransaction();
+ _ = tx.KeyDeleteAsync(cacheKey);
+ _ = tx.HashSetAsync(
+ cacheKey,
+ change.Fields.Select(p => new HashEntry(p.Key, p.Value)).ToArray());
+ _ = tx.KeyExpireAsync(cacheKey, TimeSpan.FromSeconds(_ttlSeconds));
+ tx.Execute();
+ }
+ else if (change.Op == ChangeOp.Delete)
+ {
+ _db.KeyDelete(cacheKey);
+ }
+ else
+ {
+ return;
+ }
+
+ lock (_statsLock)
+ {
+ _syncEventsApplied++;
+ if (change.TimestampMs > 0.0)
+ {
+ var nowMs = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
+ var lagMs = Math.Max(0.0, nowMs - change.TimestampMs);
+ _syncLagMsTotal += lagMs;
+ _syncLagSamples++;
+ }
+ }
+ }
+
+ /// <summary>Delete one cache key. Demo-only: simulates a broken sync pipeline.</summary>
+ public bool Invalidate(string entityId)
+ {
+ return _db.KeyDelete(CacheKey(entityId));
+ }
+
+ /// <summary>Delete every key under this cache's prefix and return the count.</summary>
+ public int Clear()
+ {
+ var deleted = 0;
+ var batch = new List<RedisKey>(500);
+ foreach (var key in ScanKeys())
+ {
+ batch.Add(key);
+ if (batch.Count >= 500)
+ {
+ deleted += (int) _db.KeyDelete(batch.ToArray());
+ batch.Clear();
+ }
+ }
+ if (batch.Count > 0)
+ {
+ deleted += (int) _db.KeyDelete(batch.ToArray());
+ }
+ return deleted;
+ }
+
+ /// <summary>Return every entity id currently in the cache.</summary>
+ public List<string> Ids()
+ {
+ var ids = new List<string>();
+ foreach (var key in ScanKeys())
+ {
+ var s = (string) key!;
+ ids.Add(s.StartsWith(_prefix, StringComparison.Ordinal) ? s.Substring(_prefix.Length) : s);
+ }
+ ids.Sort(StringComparer.Ordinal);
+ return ids;
+ }
+
+ /// <summary>
+ /// Iterate every key under the cache's prefix using a raw SCAN command.
+ ///
+ /// Sending SCAN through IDatabase.Execute avoids
+ /// IServer.Keys, which would require AllowAdmin=true on
+ /// the connection options — a flag that also grants
+ /// FLUSHDB/CONFIG and is best avoided in production.
+ /// </summary>
+ private IEnumerable<RedisKey> ScanKeys()
+ {
+ var cursor = "0";
+ var match = $"{_prefix}*";
+ do
+ {
+ var reply = (RedisResult[]) _db.Execute(
+ "SCAN", cursor, "MATCH", match, "COUNT", 500)!;
+ cursor = (string) reply[0]!;
+ var keys = (RedisResult[]) reply[1]!;
+ foreach (var key in keys)
+ {
+ yield return (RedisKey) (string) key!;
+ }
+ } while (cursor != "0");
+ }
+
+ public int Count() => Ids().Count;
+
+ public long TtlRemaining(string entityId)
+ {
+ // Use Execute("TTL", ...) rather than KeyTimeToLive: the latter
+ // returns `null` for BOTH a missing key and a key without a TTL,
+ // collapsing the -2 and -1 sentinels. Execute returns the raw
+ // integer so the demo UI can show the correct value in each case.
+ return (long) _db.Execute("TTL", CacheKey(entityId));
+ }
+
+ public Dictionary<string, object> Stats()
+ {
+ lock (_statsLock)
+ {
+ var total = _hits + _misses;
+ var hitRate = total == 0 ? 0.0 : Math.Round(100.0 * _hits / total, 1);
+ var avgLag = _syncLagSamples == 0
+ ? 0.0
+ : Math.Round(_syncLagMsTotal / _syncLagSamples, 2);
+ return new Dictionary<string, object>
+ {
+ ["hits"] = _hits,
+ ["misses"] = _misses,
+ ["hit_rate_pct"] = hitRate,
+ ["prefetched"] = _prefetched,
+ ["sync_events_applied"] = _syncEventsApplied,
+ ["sync_lag_ms_avg"] = avgLag,
+ };
+ }
+ }
+
+ public void ResetStats()
+ {
+ lock (_statsLock)
+ {
+ _hits = 0;
+ _misses = 0;
+ _prefetched = 0;
+ _syncEventsApplied = 0;
+ _syncLagMsTotal = 0.0;
+ _syncLagSamples = 0;
+ }
+ }
+
+ private static Dictionary<string, string> ToDict(HashEntry[] entries)
+ {
+ var result = new Dictionary<string, string>(entries.Length, StringComparer.Ordinal);
+ foreach (var entry in entries)
+ {
+ result[entry.Name!] = entry.Value!;
+ }
+ return result;
+ }
+
+ private string CacheKey(string id) => _prefix + id;
+}
+
+public enum ChangeOp
+{
+ Upsert,
+ Delete,
+}
+
+/// <summary>
+/// A single primary change event. <see cref="Fields"/> is null for
+/// deletes and a fully-formed record for upserts.
+/// <see cref="TimestampMs"/> is the unix epoch in milliseconds.
+/// </summary>
+public sealed record ChangeEvent(ChangeOp Op, string Id, Dictionary<string, string>? Fields, double TimestampMs);
diff --git a/content/develop/use-cases/prefetch-cache/dotnet/PrefetchCacheDemo.csproj b/content/develop/use-cases/prefetch-cache/dotnet/PrefetchCacheDemo.csproj
new file mode 100644
index 0000000000..909cf1efd3
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/dotnet/PrefetchCacheDemo.csproj
@@ -0,0 +1,14 @@
+<Project Sdk="Microsoft.NET.Sdk.Web">
+
+  <PropertyGroup>
+    <TargetFramework>net8.0</TargetFramework>
+    <Nullable>enable</Nullable>
+    <ImplicitUsings>enable</ImplicitUsings>
+    <RootNamespace>PrefetchCacheDemo</RootNamespace>
+  </PropertyGroup>
+
+  <ItemGroup>
+    <PackageReference Include="StackExchange.Redis" Version="2.*" />
+  </ItemGroup>
+
+</Project>
diff --git a/content/develop/use-cases/prefetch-cache/dotnet/Program.cs b/content/develop/use-cases/prefetch-cache/dotnet/Program.cs
new file mode 100644
index 0000000000..375dfb2a0d
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/dotnet/Program.cs
@@ -0,0 +1,642 @@
+using PrefetchCacheDemo;
+using StackExchange.Redis;
+
+// .NET grows its ThreadPool gradually (~2 threads/sec under load),
+// which can starve polling threads in the pause/resume race test and
+// produce false fall-through reads. Raising the floor up front keeps
+// the demo's "cache converges to the primary state under load"
+// behaviour clean. A production helper would be async (HashGetAllAsync,
+// await Task.Delay) and avoid this entirely.
+ThreadPool.SetMinThreads(64, 64);
+
+// pauseMu serialises /clear and /reprefetch so two concurrent admin
+// callers cannot pause/resume each other's critical sections and
+// leave the sync worker running while a clear or reload is in flight.
+// Mirrors the `pauseMu sync.Mutex` in the go-redis port.
+var pauseMu = new object();
+
+var host = "127.0.0.1";
+var port = 8787;
+var redisHost = "localhost";
+var redisPort = 6379;
+var cachePrefix = "cache:category:";
+var ttlSeconds = 3600;
+var primaryLatencyMs = 80;
+
+for (var i = 0; i < args.Length; i++)
+{
+ switch (args[i])
+ {
+ case "--host" when i + 1 < args.Length: host = args[++i]; break;
+ case "--port" when i + 1 < args.Length: port = int.Parse(args[++i]); break;
+ case "--redis-host" when i + 1 < args.Length: redisHost = args[++i]; break;
+ case "--redis-port" when i + 1 < args.Length: redisPort = int.Parse(args[++i]); break;
+ case "--cache-prefix" when i + 1 < args.Length: cachePrefix = args[++i]; break;
+ case "--ttl-seconds" when i + 1 < args.Length: ttlSeconds = int.Parse(args[++i]); break;
+ case "--primary-latency-ms" when i + 1 < args.Length: primaryLatencyMs = int.Parse(args[++i]); break;
+ }
+}
+
+port = int.TryParse(Environment.GetEnvironmentVariable("PORT"), out var envPort) ? envPort : port;
+redisHost = Environment.GetEnvironmentVariable("REDIS_HOST") ?? redisHost;
+redisPort = int.TryParse(Environment.GetEnvironmentVariable("REDIS_PORT"), out var envRedisPort)
+ ? envRedisPort
+ : redisPort;
+
+ConnectionMultiplexer redis;
+try
+{
+ var configuration = ConfigurationOptions.Parse($"{redisHost}:{redisPort}");
+ redis = ConnectionMultiplexer.Connect(configuration);
+ redis.GetDatabase().Ping();
+}
+catch (Exception ex)
+{
+ Console.Error.WriteLine($"Failed to connect to Redis at {redisHost}:{redisPort}: {ex.Message}");
+ return 1;
+}
+
+var cache = new PrefetchCache(redis.GetDatabase(), prefix: cachePrefix, ttlSeconds: ttlSeconds);
+var primary = new MockPrimaryStore(primaryLatencyMs);
+var sync = new SyncWorker(primary, cache);
+
+var startupSw = System.Diagnostics.Stopwatch.StartNew();
+cache.Clear();
+var initialLoaded = cache.BulkLoad(primary.ListRecords());
+startupSw.Stop();
+sync.Start();
+
+var builder = WebApplication.CreateBuilder();
+builder.WebHost.UseUrls($"http://{host}:{port}");
+builder.Logging.SetMinimumLevel(LogLevel.Warning);
+var app = builder.Build();
+
+Dictionary<string, object> BuildStats()
+{
+ var stats = cache.Stats();
+ stats["primary_reads_total"] = primary.Reads;
+ stats["primary_read_latency_ms"] = primary.ReadLatencyMs;
+ return stats;
+}
+
+double Round2(double value) => Math.Round(value, 2);
+
+app.MapGet("/", () => Results.Content(HtmlPage.Generate(cache.TtlSeconds), "text/html; charset=utf-8"));
+
+app.MapGet("/categories", () => Results.Json(new
+{
+ cache_ids = cache.Ids(),
+ primary_ids = primary.ListIds(),
+}));
+
+app.MapGet("/read", (string? id) =>
+{
+ if (string.IsNullOrEmpty(id))
+ {
+ return Results.BadRequest(new { error = "Missing 'id' query parameter." });
+ }
+ var result = cache.Get(id);
+ return Results.Json(new
+ {
+ id,
+ record = result.Record,
+ hit = result.Hit,
+ redis_latency_ms = Round2(result.RedisLatencyMs),
+ ttl_remaining = cache.TtlRemaining(id),
+ stats = BuildStats(),
+ });
+});
+
+app.MapGet("/stats", () => Results.Json(BuildStats()));
+
+app.MapPost("/update", async (HttpContext ctx) =>
+{
+ var form = await ctx.Request.ReadFormAsync();
+ var id = form["id"].ToString();
+ var field = form["field"].ToString();
+ var value = form["value"].ToString();
+ if (string.IsNullOrEmpty(id) || string.IsNullOrEmpty(field))
+ {
+ return Results.BadRequest(new { error = "Missing 'id' or 'field'." });
+ }
+ if (!primary.UpdateField(id, field, value))
+ {
+ return Results.NotFound(new { error = $"Unknown category '{id}'." });
+ }
+ return Results.Json(new { id, field, value, stats = BuildStats() });
+});
+
+app.MapPost("/add", async (HttpContext ctx) =>
+{
+ var form = await ctx.Request.ReadFormAsync();
+ var id = form["id"].ToString().Trim();
+ var name = form["name"].ToString().Trim();
+ if (string.IsNullOrEmpty(id) || string.IsNullOrEmpty(name))
+ {
+ return Results.BadRequest(new { error = "Missing 'id' or 'name'." });
+ }
+ var displayOrder = form["display_order"].ToString();
+ if (string.IsNullOrEmpty(displayOrder)) displayOrder = "99";
+ var featured = form["featured"].ToString();
+ if (string.IsNullOrEmpty(featured)) featured = "false";
+ var parentId = form["parent_id"].ToString();
+ var record = new Dictionary(StringComparer.Ordinal)
+ {
+ ["id"] = id,
+ ["name"] = name,
+ ["display_order"] = displayOrder,
+ ["featured"] = featured,
+ ["parent_id"] = parentId,
+ };
+ if (!primary.AddRecord(record))
+ {
+ return Results.Json(new { error = $"Category '{id}' already exists." }, statusCode: 409);
+ }
+ return Results.Json(new { id, record, stats = BuildStats() });
+});
+
+app.MapPost("/delete", async (HttpContext ctx) =>
+{
+ var form = await ctx.Request.ReadFormAsync();
+ var id = form["id"].ToString();
+ if (string.IsNullOrEmpty(id))
+ {
+ return Results.BadRequest(new { error = "Missing 'id'." });
+ }
+ if (!primary.DeleteRecord(id))
+ {
+ return Results.NotFound(new { error = $"Unknown category '{id}'." });
+ }
+ return Results.Json(new { id, stats = BuildStats() });
+});
+
+app.MapPost("/invalidate", async (HttpContext ctx) =>
+{
+ var form = await ctx.Request.ReadFormAsync();
+ var id = form["id"].ToString();
+ if (string.IsNullOrEmpty(id))
+ {
+ return Results.BadRequest(new { error = "Missing 'id'." });
+ }
+ var deleted = cache.Invalidate(id);
+ return Results.Json(new { id, deleted, stats = BuildStats() });
+});
+
+app.MapPost("/clear", () =>
+{
+ // Serialise admin handlers so two concurrent callers cannot
+ // pause/resume each other and leave the sync worker running mid-clear.
+ lock (pauseMu)
+ {
+ // Pause the sync worker so it cannot recreate keys between SCAN
+ // and DEL. Queued events accumulate and apply after resume.
+ sync.Pause();
+ int deleted;
+ try
+ {
+ deleted = cache.Clear();
+ }
+ finally
+ {
+ sync.Resume();
+ }
+ return Results.Json(new { deleted, stats = BuildStats() });
+ }
+});
+
+app.MapPost("/reprefetch", () =>
+{
+ // Serialise admin handlers so two concurrent callers cannot
+ // pause/resume each other and leave the sync worker running mid-reload.
+ lock (pauseMu)
+ {
+ // Pause the sync worker so it cannot interleave with the
+ // clear + snapshot + bulk_load sequence. Without this, a change
+ // applied between ListRecords() and BulkLoad() would be overwritten
+ // by the stale snapshot.
+ sync.Pause();
+ int loaded;
+ double elapsedMs;
+ try
+ {
+ var sw = System.Diagnostics.Stopwatch.StartNew();
+ cache.Clear();
+ loaded = cache.BulkLoad(primary.ListRecords());
+ sw.Stop();
+ elapsedMs = sw.Elapsed.TotalMilliseconds;
+ }
+ finally
+ {
+ sync.Resume();
+ }
+ return Results.Json(new
+ {
+ loaded,
+ elapsed_ms = Round2(elapsedMs),
+ stats = BuildStats(),
+ });
+ }
+});
+
+app.MapPost("/reset", () =>
+{
+ cache.ResetStats();
+ primary.ResetReads();
+ return Results.Json(BuildStats());
+});
+
+Console.WriteLine($"Redis prefetch-cache demo server listening on http://{host}:{port}");
+Console.WriteLine(
+ $"Using Redis at {redisHost}:{redisPort}" +
+ $" with cache prefix '{cachePrefix}' and TTL {ttlSeconds}s");
+Console.WriteLine($"Prefetched {initialLoaded} records in {startupSw.Elapsed.TotalMilliseconds:F1} ms; sync worker running");
+
+AppDomain.CurrentDomain.ProcessExit += (_, _) => sync.Stop();
+Console.CancelKeyPress += (_, _) => sync.Stop();
+
+app.Run();
+sync.Stop();
+return 0;
+
+static class HtmlPage
+{
+ public static string Generate(int cacheTtl)
+ {
+ return Template.Replace("__CACHE_TTL__", cacheTtl.ToString());
+ }
+
+ // Abridged reconstruction of the Python reference's HTML_TEMPLATE.
+ // The embedded styles, form controls, and client-side script did not
+ // survive this listing, so only the page structure and copy are
+ // reproduced; the pill text describes the .NET stack. See the Python
+ // reference for the complete page.
+ private const string Template = """
+<!DOCTYPE html>
+<html lang="en">
+<head>
+<meta charset="utf-8">
+<title>Redis Prefetch Cache Demo</title>
+</head>
+<body>
+<span class="pill">StackExchange.Redis + ASP.NET Core minimal API</span>
+<h1>Redis Prefetch Cache Demo</h1>
+<p>
+Every record from the primary store has been pre-loaded into Redis.
+Reads run <code>HGETALL</code> against Redis only — there is no
+fall-back to the primary on the read path. When you add, update, or
+delete a record, the primary emits a change event that a background
+sync worker applies to Redis within a few milliseconds. A long
+safety-net TTL (__CACHE_TTL__ s) is refreshed on every add or update
+event (delete events remove the key) and bounds memory if sync ever stops.
+</p>
+<h2>Cache state</h2>
+<p>Loading...</p>
+<h2>Read a category</h2>
+<p>Reads come from Redis only. Every read should be a hit because
+the cache was pre-loaded and the sync worker keeps it current.</p>
+<h2>Update a field</h2>
+<p>Updates write to the primary. The sync worker picks up the
+change event and rewrites the cache hash within milliseconds.</p>
+<h2>Add a category</h2>
+<p>Inserts to the primary propagate to the cache through the same
+sync path.</p>
+<h2>Delete a category</h2>
+<p>Deletes remove the record from the primary, and the sync worker
+removes the cache entry.</p>
+<h2>Break the cache</h2>
+<p>Simulate a failure of the sync pipeline. Reads against the
+affected key(s) return a miss until you re-prefetch.</p>
+<h2>Cache stats</h2>
+<p>Loading...</p>
+<h2>Last result</h2>
+<p>Read a category to see the cached record and timing.</p>
+<!-- Interactive forms, styles, and the polling script are omitted. -->
+</body>
+</html>
+""";
+}
+
diff --git a/content/develop/use-cases/prefetch-cache/dotnet/SyncWorker.cs b/content/develop/use-cases/prefetch-cache/dotnet/SyncWorker.cs
new file mode 100644
index 0000000000..7c5d12f621
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/dotnet/SyncWorker.cs
@@ -0,0 +1,139 @@
+namespace PrefetchCacheDemo;
+
+/// <summary>
+/// Background sync worker for the prefetch-cache demo.
+///
+/// A long-running background thread drains the primary's
+/// change queue and applies each event to Redis through
+/// <see cref="PrefetchCache.ApplyChange"/>. In a real system, the queue
+/// is replaced by a CDC pipeline (Redis Data Integration, Debezium, or
+/// an equivalent) that tails the primary's binlog/WAL and writes the
+/// same shape of events.
+///
+/// The worker exposes <see cref="Pause"/> and <see cref="Resume"/> so
+/// maintenance paths (/reprefetch, /clear)
+/// can stop event application without tearing the thread down.
+/// <see cref="Pause"/> blocks until the worker is parked, so the caller
+/// knows no apply is in flight by the time it returns.
+/// </summary>
+public class SyncWorker
+{
+ private readonly MockPrimaryStore _primary;
+ private readonly PrefetchCache _cache;
+ private readonly TimeSpan _pollTimeout;
+ private readonly ManualResetEventSlim _stopEvent = new(false);
+ private readonly ManualResetEventSlim _pauseEvent = new(false);
+ private readonly ManualResetEventSlim _pausedIdleEvent = new(false);
+ private readonly object _threadLock = new();
+ private Thread? _thread;
+
+ public SyncWorker(MockPrimaryStore primary, PrefetchCache cache, TimeSpan? pollTimeout = null)
+ {
+ _primary = primary ?? throw new ArgumentNullException(nameof(primary));
+ _cache = cache ?? throw new ArgumentNullException(nameof(cache));
+ _pollTimeout = pollTimeout ?? TimeSpan.FromMilliseconds(50);
+ }
+
+ public void Start()
+ {
+ lock (_threadLock)
+ {
+ if (_thread is not null && _thread.IsAlive) return;
+ _stopEvent.Reset();
+ _pauseEvent.Reset();
+ _pausedIdleEvent.Reset();
+ _thread = new Thread(Run)
+ {
+ Name = "prefetch-cache-sync",
+ IsBackground = true,
+ };
+ _thread.Start();
+ }
+ }
+
+ /// <summary>
+ /// Signal the worker to exit and join its thread.
+ ///
+ /// If the join times out the worker is wedged inside
+ /// <see cref="PrefetchCache.ApplyChange"/>; we leave
+ /// _thread populated so a subsequent <see cref="Start"/>
+ /// does not spawn a second worker on top of the orphan.
+ /// </summary>
+ public void Stop(TimeSpan? joinTimeout = null)
+ {
+ var timeout = joinTimeout ?? TimeSpan.FromSeconds(2);
+ _stopEvent.Set();
+ Thread? toJoin;
+ lock (_threadLock) { toJoin = _thread; }
+ if (toJoin is null) return;
+ if (toJoin.Join(timeout))
+ {
+ lock (_threadLock)
+ {
+ if (!toJoin.IsAlive) _thread = null;
+ }
+ }
+ }
+
+ /// <summary>
+ /// Stop applying events and block until the worker is parked.
+ ///
+ /// Returns true once the worker has confirmed it is idle, or
+ /// false if the timeout elapsed first. While paused, change
+ /// events accumulate in the primary's queue and are applied in
+ /// order after <see cref="Resume"/>.
+ /// </summary>
+ public bool Pause(TimeSpan? timeout = null)
+ {
+ var waitFor = timeout ?? TimeSpan.FromSeconds(2);
+ _pausedIdleEvent.Reset();
+ _pauseEvent.Set();
+ Thread? current;
+ lock (_threadLock) { current = _thread; }
+ if (current is null || !current.IsAlive) return true;
+ return _pausedIdleEvent.Wait(waitFor);
+ }
+
+ public void Resume()
+ {
+ _pauseEvent.Reset();
+ _pausedIdleEvent.Reset();
+ }
+
+ private void Run()
+ {
+ while (!_stopEvent.IsSet)
+ {
+ if (_pauseEvent.IsSet)
+ {
+ // Park until the pause is lifted or the worker is stopped.
+ // Re-Set _pausedIdleEvent on every iteration so a *new*
+ // Pause call that arrives while we are still parked from
+ // the previous cycle gets acknowledged within one poll
+ // interval, not the Pause's 2 s timeout.
+ while (_pauseEvent.IsSet && !_stopEvent.IsSet)
+ {
+ _pausedIdleEvent.Set();
+ _stopEvent.Wait(_pollTimeout);
+ }
+ _pausedIdleEvent.Reset();
+ continue;
+ }
+
+ var change = _primary.NextChange(_pollTimeout);
+ if (change is null) continue;
+ try
+ {
+ _cache.ApplyChange(change);
+ }
+ catch (Exception ex)
+ {
+ // Demo behaviour: log and drop the event. A production
+ // CDC consumer would retry with bounded backoff and
+ // expose a dead-letter / error counter; see the guide's
+ // "Production usage" section.
+ Console.Error.WriteLine($"[sync] failed to apply {change}: {ex.Message}");
+ }
+ }
+ }
+}
diff --git a/content/develop/use-cases/prefetch-cache/dotnet/_index.md b/content/develop/use-cases/prefetch-cache/dotnet/_index.md
new file mode 100644
index 0000000000..fc773433d3
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/dotnet/_index.md
@@ -0,0 +1,426 @@
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+description: Implement a Redis prefetch cache in C# with StackExchange.Redis
+linkTitle: StackExchange.Redis example (C#)
+title: Redis prefetch cache with StackExchange.Redis
+weight: 6
+---
+
+This guide shows you how to implement a Redis prefetch cache in C# with [StackExchange.Redis](https://stackexchange.github.io/StackExchange.Redis/). It includes a small local web server built with ASP.NET Core minimal APIs so you can watch the cache pre-load at startup, see a background sync worker apply primary mutations within milliseconds, and break the cache to confirm that reads never fall back to the primary.
+
+## Overview
+
+Prefetch caching pre-loads a working set of reference data into Redis before the first request arrives, so every read on the request path is a cache hit. A separate sync worker keeps the cache current as the source of truth changes — there is no fall-back to the primary on the read path.
+
+That gives you:
+
+* Near-100% cache hit ratios for reference and master data
+* Sub-millisecond reads for lookup-heavy paths at peak traffic
+* All reference-data reads offloaded from the primary database
+* Source-database changes propagated into Redis within a few milliseconds
+* A long safety-net TTL that bounds memory if the sync pipeline ever stops
+
+In this example, each cached category is stored as a Redis hash under a key like `cache:category:{id}`. The hash holds the category fields (`id`, `name`, `display_order`, `featured`, `parent_id`) and the key has a long safety-net TTL that the sync worker refreshes on every add or update event. Delete events remove the cache key outright, so there is no TTL to refresh in that case.
+
+## How it works
+
+The flow has three independent paths:
+
+1. **On startup**, the demo server calls `cache.BulkLoad(primary.ListRecords())`, which pipelines `DEL` + `HSET` + `EXPIRE` for every record in one round trip.
+2. **On every read**, the application calls `cache.Get(entityId)`, which runs `HGETALL` against Redis only. A miss is treated as an error, not a trigger to query the primary.
+3. **On every primary mutation**, the primary appends a change event to an in-process queue. The sync worker thread drains the queue and calls `cache.ApplyChange(event)`. For an `Upsert`, the helper rewrites the cache hash and refreshes the safety-net TTL; for a `Delete`, it removes the cache key.
+
+In a real system the in-process change queue is replaced by a CDC pipeline — [Redis Data Integration]({{< relref "/integrate/redis-data-integration" >}}), Debezium plus a lightweight consumer, or an equivalent tool that tails the source's binlog/WAL and pushes events into Redis.
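+
+The three paths above compress into a few lines of C# (abridged from the demo's `Program.cs` and `SyncWorker.cs`; `running` stands in for the worker's stop/pause signalling):
+
+```csharp
+// 1. Startup: one pipelined round trip loads the whole working set.
+cache.BulkLoad(primary.ListRecords());
+
+// 2. Read path: Redis only. A miss is an incident, not a fallback trigger.
+var result = cache.Get("cat-001");
+
+// 3. Sync path: drain the change queue and apply each event in order.
+while (running)
+{
+    var change = primary.NextChange(TimeSpan.FromMilliseconds(50));
+    if (change is null) continue;  // queue idle; poll again
+    cache.ApplyChange(change);     // Upsert rewrites hash + TTL; Delete removes the key
+}
+```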
+
+## The prefetch-cache helper
+
+The `PrefetchCache` class wraps the cache operations
+([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/prefetch-cache/dotnet/PrefetchCache.cs)):
+
+```csharp
+using StackExchange.Redis;
+using PrefetchCacheDemo;
+
+var redis = ConnectionMultiplexer.Connect("localhost:6379");
+var primary = new MockPrimaryStore();
+var cache = new PrefetchCache(redis.GetDatabase(), ttlSeconds: 3600);
+
+// Pre-load every primary record into Redis in one pipelined round trip.
+cache.BulkLoad(primary.ListRecords());
+
+// Start the sync worker so primary mutations propagate into Redis.
+var sync = new SyncWorker(primary, cache);
+sync.Start();
+
+// Read paths now go to Redis only.
+var result = cache.Get("cat-001");
+```
+
+### Data model
+
+Each cached category is stored in a Redis hash:
+
+```text
+cache:category:cat-001
+ id = cat-001
+ name = Beverages
+ display_order = 1
+ featured = true
+ parent_id =
+```
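+
+A quick way to confirm the stored shape is to inspect a key with `redis-cli` while the demo is running. The session below assumes the default `cache:category:` prefix; the TTL value shown is illustrative, since it depends on how long ago the key was written:
+
+```text
+> HGETALL cache:category:cat-001
+ 1) "id"
+ 2) "cat-001"
+ 3) "name"
+ 4) "Beverages"
+ 5) "display_order"
+ 6) "1"
+ 7) "featured"
+ 8) "true"
+ 9) "parent_id"
+10) ""
+> TTL cache:category:cat-001
+(integer) 3542
+```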
+
+The implementation uses:
+
+* [`HSET`]({{< relref "/commands/hset" >}}) + [`EXPIRE`]({{< relref "/commands/expire" >}}), batched, for the bulk load and every sync event
+* [`HGETALL`]({{< relref "/commands/hgetall" >}}) on the read path
+* [`DEL`]({{< relref "/commands/del" >}}) for sync-delete events and explicit invalidation
+* [`SCAN`]({{< relref "/commands/scan" >}}) to enumerate the cached keyspace and to clear the prefix
+* [`TTL`]({{< relref "/commands/ttl" >}}) to surface remaining safety-net time in the demo UI
+
+## Bulk load on startup
+
+The `BulkLoad` method pipelines a `DEL` + `HSET` + `EXPIRE` triple for every record through a StackExchange.Redis `IBatch`, so loading thousands of records takes one network RTT plus the time Redis spends executing the commands locally — typically tens of milliseconds even for a large reference table:
+
+```csharp
+public int BulkLoad(IEnumerable<Dictionary<string, string>> records)
+{
+ var batch = _db.CreateBatch();
+    var tasks = new List<Task>();
+ var loaded = 0;
+ foreach (var record in records)
+ {
+ if (!record.TryGetValue("id", out var entityId) || string.IsNullOrEmpty(entityId)) continue;
+ var cacheKey = CacheKey(entityId);
+ tasks.Add(batch.KeyDeleteAsync(cacheKey));
+ tasks.Add(batch.HashSetAsync(
+ cacheKey,
+ record.Select(p => new HashEntry(p.Key, p.Value)).ToArray()));
+ tasks.Add(batch.KeyExpireAsync(cacheKey, TimeSpan.FromSeconds(_ttlSeconds)));
+ loaded++;
+ }
+ if (loaded > 0)
+ {
+ batch.Execute();
+ Task.WaitAll(tasks.ToArray());
+ }
+ return loaded;
+}
+```
+
+`IBatch` is non-transactional on purpose for the **startup** path: nothing is reading the cache yet, the records do not need to be applied atomically as a set, and skipping `MULTI`/`EXEC` keeps the bulk load fast. The same method is used for the live `/reprefetch` reload, which is safe because the demo pauses the sync worker around the clear-and-reload sequence — see [Re-prefetch under load](#re-prefetch-under-load) below. If you call `BulkLoad` directly from your own code on a cache that is already serving reads, either pause your writers first or rewrite it with `IDatabase.CreateTransaction()` so callers cannot observe a half-loaded record.
+
+## Reads from Redis only
+
+The `Get` method runs `HGETALL` and returns the cached hash. **It does not fall back to the primary on a miss.** In a healthy system, a miss never happens; if it does, the application surfaces it as an error and treats it as a sync-pipeline incident:
+
+```csharp
+public ReadResult Get(string entityId)
+{
+ var cacheKey = CacheKey(entityId);
+ var sw = System.Diagnostics.Stopwatch.StartNew();
+ var entries = _db.HashGetAll(cacheKey);
+ sw.Stop();
+ var redisLatencyMs = sw.Elapsed.TotalMilliseconds;
+
+ if (entries.Length > 0)
+ {
+ lock (_statsLock) { _hits++; }
+ return new ReadResult(ToDict(entries), Hit: true, redisLatencyMs);
+ }
+
+ lock (_statsLock) { _misses++; }
+ return new ReadResult(null, Hit: false, redisLatencyMs);
+}
+```
+
+This is the key behavioural difference from [cache-aside]({{< relref "/develop/use-cases/cache-aside" >}}): the request path never touches the primary, so reference-data reads cannot contribute to primary database load.
+
+## Applying sync events
+
+The sync worker calls `ApplyChange` for every primary mutation. For an `Upsert`, the helper rewrites the cache hash and refreshes the safety-net TTL inside a StackExchange.Redis transaction (`IDatabase.CreateTransaction()`) so the cache never holds a stale mix of old and new fields. For a `Delete`, it removes the cache key:
+
+```csharp
+public void ApplyChange(ChangeEvent change)
+{
+ if (string.IsNullOrEmpty(change.Id)) return;
+ var cacheKey = CacheKey(change.Id);
+
+ if (change.Op == ChangeOp.Upsert)
+ {
+ if (change.Fields is null || change.Fields.Count == 0) return;
+ var tx = _db.CreateTransaction();
+ _ = tx.KeyDeleteAsync(cacheKey);
+ _ = tx.HashSetAsync(
+ cacheKey,
+ change.Fields.Select(p => new HashEntry(p.Key, p.Value)).ToArray());
+ _ = tx.KeyExpireAsync(cacheKey, TimeSpan.FromSeconds(_ttlSeconds));
+ tx.Execute();
+ }
+ else if (change.Op == ChangeOp.Delete)
+ {
+ _db.KeyDelete(cacheKey);
+ }
+}
+```
+
+The `DEL` before the `HSET` ensures the cached hash contains exactly the fields the primary record has now — fields that have been dropped from the primary will not linger in Redis. StackExchange.Redis transactions are optimistic (WATCH-based under the hood), but the three commands here have no conditions so they queue and dispatch atomically in a single round trip.
+
+The "skip empty upserts" early-return is important: `HSET` with an empty array of fields throws, and a CDC pipeline that ever emits an upsert without fields would crash the sync worker on first encounter. A production consumer would route the bad event to a dead-letter queue and alert; the demo simply drops it.
+
+## The sync worker
+
+The `SyncWorker` runs on a dedicated, long-running background `Thread` (not a `Task`) so it can poll the change queue without consuming a ThreadPool slot. Every change is applied to Redis as soon as it arrives
+([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/prefetch-cache/dotnet/SyncWorker.cs)):
+
+```csharp
+private void Run()
+{
+ while (!_stopEvent.IsSet)
+ {
+ if (_pauseEvent.IsSet)
+ {
+ _pausedIdleEvent.Set();
+ while (_pauseEvent.IsSet && !_stopEvent.IsSet)
+ {
+ _stopEvent.Wait(_pollTimeout);
+ }
+ _pausedIdleEvent.Reset();
+ continue;
+ }
+
+ var change = _primary.NextChange(_pollTimeout);
+ if (change is null) continue;
+ try { _cache.ApplyChange(change); }
+ catch (Exception ex)
+ {
+ Console.Error.WriteLine($"[sync] failed to apply {change}: {ex.Message}");
+ }
+ }
+}
+```
+
+`ManualResetEventSlim` provides the pause and stop signals. `BlockingCollection<ChangeEvent>.TryTake(out _, timeout)` is the .NET equivalent of `queue.Queue.get(timeout=…)` from the reference; the 50 ms timeout keeps the worker responsive to pause and stop requests without busy-looping.
+
+In production this loop is replaced by a CDC consumer reading from RDI's Redis output stream, Debezium's Kafka topic, or an equivalent change feed. The shape stays the same: drain events, apply them to Redis, advance the consumer offset.
+
+## Invalidation and re-prefetch
+
+Two helpers exist for testing and recovery:
+
+* `Invalidate(entityId)` deletes a single cache key. The demo uses it to simulate a sync-pipeline failure on one record.
+* `Clear()` runs `SCAN MATCH cache:category:*` and deletes every key under the prefix. The demo uses it to simulate a full cache loss.
+
+In both cases, the recovery path is to call `BulkLoad(primary.ListRecords())` again — re-prefetching from the primary. The demo exposes this as the "Re-prefetch" button so you can see the cache come back to a fully-warm state in one operation.
+
+### Re-prefetch under load
+
+`Clear()` and `BulkLoad()` are not atomic against the sync worker. If a change event arrives between the snapshot (`primary.ListRecords()`) and the bulk write, the bulk write can overwrite a newer value; if a change event arrives between `Clear()`'s `SCAN` and `DEL`, the cleared entry can immediately be recreated. The demo's `/clear` and `/reprefetch` handlers solve this by pausing the sync worker around the operation:
+
+```csharp
+sync.Pause();
+try
+{
+ cache.Clear();
+ cache.BulkLoad(primary.ListRecords());
+}
+finally
+{
+ sync.Resume();
+}
+```
+
+`Pause()` waits for the worker to finish whatever event it is currently applying, parks the run loop, and returns. Change events that arrive during the pause sit in the primary's `BlockingCollection` queue and apply in order once `Resume()` is called, so no event is lost.
+
+## Hit/miss accounting
+
+The helper keeps in-process counters for hits, misses, prefetched records, sync events applied, and the average lag between a primary change and its application to Redis. The demo UI surfaces these so you can confirm the cache is absorbing all reads and the sync worker is keeping up:
+
+```csharp
+public Dictionary<string, object> Stats()
+{
+ lock (_statsLock)
+ {
+ var total = _hits + _misses;
+ var hitRate = total == 0 ? 0.0 : Math.Round(100.0 * _hits / total, 1);
+ var avgLag = _syncLagSamples == 0
+ ? 0.0
+ : Math.Round(_syncLagMsTotal / _syncLagSamples, 2);
+        return new Dictionary<string, object>
+ {
+ ["hits"] = _hits,
+ ["misses"] = _misses,
+ ["hit_rate_pct"] = hitRate,
+ ["prefetched"] = _prefetched,
+ ["sync_events_applied"] = _syncEventsApplied,
+ ["sync_lag_ms_avg"] = avgLag,
+ };
+ }
+}
+```
+
+In production you would emit these as counters and gauges through `Meter`/`Counter` and scrape with Prometheus or OpenTelemetry. The sync-lag metric is the most important: a sudden rise indicates the CDC pipeline is falling behind.
+
+## Prerequisites
+
+Before running the demo, make sure that:
+
+* Redis is running and accessible. By default, the demo connects to `localhost:6379`.
+* The [.NET 8 SDK](https://dotnet.microsoft.com/download) (or newer) is installed:
+
+```bash
+dotnet --version
+```
+
+The project file requires `StackExchange.Redis` 2.7 or later, which `dotnet run` restores automatically on first invocation.
+
+If your Redis server is running elsewhere, start the demo with `--redis-host` and `--redis-port`.
+
+## Running the demo
+
+### Get the source files
+
+The demo consists of five files. Download them from the [`dotnet` source folder](https://github.com/redis/docs/tree/main/content/develop/use-cases/prefetch-cache/dotnet) on GitHub, or grab them with `curl`:
+
+```bash
+mkdir prefetch-cache-demo && cd prefetch-cache-demo
+BASE=https://raw.githubusercontent.com/redis/docs/main/content/develop/use-cases/prefetch-cache/dotnet
+curl -O $BASE/PrefetchCacheDemo.csproj
+curl -O $BASE/PrefetchCache.cs
+curl -O $BASE/MockPrimaryStore.cs
+curl -O $BASE/SyncWorker.cs
+curl -O $BASE/Program.cs
+```
+
+### Start the demo server
+
+From that directory:
+
+```bash
+dotnet run
+```
+
+You should see something like:
+
+```text
+Redis prefetch-cache demo server listening on http://127.0.0.1:8787
+Using Redis at localhost:6379 with cache prefix 'cache:category:' and TTL 3600s
+Prefetched 5 records in 92.4 ms; sync worker running
+```
+
+After starting the server, visit `http://localhost:8787`.
+
+The demo server uses ASP.NET Core minimal APIs and only standard .NET threading primitives:
+
+* `WebApplication.CreateBuilder()` for HTTP routing
+* `BlockingCollection<ChangeEvent>` for the change-event queue
+* `Thread` + `ManualResetEventSlim` for the sync worker
+
+It exposes a small interactive page where you can:
+
+* See which IDs are in the cache and in the primary, side by side
+* Read a category through the cache and confirm every read is a hit
+* Update a field on the primary and watch the sync worker rewrite the cache hash
+* Add and delete categories and watch them appear and disappear from the cache
+* Invalidate one key or clear the entire cache to simulate a sync-pipeline failure
+* Re-prefetch from the primary to recover from a broken cache state
+* Watch the average sync lag, and confirm the primary-read counter stays at one (the startup snapshot) until you re-prefetch — each `/reprefetch` adds another primary read for its snapshot, but normal request traffic never reaches the primary at all
+
+## The mock primary store
+
+To make the demo self-contained, the example includes a `MockPrimaryStore` that stands in for a source-of-truth database
+([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/prefetch-cache/dotnet/MockPrimaryStore.cs)):
+
+```csharp
+public class MockPrimaryStore
+{
+ public MockPrimaryStore(int readLatencyMs = 80) { ... }
+
+    public List<Dictionary<string, string>> ListRecords()
+ {
+ Thread.Sleep(ReadLatencyMs);
+ ...
+ }
+
+ public bool UpdateField(string entityId, string field, string value)
+ {
+ lock (_lock)
+ {
+ ...
+ EmitChangeLocked(ChangeOp.Upsert, entityId, snapshot);
+ }
+ return true;
+ }
+}
+```
+
+Every mutation appends a `ChangeEvent` to an in-process [`BlockingCollection`](https://learn.microsoft.com/dotnet/api/system.collections.concurrent.blockingcollection-1). The sync worker drains the queue with a 50 ms timeout and applies each event to Redis. The emit happens while the mutation lock is held so two concurrent updates cannot interleave their event order on the queue. In a real system this queue is replaced by a CDC pipeline — RDI on Redis Enterprise or Debezium with a Redis consumer on open-source Redis.
+
+## Production usage
+
+This guide uses a deliberately small local demo so you can focus on the prefetch-cache pattern. In production, you will usually want to harden several aspects of it.
+
+### Replace the in-process change queue with a real CDC pipeline
+
+The demo's in-process queue is the simplest possible stand-in for a CDC change feed. In production, the change feed lives outside the application process: an RDI pipeline configured against your primary database, Debezium connectors writing to Kafka or a Redis stream, or your application explicitly publishing change events from the write path. Whatever you choose, the consumer side stays the same — read events, apply them to Redis, advance the offset.
+
+### Use a long safety-net TTL, not a freshness TTL
+
+The TTL on each cache key is a **safety net**: it bounds memory if the sync pipeline silently stops, so a stuck consumer cannot leave stale data in Redis indefinitely. The TTL is not the freshness mechanism — freshness comes from the sync worker, which refreshes the TTL on every add or update event (delete events remove the key). Pick a TTL that is comfortably longer than your worst-case sync lag plus your alerting window, so a transient sync hiccup never expires hot keys.
+
+### Decide what to do on a cache miss
+
+A prefetch cache treats a miss as an error or a missing record. The two reasonable strategies are:
+
+* **Return a 404 to the user.** Appropriate when the cache is authoritative for the lookup — for example, when the user is asking for a category by ID and the ID is not in the cache.
+* **Page on-call.** A sustained miss rate on IDs you know exist is an incident: either the prefetch did not run, or the sync pipeline is broken.
+
+Whichever you choose, do not fall back to the primary on the read path — that is what cache-aside is for, and conflating the two patterns breaks the load-isolation guarantee that prefetch provides.
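+
+As a sketch, an ASP.NET Core minimal-API handler for the 404 strategy might look like the following. The route, the `cache` closure variable, and the `Hit`/`Record` property names on `ReadResult` are illustrative assumptions; adapt them to your own types:
+
+```csharp
+// Hypothetical handler: a miss is a 404 plus a warning, never a
+// fallback query against the primary.
+app.MapGet("/categories/{id}", (string id) =>
+{
+    var result = cache.Get(id);
+    if (!result.Hit)
+    {
+        // A sustained miss rate on known IDs is a sync-pipeline incident.
+        app.Logger.LogWarning("prefetch-cache miss for {Id}", id);
+        return Results.NotFound();
+    }
+    return Results.Ok(result.Record);
+});
+```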
+
+### Bound the working set to what fits in memory
+
+Prefetch only works if the entire dataset fits in Redis memory with headroom. Estimate the size of your reference data, multiply by a growth factor, and confirm the result fits within your Redis instance's `maxmemory` minus what other use cases need. If the working set grows beyond what Redis can hold, switch the dataset to a cache-aside pattern instead — the request path will pay miss latency, but you will not OOM.
+
+### Reconcile periodically against the primary
+
+CDC pipelines are eventually consistent: an event can be lost (broker outage, consumer crash, configuration drift) and the cache can silently diverge from the source. Run a periodic reconciliation job that re-reads all primary records, compares them against the cache, and either re-prefetches or fixes individual entries. Even running it once a day catches drift that ad-hoc inspection would miss.
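+
+A reconciliation pass can reuse the helpers from this guide. The sketch below assumes a `RecordsEqual` comparison helper and a `ChangeEvent` constructor taking op, ID, and fields; treat it as an outline, not the demo's actual code:
+
+```csharp
+// Hypothetical nightly job: re-read the primary, compare each record
+// against its cached hash, and re-apply any entry that has drifted.
+foreach (var record in primary.ListRecords())
+{
+    var entries = db.HashGetAll(CacheKey(record["id"]));
+    var cached = entries.ToDictionary(e => (string)e.Name, e => (string)e.Value);
+    if (!RecordsEqual(record, cached))
+    {
+        cache.ApplyChange(new ChangeEvent(ChangeOp.Upsert, record["id"], record));
+    }
+}
+```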
+
+### Namespace cache keys in shared Redis deployments
+
+If multiple applications share a Redis deployment, prefix cache keys with the application name (`cache:billing:category:{id}`) so different services cannot clobber each other's entries. The helper takes a `prefix` argument exactly for this.
+
+### Prefer async on hot paths
+
+The demo helper is synchronous (`HashGetAll`, `KeyDelete`, etc.) to keep the example compact. .NET's `ThreadPool` grows by only a couple of threads per second under load, so a synchronous helper combined with many concurrent HTTP handlers can starve workers and produce false cache-misses during traffic spikes. The demo works around this by calling `ThreadPool.SetMinThreads(64, 64)` at startup; a production helper would expose `async` methods (`HashGetAllAsync`, `KeyDeleteAsync`), replace blocking waits such as `Thread.Sleep` with `await Task.Delay`, and route requests through an async pipeline end-to-end. That removes the synchronous-blocking risk entirely and is the idiomatic shape for ASP.NET Core handlers.
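+
+An async version of the read path is a direct translation of the synchronous `Get` shown earlier (the method name is illustrative):
+
+```csharp
+// Sketch: same behaviour as Get, but the thread is released while
+// the HGETALL round trip is in flight.
+public async Task<ReadResult> GetAsync(string entityId)
+{
+    var cacheKey = CacheKey(entityId);
+    var sw = System.Diagnostics.Stopwatch.StartNew();
+    var entries = await _db.HashGetAllAsync(cacheKey);
+    sw.Stop();
+    var hit = entries.Length > 0;
+    lock (_statsLock) { if (hit) _hits++; else _misses++; }
+    return new ReadResult(hit ? ToDict(entries) : null, hit, sw.Elapsed.TotalMilliseconds);
+}
+```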
+
+### Use TickCount64 (not TickCount) for any deadline arithmetic
+
+If you add timeout/deadline logic to your sync worker or maintenance handlers, use `Environment.TickCount64`, never `Environment.TickCount`. The 32-bit variant wraps roughly every 24.9 days, and adding a positive offset near the wraparound boundary produces a negative deadline that immediately exits the polling loop. The 64-bit variant would take around 292 million years to wrap, so it has no practical wrap interval.
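+
+The pattern looks like this (`TryApplyPending`, `timeoutMs`, and `pollMs` are illustrative placeholders):
+
+```csharp
+// Deadline arithmetic on the 64-bit counter: immune to the 24.9-day
+// wraparound of Environment.TickCount.
+long deadline = Environment.TickCount64 + timeoutMs;
+while (Environment.TickCount64 < deadline)
+{
+    if (TryApplyPending()) break;
+    Thread.Sleep(pollMs);
+}
+```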
+
+### Inspect cached entries directly in Redis
+
+When testing or troubleshooting, inspect the stored cache keys directly to confirm the bulk load and the sync worker are writing what you expect:
+
+```bash
+redis-cli --scan --pattern 'cache:category:*'
+redis-cli HGETALL cache:category:cat-001
+redis-cli TTL cache:category:cat-001
+```
+
+If a key is missing for an ID that still exists in the primary, the prefetch did not run, the key expired without a sync refresh, or someone invalidated it. If a key is still present for an ID that was deleted in the primary, the delete event has not yet been applied. If the TTL is much lower than the configured safety-net value on a hot key, the sync worker is not keeping up.
+
+## Learn more
+
+* [StackExchange.Redis documentation](https://stackexchange.github.io/StackExchange.Redis/) - Install and use the StackExchange.Redis client
+* [HSET command]({{< relref "/commands/hset" >}}) - Write hash fields
+* [HGETALL command]({{< relref "/commands/hgetall" >}}) - Read every field of a hash
+* [EXPIRE command]({{< relref "/commands/expire" >}}) - Set key expiration in seconds
+* [DEL command]({{< relref "/commands/del" >}}) - Delete a key on invalidation or sync-delete
+* [SCAN command]({{< relref "/commands/scan" >}}) - Iterate the cached keyspace without blocking the server
+* [TTL command]({{< relref "/commands/ttl" >}}) - Inspect remaining safety-net time on a key
+* [Redis Data Integration]({{< relref "/integrate/redis-data-integration" >}}) - Configuration-driven CDC into Redis on Redis Enterprise and Redis Cloud
diff --git a/content/develop/use-cases/prefetch-cache/go/_index.md b/content/develop/use-cases/prefetch-cache/go/_index.md
new file mode 100644
index 0000000000..18844b7313
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/go/_index.md
@@ -0,0 +1,456 @@
+---
+categories:
+- docs
+- develop
+- stack
+- oss
+- rs
+- rc
+description: Implement a Redis prefetch cache in Go with go-redis
+linkTitle: go-redis example (Go)
+title: Redis prefetch cache with go-redis
+weight: 3
+---
+
+This guide shows you how to implement a Redis prefetch cache in Go with [`go-redis`]({{< relref "/develop/clients/go" >}}). It includes a small local web server built with Go's standard `net/http` package so you can watch the cache pre-load at startup, see a background sync worker apply primary mutations within milliseconds, and break the cache to confirm that reads never fall back to the primary.
+
+## Overview
+
+Prefetch caching pre-loads a working set of reference data into Redis before the first request arrives, so every read on the request path is a cache hit. A separate sync worker keeps the cache current as the source of truth changes — there is no fall-back to the primary on the read path.
+
+That gives you:
+
+* Near-100% cache hit ratios for reference and master data
+* Sub-millisecond reads for lookup-heavy paths at peak traffic
+* All reference-data reads offloaded from the primary database
+* Source-database changes propagated into Redis within a few milliseconds
+* A long safety-net TTL that bounds memory if the sync pipeline ever stops
+
+In this example, each cached category is stored as a Redis hash under a key like `cache:category:{id}`. The hash holds the category fields (`id`, `name`, `display_order`, `featured`, `parent_id`) and the key has a long safety-net TTL that the sync worker refreshes on every add or update event. Delete events remove the cache key outright, so there is no TTL to refresh in that case.
+
+## How it works
+
+The flow has three independent paths:
+
+1. **On startup**, the demo server calls `cache.BulkLoad(ctx, primary.ListRecords())`, which pipelines `DEL` + `HSET` + `EXPIRE` for every record in one round trip.
+2. **On every read**, the application calls `cache.Get(ctx, id)`, which runs `HGETALL` against Redis only. A miss is treated as an error, not a trigger to query the primary.
+3. **On every primary mutation**, the primary appends a change event to an in-process channel. A sync-worker goroutine drains the channel and calls `cache.ApplyChange(ctx, event)`. For an `upsert`, the helper rewrites the cache hash and refreshes the safety-net TTL; for a `delete`, it removes the cache key.
+
+In a real system the in-process channel is replaced by a CDC pipeline — [Redis Data Integration]({{< relref "/integrate/redis-data-integration" >}}), Debezium plus a lightweight consumer, or an equivalent tool that tails the source's binlog/WAL and pushes events into Redis.
+
+## The prefetch-cache helper
+
+The `PrefetchCache` type wraps the cache operations
+([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/prefetch-cache/go/cache.go)):
+
+```go
+package main
+
+import (
+    "context"
+    "time"
+
+    "github.com/redis/go-redis/v9"
+    "prefetchcache"
+)
+
+func main() {
+ client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
+ primary := prefetchcache.NewMockPrimaryStore(80)
+ cache := prefetchcache.NewPrefetchCache(client, "cache:category:", 3600)
+
+ ctx := context.Background()
+
+ // Pre-load every primary record into Redis in one pipelined round trip.
+ _, _ = cache.BulkLoad(ctx, primary.ListRecords())
+
+ // Start the sync worker so primary mutations propagate into Redis.
+ sync := prefetchcache.NewSyncWorker(primary, cache)
+ sync.Start()
+ defer sync.Stop(2 * time.Second)
+
+ // Read paths now go to Redis only.
+ result, _ := cache.Get(ctx, "cat-001")
+ _ = result
+}
+```
+
+### Data model
+
+Each cached category is stored in a Redis hash:
+
+```text
+cache:category:cat-001
+ id = cat-001
+ name = Beverages
+ display_order = 1
+ featured = true
+ parent_id =
+```
+
+The implementation uses:
+
+* [`HSET`]({{< relref "/commands/hset" >}}) + [`EXPIRE`]({{< relref "/commands/expire" >}}), pipelined, for the bulk load and every sync event
+* [`HGETALL`]({{< relref "/commands/hgetall" >}}) on the read path
+* [`DEL`]({{< relref "/commands/del" >}}) for sync-delete events and explicit invalidation
+* [`SCAN`]({{< relref "/commands/scan" >}}) to enumerate the cached keyspace and to clear the prefix
+* [`TTL`]({{< relref "/commands/ttl" >}}) to surface remaining safety-net time in the demo UI
+
+## Bulk load on startup
+
+`BulkLoad` pipelines a `DEL` + `HSET` + `EXPIRE` triple for every record. The pipeline is sent in a single round trip, so loading thousands of records takes one network RTT plus the time Redis spends executing the commands locally — typically tens of milliseconds even for a large reference table:
+
+```go
+func (c *PrefetchCache) BulkLoad(ctx context.Context, records []map[string]string) (int, error) {
+ loaded := 0
+ pipe := c.client.Pipeline()
+ for _, record := range records {
+ id := record["id"]
+ if id == "" {
+ continue
+ }
+ cacheKey := c.cacheKey(id)
+ pipe.Del(ctx, cacheKey)
+ pipe.HSet(ctx, cacheKey, hashFields(record)...)
+ pipe.Expire(ctx, cacheKey, time.Duration(c.ttlSeconds)*time.Second)
+ loaded++
+ }
+ if loaded > 0 {
+ if _, err := pipe.Exec(ctx); err != nil {
+ return 0, err
+ }
+ }
+ return loaded, nil
+}
+```
+
+The pipeline uses `client.Pipeline()` (non-transactional) on the **startup** path: nothing is reading the cache yet, the records do not need to be applied atomically as a set, and skipping `MULTI`/`EXEC` keeps the bulk load fast. The same method is used for the live `/reprefetch` reload, which is safe because the demo pauses the sync worker around the clear-and-reload sequence — see [Re-prefetch under load](#re-prefetch-under-load) below. If you call `BulkLoad` directly from your own code on a cache that is already serving reads, either pause your writers first or rewrite it with `client.TxPipeline()` so callers cannot observe a half-loaded record.
+
+## Reads from Redis only
+
+`Get` runs `HGETALL` and returns the cached hash. **It does not fall back to the primary on a miss.** In a healthy system, a miss never happens; if it does, the application surfaces it as an error and treats it as a sync-pipeline incident:
+
+```go
+func (c *PrefetchCache) Get(ctx context.Context, id string) (GetResult, error) {
+ cacheKey := c.cacheKey(id)
+ started := time.Now()
+ cached, err := c.client.HGetAll(ctx, cacheKey).Result()
+ latencyMs := float64(time.Since(started).Microseconds()) / 1000.0
+ if err != nil {
+ return GetResult{RedisLatencyMs: latencyMs}, err
+ }
+ if len(cached) > 0 {
+ c.recordHit()
+ return GetResult{Record: cached, Hit: true, RedisLatencyMs: latencyMs}, nil
+ }
+ c.recordMiss()
+ return GetResult{Record: nil, Hit: false, RedisLatencyMs: latencyMs}, nil
+}
+```
+
+This is the key behavioural difference from [cache-aside]({{< relref "/develop/use-cases/cache-aside" >}}): the request path never touches the primary, so reference-data reads cannot contribute to primary database load.
+
+## Applying sync events
+
+The sync worker calls `ApplyChange` for every primary mutation. For an `upsert`, the helper rewrites the cache hash and refreshes the safety-net TTL in one pipelined transaction so the cache never holds a stale mix of old and new fields. For a `delete`, it removes the cache key:
+
+```go
+func (c *PrefetchCache) ApplyChange(ctx context.Context, change Change) error {
+ if change.ID == "" {
+ return nil
+ }
+ cacheKey := c.cacheKey(change.ID)
+
+ switch change.Op {
+ case ChangeOpUpsert:
+ if len(change.Fields) == 0 {
+ // Malformed upsert with no fields. Skip rather than crash
+ // the sync worker: HSET with an empty mapping errors, and
+ // there's nothing to write anyway.
+ return nil
+ }
+ pipe := c.client.TxPipeline()
+ pipe.Del(ctx, cacheKey)
+ pipe.HSet(ctx, cacheKey, hashFields(change.Fields)...)
+ pipe.Expire(ctx, cacheKey, time.Duration(c.ttlSeconds)*time.Second)
+ if _, err := pipe.Exec(ctx); err != nil {
+ return err
+ }
+ case ChangeOpDelete:
+ if err := c.client.Del(ctx, cacheKey).Err(); err != nil {
+ return err
+ }
+ default:
+ return nil
+ }
+ return nil
+}
+```
+
+The `DEL` before the `HSET` ensures the cached hash contains exactly the fields the primary record has now — fields that have been dropped from the primary will not linger in Redis. `TxPipeline` wraps the three commands in `MULTI`/`EXEC` so concurrent readers can never observe the half-written intermediate state.
+
+## The sync worker
+
+`SyncWorker` runs a single goroutine that blocks on the primary's change channel with a short timeout. Every change is applied to Redis as soon as it arrives
+([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/prefetch-cache/go/sync_worker.go)):
+
+```go
+func (w *SyncWorker) run(ctx context.Context, done chan struct{}) {
+ defer close(done)
+ for {
+ select {
+ case <-ctx.Done():
+ return
+ default:
+ }
+
+ // ... park here if paused ...
+
+ change, ok := w.primary.NextChange(w.pollTimeout)
+ if !ok {
+ continue
+ }
+ if err := w.cache.ApplyChange(ctx, change); err != nil {
+ log.Printf("[sync] failed to apply %s %s: %v", change.Op, change.ID, err)
+ }
+ }
+}
+```
+
+Pause and resume are coordinated through two channels stored on the worker:
+
+* `pausedIdle` is closed by the worker when the run loop has parked itself. `Pause()` waits on this channel so it can prove no `ApplyChange` is in flight before returning.
+* `resumeCh` is closed by `Resume()` to wake the parked select. Both channels are replaced with fresh values on each `Pause()` so a stale `Resume` from a previous cycle cannot prematurely unblock the next pause.
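+
+A stripped-down, self-contained sketch of this channel discipline follows. The names and the integer "events" are illustrative, not the demo's actual API; the real worker also handles stop signals and poll timeouts:
+
+```go
+package main
+
+import (
+	"fmt"
+	"sync"
+)
+
+type worker struct {
+	mu         sync.Mutex
+	pauseCh    chan struct{} // closed by Pause to request a park
+	pausedIdle chan struct{} // closed by the loop once it has parked
+	resumeCh   chan struct{} // closed by Resume to wake the park
+	events     chan int
+	applied    int
+	doneCh     chan struct{}
+}
+
+func newWorker() *worker {
+	return &worker{
+		pauseCh:    make(chan struct{}),
+		pausedIdle: make(chan struct{}),
+		resumeCh:   make(chan struct{}),
+		events:     make(chan int, 16),
+		doneCh:     make(chan struct{}),
+	}
+}
+
+func (w *worker) run() {
+	defer close(w.doneCh)
+	for {
+		// Drain any ready events before checking for a pause request.
+		select {
+		case ev, ok := <-w.events:
+			if !ok {
+				return
+			}
+			w.applied += ev // stand-in for cache.ApplyChange
+			continue
+		default:
+		}
+		w.mu.Lock()
+		pauseCh, pausedIdle, resumeCh := w.pauseCh, w.pausedIdle, w.resumeCh
+		w.mu.Unlock()
+		select {
+		case <-pauseCh:
+			close(pausedIdle) // prove no ApplyChange is in flight
+			<-resumeCh        // park until Resume
+		case ev, ok := <-w.events:
+			if !ok {
+				return
+			}
+			w.applied += ev
+		}
+	}
+}
+
+func (w *worker) Pause() {
+	w.mu.Lock()
+	pauseCh, pausedIdle := w.pauseCh, w.pausedIdle
+	w.mu.Unlock()
+	close(pauseCh)
+	<-pausedIdle // wait until the loop has parked
+}
+
+func (w *worker) Resume() {
+	w.mu.Lock()
+	resumeCh := w.resumeCh
+	// Fresh channels for the next cycle, so a stale close cannot
+	// unblock a future pause prematurely.
+	w.pauseCh = make(chan struct{})
+	w.pausedIdle = make(chan struct{})
+	w.resumeCh = make(chan struct{})
+	w.mu.Unlock()
+	close(resumeCh)
+}
+
+func main() {
+	w := newWorker()
+	go w.run()
+	w.events <- 1
+	w.events <- 2
+	w.Pause()     // no ApplyChange is in flight once this returns
+	w.events <- 3 // queued during the pause, not lost
+	w.Resume()
+	close(w.events)
+	<-w.doneCh
+	fmt.Println("applied total:", w.applied)
+}
+```
+
+Events sent while the worker is parked simply sit in the buffered channel and are applied in order after `Resume()`, which is the "no event is lost" property the demo relies on.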
+
+In production this loop is replaced by a CDC consumer reading from RDI's Redis output stream, Debezium's Kafka topic, or an equivalent change feed. The shape stays the same: drain events, apply them to Redis, advance the consumer offset.
+
+## Invalidation and re-prefetch
+
+Two helpers exist for testing and recovery:
+
+* `Invalidate(ctx, id)` deletes a single cache key. The demo uses it to simulate a sync-pipeline failure on one record.
+* `Clear(ctx)` runs `SCAN MATCH cache:category:*` and deletes every key under the prefix. The demo uses it to simulate a full cache loss.
+
+In both cases, the recovery path is to call `BulkLoad(ctx, primary.ListRecords())` again — re-prefetching from the primary. The demo exposes this as the "Re-prefetch" button so you can see the cache come back to a fully-warm state in one operation.
+
+### Re-prefetch under load
+
+`Clear()` and `BulkLoad()` are not atomic against the sync worker. If a change event arrives between the snapshot (`primary.ListRecords()`) and the bulk write, the bulk write can overwrite a newer value; if a change event arrives between `Clear()`'s `SCAN` and `DEL`, the cleared entry can immediately be recreated. The demo's `/clear` and `/reprefetch` handlers solve this by pausing the sync worker around the operation:
+
+```go
+s.sync.Pause(2 * time.Second)
+_, _ = s.cache.Clear(ctx)
+loaded, _ := s.cache.BulkLoad(ctx, s.primary.ListRecords())
+s.sync.Resume()
+```
+
+`Pause()` waits for the worker goroutine to finish whatever event it is currently applying, parks the run loop, and returns. Change events that arrive during the pause sit on the primary's channel and apply in order once `Resume()` is called, so no event is lost. The demo also wraps the pause/resume pair in a `sync.Mutex` so two concurrent admin callers cannot interleave their pause/resume cycles.
+
+## Hit/miss accounting
+
+The helper keeps in-process counters for hits, misses, prefetched records, sync events applied, and the average lag between a primary change and its application to Redis. The demo UI surfaces these so you can confirm the cache is absorbing all reads and the sync worker is keeping up:
+
+```go
+func (c *PrefetchCache) Stats() map[string]any {
+ c.mu.Lock()
+ defer c.mu.Unlock()
+ total := c.hits + c.misses
+ hitRate := 0.0
+ if total > 0 {
+ hitRate = roundTo(100.0*float64(c.hits)/float64(total), 1)
+ }
+ avgLag := 0.0
+ if c.syncLagSamples > 0 {
+ avgLag = roundTo(c.syncLagMsTotal/float64(c.syncLagSamples), 2)
+ }
+ return map[string]any{
+ "hits": c.hits,
+ "misses": c.misses,
+ "hit_rate_pct": hitRate,
+ "prefetched": c.prefetched,
+ "sync_events_applied": c.syncEventsApplied,
+ "sync_lag_ms_avg": avgLag,
+ }
+}
+```
+
+In production you would emit these as Prometheus counters and gauges. The sync-lag metric is the most important: a sudden rise indicates the CDC pipeline is falling behind.
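+
+As one minimal option, the counters can be published through the standard library's [`expvar`](https://pkg.go.dev/expvar) package and scraped from its JSON `/debug/vars` endpoint; a full Prometheus setup would use a client library instead. The variable and function names here are illustrative:
+
+```go
+package main
+
+import (
+	"expvar"
+	"fmt"
+)
+
+// Publish the helper's counters as process-wide expvar metrics.
+var (
+	cacheHits   = expvar.NewInt("prefetch_cache_hits")
+	cacheMisses = expvar.NewInt("prefetch_cache_misses")
+)
+
+// hitRatePct mirrors the calculation in Stats() above.
+func hitRatePct(hits, misses int64) float64 {
+	total := hits + misses
+	if total == 0 {
+		return 0
+	}
+	return 100.0 * float64(hits) / float64(total)
+}
+
+func main() {
+	cacheHits.Add(3)
+	cacheMisses.Add(1)
+	fmt.Printf("hit rate: %.1f%%\n", hitRatePct(cacheHits.Value(), cacheMisses.Value()))
+}
+```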
+
+## Prerequisites
+
+* Redis running and accessible. By default, the demo connects to `localhost:6379`.
+* Go 1.21 or later.
+* The `go-redis` client. The included `go.mod` pins:
+
+ ```text
+ require github.com/redis/go-redis/v9 v9.18.0
+ ```
+
+If your Redis server is running elsewhere, start the demo with `--redis-host` and `--redis-port`.
+
+## Running the demo
+
+### Get the source files
+
+The demo consists of six files. Download them from the [`go` source folder](https://github.com/redis/docs/tree/main/content/develop/use-cases/prefetch-cache/go) on GitHub, or grab them with `curl`:
+
+```bash
+mkdir prefetch-cache-demo && cd prefetch-cache-demo
+BASE=https://raw.githubusercontent.com/redis/docs/main/content/develop/use-cases/prefetch-cache/go
+curl -O $BASE/cache.go
+curl -O $BASE/primary.go
+curl -O $BASE/sync_worker.go
+curl -O $BASE/demo_server.go
+curl -O $BASE/go.mod
+curl -O $BASE/go.sum
+```
+
+### Start the demo server
+
+The helper, mock primary, sync worker, and demo handlers all live in `package prefetchcache`. Go's `package main` can't live in the same directory as another package, so create a tiny `main.go` shim in a subdirectory that calls into the package:
+
+```bash
+mkdir -p cmd/demo
+cat > cmd/demo/main.go <<'EOF'
+package main
+
+import "prefetchcache"
+
+func main() { prefetchcache.RunDemoServer() }
+EOF
+```
+
+Then build and run:
+
+```bash
+go mod tidy
+go run ./cmd/demo
+```
+
+You should see something like:
+
+```text
+Redis prefetch-cache demo server listening on http://127.0.0.1:8784
+Using Redis at localhost:6379 with cache prefix 'cache:category:' and TTL 3600s
+Prefetched 5 records in 83.0 ms; sync worker running
+```
+
+After starting the server, visit [http://localhost:8784](http://localhost:8784).
+
+The demo server uses only Go's standard library plus `go-redis`:
+
+* [`net/http`](https://pkg.go.dev/net/http) for the web server
+* [`flag`](https://pkg.go.dev/flag) for CLI flags
+* Goroutines, channels, and `sync.Mutex` for the sync worker and stats counters
+
+It exposes a small interactive page where you can:
+
+* See which IDs are in the cache and in the primary, side by side
+* Read a category through the cache and confirm every read is a hit
+* Update a field on the primary and watch the sync worker rewrite the cache hash
+* Add and delete categories and watch them appear and disappear from the cache
+* Invalidate one key or clear the entire cache to simulate a sync-pipeline failure
+* Re-prefetch from the primary to recover from a broken cache state
+* Watch the average sync lag, and confirm primary reads stay at one until you re-prefetch: each `/reprefetch` adds one primary read for the snapshot, while normal request traffic never reaches the primary at all
+
+If you want to run the demo against a non-default cache prefix or port, pass `--port` and `--cache-prefix`:
+
+```bash
+go run ./cmd/demo --port 8784 --cache-prefix 'cache:category:'
+```
+
+## The mock primary store
+
+To make the demo self-contained, the example includes a `MockPrimaryStore` that stands in for a source-of-truth database
+([source](https://github.com/redis/docs/blob/main/content/develop/use-cases/prefetch-cache/go/primary.go)):
+
+```go
+type MockPrimaryStore struct {
+ readLatencyMs int
+
+ mu sync.Mutex
+ reads int
+ changes chan Change
+ records map[string]map[string]string
+}
+
+func (p *MockPrimaryStore) ListRecords() []map[string]string {
+ time.Sleep(time.Duration(p.readLatencyMs) * time.Millisecond)
+ // ... return a deep copy of every record under p.mu ...
+}
+
+func (p *MockPrimaryStore) UpdateField(id, field, value string) bool {
+ p.mu.Lock()
+ defer p.mu.Unlock()
+ rec, ok := p.records[id]
+ if !ok {
+ return false
+ }
+ rec[field] = value
+ p.emitChangeLocked(ChangeOpUpsert, id, copyRecord(rec))
+ return true
+}
+```
+
+Every mutation appends a change event to an in-process buffered `chan Change`. The sync worker drains the channel with a 50 ms timeout via `NextChange` and applies each event to Redis. The change event is **emitted while the record lock is still held** (`emitChangeLocked` runs inside the `mu.Lock()` block) so two concurrent `UpdateField` calls cannot produce out-of-order events on the channel.
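
The ordering guarantee reduces to a small sketch (types simplified from the demo's): because the event is enqueued before the mutex is released, channel order always matches mutation order.

```go
package main

import (
	"fmt"
	"sync"
)

// Change is a minimal stand-in for the demo's change-event type.
type Change struct {
	ID    string
	Value string
}

// store emits each change onto a buffered channel while the record
// lock is still held, mirroring emitChangeLocked in the demo.
type store struct {
	mu      sync.Mutex
	records map[string]string
	changes chan Change
}

func (s *store) update(id, value string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.records[id] = value
	// Emitting inside the lock means no other goroutine can enqueue
	// a later mutation's event ahead of this one.
	s.changes <- Change{ID: id, Value: value}
}

func main() {
	s := &store{records: map[string]string{}, changes: make(chan Change, 10)}
	s.update("cat-001", "a")
	s.update("cat-001", "b")
	first, second := <-s.changes, <-s.changes
	fmt.Println(first.Value, second.Value) // a b
}
```

If the emit moved outside the lock, two concurrent `update` calls could release the mutex in one order and reach the channel send in the other, and the consumer would apply the older value last.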
+
+In a real system this channel is replaced by a CDC pipeline — RDI on Redis Enterprise or Debezium with a Redis consumer on open-source Redis.
+
+## Production usage
+
+This guide uses a deliberately small local demo so you can focus on the prefetch-cache pattern. In production, you will usually want to harden several aspects of it.
+
+### Replace the in-process change channel with a real CDC pipeline
+
+The demo's in-process channel is the simplest possible stand-in for a CDC change feed. In production, the change feed lives outside the application process: an RDI pipeline configured against your primary database, Debezium connectors writing to Kafka or a Redis stream, or your application explicitly publishing change events from the write path. Whatever you choose, the consumer side stays the same — read events, apply them to Redis, advance the offset.
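
As a rough sketch of that consumer contract (the names and the in-memory map standing in for Redis are illustrative, not the demo's API):

```go
package main

import "fmt"

// Event is a hypothetical CDC record; Offset is the position to
// acknowledge after a successful apply.
type Event struct {
	Offset int64
	ID     string
	Fields map[string]string // nil or empty means a malformed upsert
}

// applyAll drains events in order: apply each one, then advance the
// committed offset. Malformed events (no fields) are skipped but still
// acknowledged so the consumer does not wedge on a bad record.
func applyAll(events []Event, cache map[string]map[string]string) int64 {
	var committed int64
	for _, ev := range events {
		if len(ev.Fields) > 0 {
			cache[ev.ID] = ev.Fields
		}
		committed = ev.Offset // acknowledge only after the write succeeded
	}
	return committed
}

func main() {
	cache := map[string]map[string]string{}
	events := []Event{
		{Offset: 1, ID: "cat-001", Fields: map[string]string{"name": "books"}},
		{Offset: 2, ID: "cat-002", Fields: nil}, // malformed: skipped
	}
	fmt.Println(applyAll(events, cache), len(cache)) // 2 1
}
```

Whether the events arrive from RDI, Kafka, or a Redis stream, only the read and acknowledge calls change; the apply-then-commit shape stays the same.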
+
+### Use a long safety-net TTL, not a freshness TTL
+
+The TTL on each cache key is a **safety net**: it bounds memory if the sync pipeline silently stops, so a stuck consumer cannot leave stale data in Redis indefinitely. The TTL is not the freshness mechanism — freshness comes from the sync worker, which refreshes the TTL on every add or update event (delete events remove the key). Pick a TTL that is comfortably longer than your worst-case sync lag plus your alerting window, so a transient sync hiccup never expires hot keys.
+
+### Decide what to do on a cache miss
+
+A prefetch cache treats a miss as an error or a missing record. The two reasonable strategies are:
+
+* **Return a 404 to the user.** Appropriate when the cache is authoritative for the lookup — for example, when the user is asking for a category by ID and the ID is not in the cache.
+* **Page on-call.** A sustained miss rate on IDs you know exist is an incident: either the prefetch did not run, or the sync pipeline is broken.
+
+Whichever you choose, do not fall back to the primary on the read path — that is what cache-aside is for, and conflating the two patterns breaks the load-isolation guarantee that prefetch provides.
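
A minimal sketch of the 404 strategy, with a hypothetical `missCounter` hook standing in for real metrics:

```go
package main

import (
	"fmt"
	"net/http"
)

// onMiss implements the "cache is authoritative" strategy: count the
// miss for alerting and return 404. It never re-queries the primary.
func onMiss(id string, missCounter func()) (status int, body string) {
	missCounter() // a sustained miss rate on known IDs should page on-call
	return http.StatusNotFound, fmt.Sprintf("category %q not found", id)
}

func main() {
	misses := 0
	status, body := onMiss("cat-404", func() { misses++ })
	fmt.Println(status, misses, body)
}
```

The important property is what the function does not contain: there is no primary client in scope, so a fall-back cannot creep in later.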
+
+### Bound the working set to what fits in memory
+
+Prefetch only works if the entire dataset fits in Redis memory with headroom. Estimate the size of your reference data, multiply by a growth factor, and confirm the result fits within your Redis instance's `maxmemory` minus what other use cases need. If the working set grows beyond what Redis can hold, switch the dataset to a cache-aside pattern instead — the request path will pay miss latency, but you will not OOM.
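
A back-of-envelope check along those lines (all names and numbers hypothetical) could be:

```go
package main

import "fmt"

// fitsInMemory checks whether the projected working set -- average
// hash size times record count, times a growth factor -- fits in the
// slice of maxmemory reserved for this use case.
func fitsInMemory(avgEntryBytes, records int64, growth float64, budgetBytes int64) bool {
	projected := int64(float64(avgEntryBytes*records) * growth)
	return projected <= budgetBytes
}

func main() {
	// 2 KiB per hash, 100k records, 2x growth, 1 GiB budget.
	fmt.Println(fitsInMemory(2048, 100_000, 2.0, 1<<30)) // true
}
```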
+
+### Reconcile periodically against the primary
+
+CDC pipelines are eventually consistent: an event can be lost (broker outage, consumer crash, configuration drift) and the cache can silently diverge from the source. Run a periodic reconciliation job that re-reads all primary records, compares them against the cache, and either re-prefetches or fixes individual entries. Even running it once a day catches drift that ad-hoc inspection would miss.
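
An in-memory sketch of the comparison step (a real job would `HGETALL` each cached key; the maps here are illustrative):

```go
package main

import (
	"fmt"
	"reflect"
)

// driftedIDs compares a fresh primary snapshot against the cached
// records and returns the IDs that need a re-prefetch: entries that
// are missing or stale in the cache, plus entries deleted in the
// primary but still cached.
func driftedIDs(primary, cache map[string]map[string]string) []string {
	var out []string
	for id, rec := range primary {
		if cached, ok := cache[id]; !ok || !reflect.DeepEqual(cached, rec) {
			out = append(out, id) // missing or stale in the cache
		}
	}
	for id := range cache {
		if _, ok := primary[id]; !ok {
			out = append(out, id) // deleted in the primary, still cached
		}
	}
	return out
}

func main() {
	primary := map[string]map[string]string{"a": {"name": "x"}, "b": {"name": "y"}}
	cache := map[string]map[string]string{"a": {"name": "x"}, "c": {"name": "z"}}
	fmt.Println(len(driftedIDs(primary, cache))) // 2: "b" missing, "c" orphaned
}
```

Feeding the drifted IDs back through `ApplyChange`-style writes fixes individual entries without paying for a full re-prefetch.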
+
+### Namespace cache keys in shared Redis deployments
+
+If multiple applications share a Redis deployment, prefix cache keys with the application name (`cache:billing:category:{id}`) so different services cannot clobber each other's entries. The helper takes a `prefix` constructor argument exactly for this.
+
+### Wire shutdown through `context.Context`
+
+The sync worker runs on its own goroutine that blocks in `NextChange` (a channel select with a 50 ms timeout). The demo's `RunDemoServer` calls `syncWorker.Stop(2 * time.Second)` on SIGINT/SIGTERM, which cancels the worker's internal context and joins the goroutine. Wire your real sync worker to your service's shutdown context so `SIGTERM` produces a clean drain instead of a hard kill.
+
+### Inspect cached entries directly in Redis
+
+When testing or troubleshooting, inspect the stored cache keys directly to confirm the bulk load and the sync worker are writing what you expect:
+
+```bash
+redis-cli --scan --pattern 'cache:category:*'
+redis-cli HGETALL cache:category:cat-001
+redis-cli TTL cache:category:cat-001
+```
+
+If a key is missing for an ID that still exists in the primary, the prefetch did not run, the key expired without a sync refresh, or someone invalidated it. If a key is still present for an ID that was deleted in the primary, the delete event has not yet been applied. If the TTL is much lower than the configured safety-net value on a hot key, the sync worker is not keeping up.
+
+## Learn more
+
+* [go-redis guide]({{< relref "/develop/clients/go" >}}) - Install and use the Go Redis client
+* [HSET command]({{< relref "/commands/hset" >}}) - Write hash fields
+* [HGETALL command]({{< relref "/commands/hgetall" >}}) - Read every field of a hash
+* [EXPIRE command]({{< relref "/commands/expire" >}}) - Set key expiration in seconds
+* [DEL command]({{< relref "/commands/del" >}}) - Delete a key on invalidation or sync-delete
+* [SCAN command]({{< relref "/commands/scan" >}}) - Iterate the cached keyspace without blocking the server
+* [TTL command]({{< relref "/commands/ttl" >}}) - Inspect remaining safety-net time on a key
+* [Redis Data Integration]({{< relref "/integrate/redis-data-integration" >}}) - Configuration-driven CDC into Redis on Redis Enterprise and Redis Cloud
diff --git a/content/develop/use-cases/prefetch-cache/go/cache.go b/content/develop/use-cases/prefetch-cache/go/cache.go
new file mode 100644
index 0000000000..a473c2f719
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/go/cache.go
@@ -0,0 +1,354 @@
+// Redis prefetch-cache helper.
+//
+// Each cached entity is stored as a Redis hash under cache:{prefix}:{id}
+// with a long safety-net TTL that bounds memory if the sync pipeline
+// ever stops, but is not the freshness mechanism. Freshness comes from
+// the ApplyChange path, which the sync worker calls every time a
+// primary mutation arrives.
+//
+// Reads run HGETALL against Redis only. A miss is not a fall-back
+// trigger -- the application treats it as an error or a deliberate
+// Invalidate for testing. In production a sustained miss rate means the
+// prefetch or the sync pipeline is broken, not that the primary should
+// be re-queried on the request path.
+package prefetchcache
+
+import (
+ "context"
+ "sort"
+ "sync"
+ "time"
+
+ "github.com/redis/go-redis/v9"
+)
+
+// PrefetchCache is a prefetch-cache helper backed by Redis hashes with
+// a safety-net TTL.
+type PrefetchCache struct {
+ client *redis.Client
+ prefix string
+ ttlSeconds int
+
+ mu sync.Mutex
+ hits int
+ misses int
+ prefetched int
+ syncEventsApplied int
+ syncLagMsTotal float64
+ syncLagSamples int
+}
+
+// NewPrefetchCache returns a PrefetchCache. Pass an empty prefix to use
+// the default "cache:category:" and 0 for ttlSeconds to use the default
+// 3600.
+func NewPrefetchCache(client *redis.Client, prefix string, ttlSeconds int) *PrefetchCache {
+ if prefix == "" {
+ prefix = "cache:category:"
+ }
+ if ttlSeconds == 0 {
+ ttlSeconds = 3600
+ }
+ return &PrefetchCache{
+ client: client,
+ prefix: prefix,
+ ttlSeconds: ttlSeconds,
+ }
+}
+
+// Prefix returns the configured cache-key prefix.
+func (c *PrefetchCache) Prefix() string { return c.prefix }
+
+// TTLSeconds returns the configured safety-net TTL in seconds.
+func (c *PrefetchCache) TTLSeconds() int { return c.ttlSeconds }
+
+func (c *PrefetchCache) cacheKey(id string) string { return c.prefix + id }
+
+func (c *PrefetchCache) stripPrefix(key string) string {
+ if len(key) >= len(c.prefix) && key[:len(c.prefix)] == c.prefix {
+ return key[len(c.prefix):]
+ }
+ return key
+}
+
+// BulkLoad pipelines DEL + HSET + EXPIRE for every record. Returns the
+// number of records loaded.
+//
+// The pipeline is non-transactional: it is fast on startup (when
+// nothing is reading the cache) and on the live /reprefetch path (when
+// the demo pauses the sync worker around the call). Calling BulkLoad
+// on a cache that is actively being read and written to can briefly
+// expose a key that has been deleted but not yet rewritten; pause the
+// writers first or rewrite this with TxPipeline if that matters.
+func (c *PrefetchCache) BulkLoad(ctx context.Context, records []map[string]string) (int, error) {
+ loaded := 0
+ pipe := c.client.Pipeline()
+ for _, record := range records {
+ id := record["id"]
+ if id == "" {
+ continue
+ }
+ cacheKey := c.cacheKey(id)
+ pipe.Del(ctx, cacheKey)
+ fields := hashFields(record)
+ pipe.HSet(ctx, cacheKey, fields...)
+ pipe.Expire(ctx, cacheKey, time.Duration(c.ttlSeconds)*time.Second)
+ loaded++
+ }
+ if loaded > 0 {
+ if _, err := pipe.Exec(ctx); err != nil {
+ return 0, err
+ }
+ }
+ c.mu.Lock()
+ c.prefetched += loaded
+ c.mu.Unlock()
+ return loaded, nil
+}
+
+// GetResult bundles the record, hit/miss flag, and Redis-side latency
+// for a Get call.
+type GetResult struct {
+ Record map[string]string
+ Hit bool
+ RedisLatencyMs float64
+}
+
+// Get runs HGETALL against Redis and returns the cached hash with the
+// hit flag and Redis-side latency in milliseconds.
+//
+// Prefetch-cache reads do not fall back to the primary. A miss is a
+// signal that the cache is incomplete, not a trigger to re-query the
+// source. The caller decides how to surface it.
+func (c *PrefetchCache) Get(ctx context.Context, id string) (GetResult, error) {
+ cacheKey := c.cacheKey(id)
+ started := time.Now()
+ cached, err := c.client.HGetAll(ctx, cacheKey).Result()
+ latencyMs := float64(time.Since(started).Microseconds()) / 1000.0
+ if err != nil {
+ return GetResult{RedisLatencyMs: latencyMs}, err
+ }
+ if len(cached) > 0 {
+ c.mu.Lock()
+ c.hits++
+ c.mu.Unlock()
+ return GetResult{Record: cached, Hit: true, RedisLatencyMs: latencyMs}, nil
+ }
+ c.mu.Lock()
+ c.misses++
+ c.mu.Unlock()
+ return GetResult{Record: nil, Hit: false, RedisLatencyMs: latencyMs}, nil
+}
+
+// ApplyChange applies a primary change event to Redis.
+//
+// For an upsert, the helper rewrites the cache hash and refreshes the
+// safety-net TTL in one transactional pipeline so the cache never holds
+// a stale mix of old and new fields. For a delete, it removes the cache
+// key. An upsert with no fields is dropped silently: HSET with an empty
+// mapping errors in most clients, and there is nothing to write.
+func (c *PrefetchCache) ApplyChange(ctx context.Context, change Change) error {
+ if change.ID == "" {
+ return nil
+ }
+ cacheKey := c.cacheKey(change.ID)
+
+ switch change.Op {
+ case ChangeOpUpsert:
+ if len(change.Fields) == 0 {
+ // Malformed upsert with no fields. Skip rather than
+ // crash the sync worker: HSET with an empty mapping
+ // errors, and there's nothing to write anyway. A real
+ // CDC consumer would route this to a dead-letter queue
+ // and alert; the demo just drops it.
+ return nil
+ }
+ pipe := c.client.TxPipeline()
+ pipe.Del(ctx, cacheKey)
+ pipe.HSet(ctx, cacheKey, hashFields(change.Fields)...)
+ pipe.Expire(ctx, cacheKey, time.Duration(c.ttlSeconds)*time.Second)
+ if _, err := pipe.Exec(ctx); err != nil {
+ return err
+ }
+ case ChangeOpDelete:
+ if err := c.client.Del(ctx, cacheKey).Err(); err != nil {
+ return err
+ }
+ default:
+ return nil
+ }
+
+ c.mu.Lock()
+ c.syncEventsApplied++
+ if change.TimestampMs > 0 {
+ nowMs := float64(time.Now().UnixNano()) / 1e6
+ lag := nowMs - change.TimestampMs
+ if lag < 0 {
+ lag = 0
+ }
+ c.syncLagMsTotal += lag
+ c.syncLagSamples++
+ }
+ c.mu.Unlock()
+ return nil
+}
+
+// Invalidate deletes one cache key. Returns true if a key was removed.
+// Demo-only: simulates a broken sync pipeline.
+func (c *PrefetchCache) Invalidate(ctx context.Context, id string) (bool, error) {
+ n, err := c.client.Del(ctx, c.cacheKey(id)).Result()
+ if err != nil {
+ return false, err
+ }
+ return n == 1, nil
+}
+
+// Clear deletes every key under this cache's prefix using SCAN + DEL in
+// batches. Returns the number of keys deleted.
+func (c *PrefetchCache) Clear(ctx context.Context) (int, error) {
+ var (
+ cursor uint64
+ deleted int
+ )
+ for {
+ keys, next, err := c.client.Scan(ctx, cursor, c.prefix+"*", 500).Result()
+ if err != nil {
+ return deleted, err
+ }
+ if len(keys) > 0 {
+ n, err := c.client.Del(ctx, keys...).Result()
+ if err != nil {
+ return deleted, err
+ }
+ deleted += int(n)
+ }
+ cursor = next
+ if cursor == 0 {
+ break
+ }
+ }
+ return deleted, nil
+}
+
+// IDs returns every entity ID currently in the cache, sorted, with the
+// prefix stripped.
+func (c *PrefetchCache) IDs(ctx context.Context) ([]string, error) {
+ var (
+ cursor uint64
+ out []string
+ )
+ for {
+ keys, next, err := c.client.Scan(ctx, cursor, c.prefix+"*", 500).Result()
+ if err != nil {
+ return nil, err
+ }
+ for _, k := range keys {
+ out = append(out, c.stripPrefix(k))
+ }
+ cursor = next
+ if cursor == 0 {
+ break
+ }
+ }
+ sortStrings(out)
+ return out, nil
+}
+
+// Count returns the number of keys under the cache prefix.
+func (c *PrefetchCache) Count(ctx context.Context) (int, error) {
+ var (
+ cursor uint64
+ count int
+ )
+ for {
+ keys, next, err := c.client.Scan(ctx, cursor, c.prefix+"*", 500).Result()
+ if err != nil {
+ return 0, err
+ }
+ count += len(keys)
+ cursor = next
+ if cursor == 0 {
+ break
+ }
+ }
+ return count, nil
+}
+
+// TTLRemaining returns the remaining TTL on the cached key in seconds
+// (Redis TTL semantics: -2 = missing, -1 = no expiry).
+//
+// Use Do("TTL", ...) rather than client.TTL().Result(): the latter
+// returns time.Duration, encoding the -2 / -1 sentinels as raw
+// nanoseconds (so a naive int(d.Seconds()) would truncate them to 0).
+// Sending the raw command and reading the integer reply preserves the
+// value Redis actually returned.
+func (c *PrefetchCache) TTLRemaining(ctx context.Context, id string) (int, error) {
+ n, err := c.client.Do(ctx, "TTL", c.cacheKey(id)).Int64()
+ if err != nil {
+ return 0, err
+ }
+ return int(n), nil
+}
+
+// Stats returns the in-process counters and derived rates. JSON keys
+// are snake_case to match the other client ports.
+func (c *PrefetchCache) Stats() map[string]any {
+ c.mu.Lock()
+ defer c.mu.Unlock()
+ total := c.hits + c.misses
+ hitRate := 0.0
+ if total > 0 {
+ hitRate = roundTo(100.0*float64(c.hits)/float64(total), 1)
+ }
+ avgLag := 0.0
+ if c.syncLagSamples > 0 {
+ avgLag = roundTo(c.syncLagMsTotal/float64(c.syncLagSamples), 2)
+ }
+ return map[string]any{
+ "hits": c.hits,
+ "misses": c.misses,
+ "hit_rate_pct": hitRate,
+ "prefetched": c.prefetched,
+ "sync_events_applied": c.syncEventsApplied,
+ "sync_lag_ms_avg": avgLag,
+ }
+}
+
+// ResetStats zeroes every counter.
+func (c *PrefetchCache) ResetStats() {
+ c.mu.Lock()
+ c.hits = 0
+ c.misses = 0
+ c.prefetched = 0
+ c.syncEventsApplied = 0
+ c.syncLagMsTotal = 0
+ c.syncLagSamples = 0
+ c.mu.Unlock()
+}
+
+// hashFields flattens a map into the [key1, val1, key2, val2, ...] slice
+// go-redis expects for HSet.
+func hashFields(record map[string]string) []any {
+ fields := make([]any, 0, len(record)*2)
+ for k, v := range record {
+ fields = append(fields, k, v)
+ }
+ return fields
+}
+
+// roundTo rounds x to decimals digits after the decimal point.
+func roundTo(x float64, decimals int) float64 {
+ mul := 1.0
+ for i := 0; i < decimals; i++ {
+ mul *= 10
+ }
+ if x >= 0 {
+ return float64(int64(x*mul+0.5)) / mul
+ }
+ return float64(int64(x*mul-0.5)) / mul
+}
+
+// sortStrings sorts a slice of IDs in place. Pulled out so callers can
+// rely on a sorted result regardless of SCAN return order.
+func sortStrings(s []string) {
+ sort.Strings(s)
+}
diff --git a/content/develop/use-cases/prefetch-cache/go/demo_server.go b/content/develop/use-cases/prefetch-cache/go/demo_server.go
new file mode 100644
index 0000000000..528137829f
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/go/demo_server.go
@@ -0,0 +1,790 @@
+// Redis prefetch-cache demo server.
+//
+// Create a main.go file in a subdirectory (Go's package main can't live
+// in the same directory as package prefetchcache):
+//
+// mkdir -p cmd/demo
+// cat > cmd/demo/main.go <<'EOF'
+// package main
+//
+// import "prefetchcache"
+//
+// func main() { prefetchcache.RunDemoServer() }
+// EOF
+//
+// Then build and run:
+//
+// go mod tidy
+// go run ./cmd/demo --port 8784
+//
+// Visit http://localhost:8784 to watch a prefetch cache in action: the
+// demo bulk-loads every primary record into Redis on startup, runs a
+// background sync worker that applies primary mutations within
+// milliseconds, and lets you add, update, delete, and re-prefetch
+// records to see how the cache stays current without ever falling back
+// to the primary on the read path.
+package prefetchcache
+
+import (
+ "context"
+ "encoding/json"
+ "flag"
+ "fmt"
+ "log"
+ "net/http"
+ "os"
+ "os/signal"
+ "strconv"
+ "strings"
+ "sync"
+ "syscall"
+ "time"
+
+ "github.com/redis/go-redis/v9"
+)
+
+type demoServer struct {
+ cache *PrefetchCache
+ primary *MockPrimaryStore
+ sync *SyncWorker
+}
+
+// RunDemoServer parses CLI flags and starts the prefetch-cache demo
+// HTTP server. It is the entry point your cmd/demo/main.go shim calls.
+func RunDemoServer() {
+ host := flag.String("host", "127.0.0.1", "HTTP bind host")
+ port := flag.Int("port", 8784, "HTTP bind port")
+ redisHost := flag.String("redis-host", "localhost", "Redis host")
+ redisPort := flag.Int("redis-port", 6379, "Redis port")
+ cachePrefix := flag.String("cache-prefix", "cache:category:", "Cache key prefix")
+ ttlSeconds := flag.Int("ttl-seconds", 3600, "Safety-net TTL in seconds (refreshed on every sync event)")
+ primaryLatencyMs := flag.Int("primary-latency-ms", 80,
+ "Simulated primary read latency (only affects bulk loads and reconciliations)")
+ flag.Parse()
+
+ client := redis.NewClient(&redis.Options{
+ Addr: fmt.Sprintf("%s:%d", *redisHost, *redisPort),
+ })
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+ if err := client.Ping(ctx).Err(); err != nil {
+ log.Fatalf("could not reach Redis at %s:%d: %v", *redisHost, *redisPort, err)
+ }
+
+ cache := NewPrefetchCache(client, *cachePrefix, *ttlSeconds)
+ primary := NewMockPrimaryStore(*primaryLatencyMs)
+ syncWorker := NewSyncWorker(primary, cache)
+
+ started := time.Now()
+ if _, err := cache.Clear(ctx); err != nil {
+ log.Fatalf("clear cache: %v", err)
+ }
+ loaded, err := cache.BulkLoad(ctx, primary.ListRecords())
+ if err != nil {
+ log.Fatalf("bulk load: %v", err)
+ }
+ elapsedMs := float64(time.Since(started).Microseconds()) / 1000.0
+ syncWorker.Start()
+
+ srv := &demoServer{cache: cache, primary: primary, sync: syncWorker}
+
+ mux := http.NewServeMux()
+ mux.HandleFunc("/", srv.handleRoot)
+ mux.HandleFunc("/categories", srv.handleCategories)
+ mux.HandleFunc("/read", srv.handleRead)
+ mux.HandleFunc("/stats", srv.handleStats)
+ mux.HandleFunc("/update", srv.handleUpdate)
+ mux.HandleFunc("/add", srv.handleAdd)
+ mux.HandleFunc("/delete", srv.handleDelete)
+ mux.HandleFunc("/invalidate", srv.handleInvalidate)
+ mux.HandleFunc("/clear", srv.handleClear)
+ mux.HandleFunc("/reprefetch", srv.handleReprefetch)
+ mux.HandleFunc("/reset", srv.handleReset)
+
+ addr := fmt.Sprintf("%s:%d", *host, *port)
+ httpSrv := &http.Server{Addr: addr, Handler: mux}
+
+ go func() {
+ log.Printf("Redis prefetch-cache demo server listening on http://%s", addr)
+ log.Printf("Using Redis at %s:%d with cache prefix '%s' and TTL %ds",
+ *redisHost, *redisPort, *cachePrefix, *ttlSeconds)
+ log.Printf("Prefetched %d records in %.1f ms; sync worker running", loaded, elapsedMs)
+ if err := httpSrv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
+ log.Fatalf("http server: %v", err)
+ }
+ }()
+
+ sigCh := make(chan os.Signal, 1)
+ signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
+ <-sigCh
+ log.Print("shutting down")
+ syncWorker.Stop(2 * time.Second)
+ shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 2*time.Second)
+ defer shutdownCancel()
+ _ = httpSrv.Shutdown(shutdownCtx)
+}
+
+// --- HTTP handlers ---
+
+func (s *demoServer) handleRoot(w http.ResponseWriter, r *http.Request) {
+ if r.URL.Path != "/" && r.URL.Path != "/index.html" {
+ http.NotFound(w, r)
+ return
+ }
+ w.Header().Set("Content-Type", "text/html; charset=utf-8")
+ _, _ = w.Write([]byte(s.htmlPage()))
+}
+
+func (s *demoServer) handleCategories(w http.ResponseWriter, r *http.Request) {
+ if r.Method != http.MethodGet {
+ http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
+ return
+ }
+ ids, err := s.cache.IDs(r.Context())
+ if err != nil {
+ s.writeJSON(w, http.StatusInternalServerError, map[string]any{"error": err.Error()})
+ return
+ }
+ if ids == nil {
+ ids = []string{}
+ }
+ primaryIDs := s.primary.ListIDs()
+ if primaryIDs == nil {
+ primaryIDs = []string{}
+ }
+ s.writeJSON(w, http.StatusOK, map[string]any{
+ "cache_ids": ids,
+ "primary_ids": primaryIDs,
+ })
+}
+
+func (s *demoServer) handleRead(w http.ResponseWriter, r *http.Request) {
+ if r.Method != http.MethodGet {
+ http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
+ return
+ }
+ id := r.URL.Query().Get("id")
+ if id == "" {
+ s.writeJSON(w, http.StatusBadRequest, map[string]any{"error": "Missing 'id'."})
+ return
+ }
+ result, err := s.cache.Get(r.Context(), id)
+ if err != nil {
+ s.writeJSON(w, http.StatusInternalServerError, map[string]any{"error": err.Error()})
+ return
+ }
+ ttl, err := s.cache.TTLRemaining(r.Context(), id)
+ if err != nil {
+ s.writeJSON(w, http.StatusInternalServerError, map[string]any{"error": err.Error()})
+ return
+ }
+ var record any
+ if result.Record == nil {
+ record = nil
+ } else {
+ record = result.Record
+ }
+ s.writeJSON(w, http.StatusOK, map[string]any{
+ "id": id,
+ "record": record,
+ "hit": result.Hit,
+ "redis_latency_ms": roundTo(result.RedisLatencyMs, 2),
+ "ttl_remaining": ttl,
+ "stats": s.buildStats(),
+ })
+}
+
+func (s *demoServer) handleStats(w http.ResponseWriter, r *http.Request) {
+ if r.Method != http.MethodGet {
+ http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
+ return
+ }
+ s.writeJSON(w, http.StatusOK, s.buildStats())
+}
+
+func (s *demoServer) handleUpdate(w http.ResponseWriter, r *http.Request) {
+ if r.Method != http.MethodPost {
+ http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
+ return
+ }
+ if err := r.ParseForm(); err != nil {
+ s.writeJSON(w, http.StatusBadRequest, map[string]any{"error": err.Error()})
+ return
+ }
+ id := r.FormValue("id")
+ field := r.FormValue("field")
+ value := r.FormValue("value")
+ if id == "" || field == "" {
+ s.writeJSON(w, http.StatusBadRequest, map[string]any{"error": "Missing 'id' or 'field'."})
+ return
+ }
+ if !s.primary.UpdateField(id, field, value) {
+ s.writeJSON(w, http.StatusNotFound, map[string]any{"error": fmt.Sprintf("Unknown category '%s'.", id)})
+ return
+ }
+ s.writeJSON(w, http.StatusOK, map[string]any{
+ "id": id,
+ "field": field,
+ "value": value,
+ "stats": s.buildStats(),
+ })
+}
+
+func (s *demoServer) handleAdd(w http.ResponseWriter, r *http.Request) {
+ if r.Method != http.MethodPost {
+ http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
+ return
+ }
+ if err := r.ParseForm(); err != nil {
+ s.writeJSON(w, http.StatusBadRequest, map[string]any{"error": err.Error()})
+ return
+ }
+ id := strings.TrimSpace(r.FormValue("id"))
+ name := strings.TrimSpace(r.FormValue("name"))
+ if id == "" || name == "" {
+ s.writeJSON(w, http.StatusBadRequest, map[string]any{"error": "Missing 'id' or 'name'."})
+ return
+ }
+ displayOrder := r.FormValue("display_order")
+ if displayOrder == "" {
+ displayOrder = "99"
+ }
+ featured := r.FormValue("featured")
+ if featured == "" {
+ featured = "false"
+ }
+ parentID := r.FormValue("parent_id")
+ record := map[string]string{
+ "id": id,
+ "name": name,
+ "display_order": displayOrder,
+ "featured": featured,
+ "parent_id": parentID,
+ }
+ if !s.primary.AddRecord(record) {
+ s.writeJSON(w, http.StatusConflict, map[string]any{"error": fmt.Sprintf("Category '%s' already exists.", id)})
+ return
+ }
+ s.writeJSON(w, http.StatusOK, map[string]any{
+ "id": id,
+ "record": record,
+ "stats": s.buildStats(),
+ })
+}
+
+func (s *demoServer) handleDelete(w http.ResponseWriter, r *http.Request) {
+ if r.Method != http.MethodPost {
+ http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
+ return
+ }
+ if err := r.ParseForm(); err != nil {
+ s.writeJSON(w, http.StatusBadRequest, map[string]any{"error": err.Error()})
+ return
+ }
+ id := r.FormValue("id")
+ if id == "" {
+ s.writeJSON(w, http.StatusBadRequest, map[string]any{"error": "Missing 'id'."})
+ return
+ }
+ if !s.primary.DeleteRecord(id) {
+ s.writeJSON(w, http.StatusNotFound, map[string]any{"error": fmt.Sprintf("Unknown category '%s'.", id)})
+ return
+ }
+ s.writeJSON(w, http.StatusOK, map[string]any{
+ "id": id,
+ "stats": s.buildStats(),
+ })
+}
+
+func (s *demoServer) handleInvalidate(w http.ResponseWriter, r *http.Request) {
+ if r.Method != http.MethodPost {
+ http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
+ return
+ }
+ if err := r.ParseForm(); err != nil {
+ s.writeJSON(w, http.StatusBadRequest, map[string]any{"error": err.Error()})
+ return
+ }
+ id := r.FormValue("id")
+ if id == "" {
+ s.writeJSON(w, http.StatusBadRequest, map[string]any{"error": "Missing 'id'."})
+ return
+ }
+ deleted, err := s.cache.Invalidate(r.Context(), id)
+ if err != nil {
+ s.writeJSON(w, http.StatusInternalServerError, map[string]any{"error": err.Error()})
+ return
+ }
+ s.writeJSON(w, http.StatusOK, map[string]any{
+ "id": id,
+ "deleted": deleted,
+ "stats": s.buildStats(),
+ })
+}
+
+// pauseMu serialises /clear and /reprefetch so two concurrent admin
+// callers cannot interleave their Pause/Resume calls and leave the
+// sync worker paused or resumed at the wrong moment.
+var pauseMu sync.Mutex
+
+func (s *demoServer) handleClear(w http.ResponseWriter, r *http.Request) {
+ if r.Method != http.MethodPost {
+ http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
+ return
+ }
+ pauseMu.Lock()
+ defer pauseMu.Unlock()
+ // Pause the sync worker so it cannot recreate keys between SCAN
+ // and DEL. Queued events accumulate and apply after resume.
+ // `defer Resume()` guarantees the worker is unparked even if
+ // Clear panics or returns an error mid-way.
+ s.sync.Pause(2 * time.Second)
+ defer s.sync.Resume()
+ deleted, err := s.cache.Clear(r.Context())
+ if err != nil {
+ s.writeJSON(w, http.StatusInternalServerError, map[string]any{"error": err.Error()})
+ return
+ }
+ s.writeJSON(w, http.StatusOK, map[string]any{
+ "deleted": deleted,
+ "stats": s.buildStats(),
+ })
+}
+
+func (s *demoServer) handleReprefetch(w http.ResponseWriter, r *http.Request) {
+ if r.Method != http.MethodPost {
+ http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
+ return
+ }
+ pauseMu.Lock()
+ defer pauseMu.Unlock()
+ // Pause the sync worker so it cannot interleave with the clear
+ // + snapshot + bulk_load sequence. Without this, a change applied
+ // between ListRecords() and BulkLoad() would be overwritten by
+ // the stale snapshot. `defer Resume()` guarantees the worker is
+ // unparked even if Clear or BulkLoad panics mid-way.
+ s.sync.Pause(2 * time.Second)
+ defer s.sync.Resume()
+ started := time.Now()
+ var loaded int
+ var clearErr, loadErr error
+ if _, clearErr = s.cache.Clear(r.Context()); clearErr == nil {
+ loaded, loadErr = s.cache.BulkLoad(r.Context(), s.primary.ListRecords())
+ }
+ elapsedMs := float64(time.Since(started).Microseconds()) / 1000.0
+ if clearErr != nil {
+ s.writeJSON(w, http.StatusInternalServerError, map[string]any{"error": clearErr.Error()})
+ return
+ }
+ if loadErr != nil {
+ s.writeJSON(w, http.StatusInternalServerError, map[string]any{"error": loadErr.Error()})
+ return
+ }
+ s.writeJSON(w, http.StatusOK, map[string]any{
+ "loaded": loaded,
+ "elapsed_ms": roundTo(elapsedMs, 2),
+ "stats": s.buildStats(),
+ })
+}
+
+func (s *demoServer) handleReset(w http.ResponseWriter, r *http.Request) {
+ if r.Method != http.MethodPost {
+ http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
+ return
+ }
+ s.cache.ResetStats()
+ s.primary.ResetReads()
+ s.writeJSON(w, http.StatusOK, s.buildStats())
+}
+
+// --- helpers ---
+
+func (s *demoServer) buildStats() map[string]any {
+ stats := s.cache.Stats()
+ stats["primary_reads_total"] = s.primary.Reads()
+ stats["primary_read_latency_ms"] = s.primary.ReadLatencyMs()
+ return stats
+}
+
+func (s *demoServer) writeJSON(w http.ResponseWriter, status int, payload any) {
+ w.Header().Set("Content-Type", "application/json")
+ w.WriteHeader(status)
+ _ = json.NewEncoder(w).Encode(payload)
+}
+
+func (s *demoServer) htmlPage() string {
+ return strings.ReplaceAll(htmlTemplate, "__CACHE_TTL__", strconv.Itoa(s.cache.TTLSeconds()))
+}
+
+// htmlTemplate is the inline demo UI, ported verbatim from the Python
+// reference. The only substitutions are the pill text (top of )
+// and __CACHE_TTL__.
+const htmlTemplate = `
+
+
+
+
+ Redis Prefetch Cache Demo
+
+
+
+
+
go-redis + Go net/http
+
Redis Prefetch Cache Demo
+
+ Every record from the primary store has been pre-loaded into Redis.
+ Reads run HGETALL against Redis only — there is no
+ fall-back to the primary on the read path. When you add, update, or
+ delete a record, the primary emits a change event that a background
+ sync worker applies to Redis within a few milliseconds. A long
+ safety-net TTL (__CACHE_TTL__ s) is refreshed on every add or update
+ event (delete events remove the key) and bounds memory if sync ever stops.
+
+
+
+
+
+ Cache state
+
+ Loading...
+
+
+
+
+
+ Read a category
+
+ Reads come from Redis only. Every read should be a hit because
+ the cache was pre-loaded and the sync worker keeps it current.
+
+
+
+
+
+
+
+ Update a field
+
+ Updates write to the primary. The sync worker picks up the
+ change event and rewrites the cache hash within milliseconds.
+
+
+
+
+
+
+
+
+
+
+
+ Add a category
+
+ Inserts to the primary propagate to the cache through the same
+ sync path.
+
+
+
+
+
+
+
+
+
+
+
+ Delete a category
+
+ Deletes remove the record from the primary, and the sync worker
+ removes the cache entry.
+
+
+
+
+
+
+
+ Break the cache
+
+ Simulate a failure of the sync pipeline. Reads against the
+ affected key(s) return a miss until you re-prefetch.
+
+
+
+
+
+
+
+
+
+
+
+ Cache stats
+
+ Loading...
+
+
+
+
+
+ Last result
+
+ Read a category to see the cached record and timing.
+
+
+
+
+
+
+
+
+
+`
diff --git a/content/develop/use-cases/prefetch-cache/go/go.mod b/content/develop/use-cases/prefetch-cache/go/go.mod
new file mode 100644
index 0000000000..f2620d662b
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/go/go.mod
@@ -0,0 +1,11 @@
+module prefetchcache
+
+go 1.23
+
+require github.com/redis/go-redis/v9 v9.18.0
+
+require (
+ github.com/cespare/xxhash/v2 v2.3.0 // indirect
+ github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
+ go.uber.org/atomic v1.11.0 // indirect
+)
diff --git a/content/develop/use-cases/prefetch-cache/go/go.sum b/content/develop/use-cases/prefetch-cache/go/go.sum
new file mode 100644
index 0000000000..e25b1f4d0a
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/go/go.sum
@@ -0,0 +1,22 @@
+github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
+github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
+github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
+github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
+github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
+github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
+github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
+github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
+github.com/klauspost/cpuid/v2 v2.0.9 h1:lgaqFMSdTdQYdZ04uHyN2d/eKdOMyi2YLSvlQIBFYa4=
+github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs=
+github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0=
+github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
+github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
+github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
+github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
+go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
+go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
diff --git a/content/develop/use-cases/prefetch-cache/go/primary.go b/content/develop/use-cases/prefetch-cache/go/primary.go
new file mode 100644
index 0000000000..4e3ffc0906
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/go/primary.go
@@ -0,0 +1,252 @@
+// Mock primary data store for the prefetch-cache demo.
+//
+// This stands in for a source-of-truth database (Postgres, MySQL, Mongo,
+// etc.) that holds reference data the application serves to users.
+//
+// Every mutation appends a change event to an in-process Go channel,
+// which the sync worker drains and applies to Redis. In a real system
+// the channel is replaced by a CDC pipeline -- Redis Data Integration,
+// Debezium plus a lightweight consumer, or an equivalent tool that
+// tails the source's binlog/WAL and pushes changes into Redis.
+//
+// The store also exposes ReadLatencyMs so the demo can illustrate how
+// much slower a direct primary read would be than a Redis hit.
+package prefetchcache
+
+import (
+ "sort"
+ "sync"
+ "time"
+)
+
+// Change op constants emitted on the change feed.
+const (
+ ChangeOpUpsert = "upsert"
+ ChangeOpDelete = "delete"
+)
+
+// Change is a primary mutation event. Fields is nil for delete ops.
+type Change struct {
+ Op string
+ ID string
+ Fields map[string]string
+ TimestampMs float64
+}
+
+// MockPrimaryStore is an in-memory stand-in for a primary database of
+// reference data.
+type MockPrimaryStore struct {
+ readLatencyMs int
+
+ mu sync.Mutex
+ reads int
+ changes chan Change
+ records map[string]map[string]string
+}
+
+// NewMockPrimaryStore returns a MockPrimaryStore seeded with the same
+// five sample records as the Python reference. The change channel is
+// buffered (1024 entries) so bursts of concurrent mutations do not
+// block the producer.
+func NewMockPrimaryStore(readLatencyMs int) *MockPrimaryStore {
+ return &MockPrimaryStore{
+ readLatencyMs: readLatencyMs,
+ changes: make(chan Change, 1024),
+ records: map[string]map[string]string{
+ "cat-001": {
+ "id": "cat-001",
+ "name": "Beverages",
+ "display_order": "1",
+ "featured": "true",
+ "parent_id": "",
+ },
+ "cat-002": {
+ "id": "cat-002",
+ "name": "Bakery",
+ "display_order": "2",
+ "featured": "true",
+ "parent_id": "",
+ },
+ "cat-003": {
+ "id": "cat-003",
+ "name": "Pantry Staples",
+ "display_order": "3",
+ "featured": "false",
+ "parent_id": "",
+ },
+ "cat-004": {
+ "id": "cat-004",
+ "name": "Frozen",
+ "display_order": "4",
+ "featured": "false",
+ "parent_id": "",
+ },
+ "cat-005": {
+ "id": "cat-005",
+ "name": "Specialty Cheeses",
+ "display_order": "5",
+ "featured": "false",
+ "parent_id": "cat-002",
+ },
+ },
+ }
+}
+
+// ReadLatencyMs returns the configured simulated read latency.
+func (p *MockPrimaryStore) ReadLatencyMs() int {
+ return p.readLatencyMs
+}
+
+// ListIDs returns the primary record IDs in sorted order. No sleep, no
+// counter increment -- this stands in for a fast metadata query (for example,
+// SELECT id FROM categories) rather than a full record read.
+func (p *MockPrimaryStore) ListIDs() []string {
+ p.mu.Lock()
+ defer p.mu.Unlock()
+ ids := make([]string, 0, len(p.records))
+ for id := range p.records {
+ ids = append(ids, id)
+ }
+ sort.Strings(ids)
+ return ids
+}
+
+// ListRecords returns every record. Used by the cache's bulk-load path
+// on startup. Sleeps for ReadLatencyMs and increments the read counter.
+func (p *MockPrimaryStore) ListRecords() []map[string]string {
+ time.Sleep(time.Duration(p.readLatencyMs) * time.Millisecond)
+ p.mu.Lock()
+ defer p.mu.Unlock()
+ p.reads++
+ out := make([]map[string]string, 0, len(p.records))
+ // Iterate sorted IDs so the snapshot order is deterministic.
+ ids := make([]string, 0, len(p.records))
+ for id := range p.records {
+ ids = append(ids, id)
+ }
+ sort.Strings(ids)
+ for _, id := range ids {
+ out = append(out, copyRecord(p.records[id]))
+ }
+ return out
+}
+
+// Read returns a single record by id, or nil if absent. Not on the
+// demo's normal read path.
+func (p *MockPrimaryStore) Read(id string) map[string]string {
+ time.Sleep(time.Duration(p.readLatencyMs) * time.Millisecond)
+ p.mu.Lock()
+ defer p.mu.Unlock()
+ p.reads++
+ rec, ok := p.records[id]
+ if !ok {
+ return nil
+ }
+ return copyRecord(rec)
+}
+
+// AddRecord inserts a record if id is absent and emits an upsert event.
+// Returns false if the id already exists or is empty.
+func (p *MockPrimaryStore) AddRecord(record map[string]string) bool {
+ id := record["id"]
+ if id == "" {
+ return false
+ }
+ p.mu.Lock()
+ defer p.mu.Unlock()
+ if _, exists := p.records[id]; exists {
+ return false
+ }
+ p.records[id] = copyRecord(record)
+ // Emit while the lock is held so the channel order matches the
+ // mutation order. Two concurrent callers cannot interleave
+ // mutation A -> mutation B -> emit B -> emit A.
+ p.emitChangeLocked(ChangeOpUpsert, id, copyRecord(p.records[id]))
+ return true
+}
+
+// UpdateField updates a single field in place and emits an upsert event.
+// Returns false if the id is unknown.
+func (p *MockPrimaryStore) UpdateField(id, field, value string) bool {
+ p.mu.Lock()
+ defer p.mu.Unlock()
+ rec, ok := p.records[id]
+ if !ok {
+ return false
+ }
+ rec[field] = value
+ p.emitChangeLocked(ChangeOpUpsert, id, copyRecord(rec))
+ return true
+}
+
+// DeleteRecord removes a record and emits a delete event. Returns false
+// if the id is unknown.
+func (p *MockPrimaryStore) DeleteRecord(id string) bool {
+ p.mu.Lock()
+ defer p.mu.Unlock()
+ if _, ok := p.records[id]; !ok {
+ return false
+ }
+ delete(p.records, id)
+ p.emitChangeLocked(ChangeOpDelete, id, nil)
+ return true
+}
+
+// NextChange blocks up to timeout for the next change event. Returns
+// (zero Change, false) if the timeout elapsed with nothing on the
+// channel. The boolean disambiguates a zero-value Change from a
+// genuine timeout.
+func (p *MockPrimaryStore) NextChange(timeout time.Duration) (Change, bool) {
+ if timeout <= 0 {
+ select {
+ case change := <-p.changes:
+ return change, true
+ default:
+ return Change{}, false
+ }
+ }
+ timer := time.NewTimer(timeout)
+ defer timer.Stop()
+ select {
+ case change := <-p.changes:
+ return change, true
+ case <-timer.C:
+ return Change{}, false
+ }
+}
+
+// Reads returns the cumulative number of full-record reads since the
+// counter was last reset.
+func (p *MockPrimaryStore) Reads() int {
+ p.mu.Lock()
+ defer p.mu.Unlock()
+ return p.reads
+}
+
+// ResetReads zeroes the primary read counter.
+func (p *MockPrimaryStore) ResetReads() {
+ p.mu.Lock()
+ p.reads = 0
+ p.mu.Unlock()
+}
+
+// emitChangeLocked appends a change event to the feed. The caller must
+// hold p.mu. Channel sends are themselves thread-safe and never try to
+// acquire p.mu, so sending while holding the records lock cannot
+// deadlock while the 1024-slot buffer has room; holding the lock here
+// is what guarantees that the channel order matches the order in which
+// the records map was mutated. (If the buffer ever filled with no
+// consumer draining it, the send would block with the lock held.)
+func (p *MockPrimaryStore) emitChangeLocked(op, id string, fields map[string]string) {
+ p.changes <- Change{
+ Op: op,
+ ID: id,
+ Fields: fields,
+ TimestampMs: float64(time.Now().UnixNano()) / 1e6,
+ }
+}
+
+func copyRecord(in map[string]string) map[string]string {
+ out := make(map[string]string, len(in))
+ for k, v := range in {
+ out[k] = v
+ }
+ return out
+}
diff --git a/content/develop/use-cases/prefetch-cache/go/sync_worker.go b/content/develop/use-cases/prefetch-cache/go/sync_worker.go
new file mode 100644
index 0000000000..49ae57fe1f
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/go/sync_worker.go
@@ -0,0 +1,215 @@
+// Background sync worker for the prefetch-cache demo.
+//
+// A daemon goroutine drains the primary's change channel and applies
+// each event to Redis through PrefetchCache.ApplyChange. In a real
+// system, the channel is replaced by a CDC pipeline (Redis Data
+// Integration, Debezium, or an equivalent) that tails the primary's
+// binlog/WAL and writes the same shape of events.
+//
+// The worker exposes Pause() and Resume() so maintenance paths
+// (/reprefetch, Clear()) can stop event application without tearing
+// the goroutine down. Pause() blocks until the worker is parked, so
+// the caller knows no apply is in flight by the time it returns.
+package prefetchcache
+
+import (
+ "context"
+ "log"
+ "sync"
+ "time"
+)
+
+// SyncWorker drains primary change events into Redis on a goroutine.
+type SyncWorker struct {
+ primary *MockPrimaryStore
+ cache *PrefetchCache
+ pollTimeout time.Duration
+
+ mu sync.Mutex
+ running bool
+ cancel context.CancelFunc
+ done chan struct{}
+ paused bool
+ pausedIdle chan struct{} // closed (replaced with a fresh chan) every time the loop parks
+ resumeCh chan struct{} // closed by Resume to wake the parked loop
+}
+
+// NewSyncWorker creates a SyncWorker. The default poll timeout (50 ms)
+// matches the Python reference.
+func NewSyncWorker(primary *MockPrimaryStore, cache *PrefetchCache) *SyncWorker {
+ return &SyncWorker{
+ primary: primary,
+ cache: cache,
+ pollTimeout: 50 * time.Millisecond,
+ pausedIdle: make(chan struct{}),
+ resumeCh: make(chan struct{}),
+ }
+}
+
+// Start spawns the worker goroutine if it is not already running.
+func (w *SyncWorker) Start() {
+ w.mu.Lock()
+ defer w.mu.Unlock()
+ if w.running {
+ return
+ }
+ ctx, cancel := context.WithCancel(context.Background())
+ done := make(chan struct{})
+ w.cancel = cancel
+ w.done = done
+ w.running = true
+ go w.run(ctx, done)
+}
+
+// Stop signals the worker to exit and waits up to joinTimeout for the
+// goroutine to finish.
+//
+// If the join times out the worker is wedged inside ApplyChange; we
+// leave w.running true so a subsequent Start() does not spawn a second
+// worker on top of the orphan.
+func (w *SyncWorker) Stop(joinTimeout time.Duration) {
+ w.mu.Lock()
+ if !w.running {
+ w.mu.Unlock()
+ return
+ }
+ cancel := w.cancel
+ done := w.done
+ // Close resumeCh inside the lock so a concurrent Resume cannot
+ // pass the "already closed?" check and then race us to close()
+ // the same channel twice (which would panic).
+ closeOnce(w.resumeCh)
+ w.mu.Unlock()
+
+ cancel()
+
+ select {
+ case <-done:
+ w.mu.Lock()
+ w.running = false
+ w.cancel = nil
+ w.done = nil
+ w.mu.Unlock()
+ case <-time.After(joinTimeout):
+ // Worker is wedged: leave running=true so Start() is a no-op
+ // rather than producing a second worker.
+ }
+}
+
+// Pause sets the pause flag and blocks until the worker confirms it is
+// parked, up to timeout. Returns true if confirmed paused.
+//
+// While paused, change events accumulate on the primary's channel and
+// apply in order after Resume(). Calling Pause while already paused is
+// idempotent and returns immediately.
+func (w *SyncWorker) Pause(timeout time.Duration) bool {
+ w.mu.Lock()
+ if !w.running {
+ w.paused = true
+ w.mu.Unlock()
+ return true
+ }
+ if w.paused {
+ idle := w.pausedIdle
+ w.mu.Unlock()
+ // Already paused -- wait for the current idle signal (which
+ // is closed once the worker is parked).
+ select {
+ case <-idle:
+ return true
+ case <-time.After(timeout):
+ return false
+ }
+ }
+ // Replace the resume channel with a fresh one so any prior
+ // Resume() does not immediately unblock this pause.
+ w.resumeCh = make(chan struct{})
+ // Reset the idle channel: a fresh one will be closed by the
+ // worker when it parks.
+ w.pausedIdle = make(chan struct{})
+ idle := w.pausedIdle
+ w.paused = true
+ w.mu.Unlock()
+
+ select {
+ case <-idle:
+ return true
+ case <-time.After(timeout):
+ return false
+ }
+}
+
+// Resume clears the pause flag and wakes the parked worker goroutine.
+func (w *SyncWorker) Resume() {
+ w.mu.Lock()
+ defer w.mu.Unlock()
+ if !w.paused {
+ return
+ }
+ w.paused = false
+ // Close inside the lock so a concurrent Stop cannot pass the
+ // "already closed?" check and then race us to close() the same
+ // channel twice (which would panic).
+ closeOnce(w.resumeCh)
+}
+
+// closeOnce closes ch if it isn't already closed. Callers MUST hold
+// w.mu while invoking it (the non-blocking receive + close pair is not
+// atomic on its own; the mutex provides the missing serialisation).
+func closeOnce(ch chan struct{}) {
+ select {
+ case <-ch:
+ // Already closed.
+ default:
+ close(ch)
+ }
+}
+
+func (w *SyncWorker) run(ctx context.Context, done chan struct{}) {
+ defer close(done)
+ for {
+ // Bail out promptly on cancel.
+ select {
+ case <-ctx.Done():
+ return
+ default:
+ }
+
+ // Park if paused.
+ w.mu.Lock()
+ paused := w.paused
+ idle := w.pausedIdle
+ resumeCh := w.resumeCh
+ w.mu.Unlock()
+ if paused {
+ // Signal "I am parked, no apply in flight". Closing the
+ // channel lets every waiter on Pause() observe it.
+ select {
+ case <-idle:
+ // Already closed -- nothing to do.
+ default:
+ close(idle)
+ }
+ // Wait for Resume() to close resumeCh, or for Stop() to
+ // cancel the context.
+ select {
+ case <-resumeCh:
+ case <-ctx.Done():
+ return
+ }
+ continue
+ }
+
+ change, ok := w.primary.NextChange(w.pollTimeout)
+ if !ok {
+ continue
+ }
+ if err := w.cache.ApplyChange(ctx, change); err != nil {
+ // Demo behaviour: log and drop the event. A production
+ // CDC consumer would retry with bounded backoff and
+ // expose a dead-letter / error counter; see the guide's
+ // "Production usage" section.
+ log.Printf("[sync] failed to apply %s %s: %v", change.Op, change.ID, err)
+ }
+ }
+}
diff --git a/content/develop/use-cases/prefetch-cache/java-jedis/DemoServer.java b/content/develop/use-cases/prefetch-cache/java-jedis/DemoServer.java
new file mode 100644
index 0000000000..186f7cf455
--- /dev/null
+++ b/content/develop/use-cases/prefetch-cache/java-jedis/DemoServer.java
@@ -0,0 +1,866 @@
+import com.sun.net.httpserver.HttpExchange;
+import com.sun.net.httpserver.HttpHandler;
+import com.sun.net.httpserver.HttpServer;
+import redis.clients.jedis.JedisPool;
+import redis.clients.jedis.JedisPoolConfig;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.InetSocketAddress;
+import java.net.URLDecoder;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Executors;
+
+/**
+ * Redis prefetch-cache demo server (Jedis + JDK HttpServer).
+ *
+ *
+ * Run this file and visit {@code http://localhost:8785} to watch a
+ * prefetch cache in action: the demo bulk-loads every primary record
+ * into Redis on startup, runs a background sync worker that applies
+ * primary mutations within milliseconds, and lets you add, update,
+ * delete, and re-prefetch records to see how the cache stays current
+ * without ever falling back to the primary on the read path.