Fix spurious per-result waiter count and consolidate counter management
When the global ConsolidatorQueryWaiterCap was hit by a caller of a
different query, the current query's per-result waiter count was left
spuriously positive, causing unnecessary proto3 row caching. Fix by
decrementing the per-result count alongside the global count for all
non-original callers.
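A minimal sketch of why the spurious count mattered, using hypothetical names (`queryResult`, `hasWaiters`, `finish`), not the actual Vitess types: the leader only keeps proto3 rows around when the per-result count says someone is waiting, so a count that was never decremented caches rows no one will read.

```go
package main

import "fmt"

// queryResult models the leader's view of one consolidated query.
// Field and method names are illustrative assumptions.
type queryResult struct {
	waiters    int  // per-result waiter count
	rowsCached bool // whether proto3 rows were kept for followers
}

func (q *queryResult) hasWaiters() bool { return q.waiters > 0 }

// finish models the leader completing the query: rows are cached for
// followers only when someone is actually waiting. A spuriously
// positive waiter count (the bug being fixed) would cache rows that
// no one will ever consume.
func (q *queryResult) finish() {
	if q.hasWaiters() {
		q.rowsCached = true
	}
}

func main() {
	// With the fix, a caller turned away by the global cap also
	// decrements the per-result count, so a leader with no real
	// waiters caches nothing.
	q := queryResult{waiters: 0}
	q.finish()
	fmt.Println(q.rowsCached) // prints "false"
}
```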
Also refactors counter management: AddWaiterCounter now increments both
the per-result and global counters (removing AddPerResultWaiterCounter),
and the global waiter total is now read via a TotalWaiterCount method
on the Consolidator interface rather than through AddWaiterCounter's
return value.
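The consolidated counter management can be sketched as follows. This is a hedged illustration under assumed types (`consolidator`, `pendingResult`), not the actual Vitess implementation: a single AddWaiterCounter adjusts both counters, and a negative delta covers the cap-hit path where both counts must come back down together.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// consolidator holds the global waiter count across all queries.
type consolidator struct {
	totalWaiters atomic.Int64
}

// TotalWaiterCount is a plain read, decoupled from
// AddWaiterCounter's return value.
func (c *consolidator) TotalWaiterCount() int64 {
	return c.totalWaiters.Load()
}

// pendingResult holds the per-result waiter count for one query.
type pendingResult struct {
	owner   *consolidator
	waiters atomic.Int64
}

// AddWaiterCounter adjusts both counters together, subsuming the old
// separate AddPerResultWaiterCounter. The two adds are not jointly
// atomic, matching the note below about the soft cap.
func (p *pendingResult) AddWaiterCounter(delta int64) {
	p.waiters.Add(delta)
	p.owner.totalWaiters.Add(delta)
}

func main() {
	c := &consolidator{}
	p := &pendingResult{owner: c}
	p.AddWaiterCounter(1) // a follower joins the pending query
	fmt.Println(c.TotalWaiterCount(), p.waiters.Load()) // prints "1 1"
	p.AddWaiterCounter(-1) // global cap hit: both counters drop
	fmt.Println(c.TotalWaiterCount(), p.waiters.Load()) // prints "0 0"
}
```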
Note: the two counter increments in AddWaiterCounter are not jointly
atomic. This is benign: the leader checks HasWaiters() before
Broadcast(), so it always sees the pre-decrement state. The only
effect is momentary imprecision in enforcing
ConsolidatorQueryWaiterCap, which is a soft cap anyway.
AI disclosure: Claude Code assisted with development. Every line of code was either written by or carefully reviewed by me :)
Co-Authored-By: Claude <svc-devxp-claude@slack-corp.com>
Signed-off-by: Brett Wines <bwines@slack-corp.com>