
fix(cluster): Remove per-call srand in valkey-cli (8.0 backport)#3671

Open
ranshid wants to merge 1 commit into
valkey-io:8.0from
ranshid:backport/8.0-fix-srand-per-call

Conversation

@ranshid
Member

@ranshid ranshid commented May 12, 2026

Backport of #3586 to the 8.0 branch.

Problem

clusterManagerNodePrimaryRandom() called srand(time(NULL)) on every invocation, then immediately computed rand() % primary_count. When called in a tight loop for uncovered slots, all calls within the same wall-clock second reuse the same seed, so rand() returns the same first value each time and every uncovered slot is assigned to the same primary node.

The same issue existed in clusterManagerOptimizeAntiAffinity(), pipeMode(), and LRUTestMode().

Solution

Remove all per-call srand() invocations and consolidate into a single srand(time(NULL) ^ getpid()) at the start of main(). This allows rand() to advance its state across calls, distributing uncovered slots randomly across available primaries.

(cherry picked from commit 7e2a2f7)


Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
