
Use an extra bit of entropy in the Float64 uniform 0-1 distribution. #712

Closed
nhz2 wants to merge 1 commit into JuliaGPU:master from nhz2:nz/add-extra-bit-of-entropy-in-u01-Float64

Conversation


@nhz2 nhz2 commented Apr 19, 2026

While looking over #707, I noticed what I think is a slight improvement for the Float64 u01. This PR subtracts prevfloat(1.0) (instead of 1.0) so that u01 can never return zero, while still using 52 bits of entropy.
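To make the prevfloat(1.0) trick concrete, here is a rough Python sketch of the same idea (Python floats are IEEE binary64, so the bit-level arithmetic matches Float64; the function name and bit-packing helper are illustrative, not the PR's actual code): setting the exponent field to that of 1.0 puts the value in [1, 2), and subtracting prevfloat(1.0) rather than 1.0 shifts the output range from [0, 1) to (0, 1 - 2^-53], excluding an exact zero.

```python
import math
import struct

# Largest double strictly below 1.0, i.e. 1 - 2^-53 (Julia's prevfloat(1.0)).
PREV_ONE = math.nextafter(1.0, 0.0)

def u01_bits(mantissa52: int) -> float:
    """Map 52 random bits to a double in (0, 1).

    OR-ing the mantissa bits with the exponent bits of 1.0
    (0x3FF0000000000000) yields a double in [1, 2); subtracting
    prevfloat(1.0) instead of 1.0 avoids ever returning 0.0.
    """
    bits = 0x3FF0000000000000 | (mantissa52 & 0xFFFFFFFFFFFFF)
    x = struct.unpack("<d", struct.pack("<Q", bits))[0]  # x in [1, 2)
    return x - PREV_ONE  # exact subtraction (Sterbenz lemma)
```

For the two extreme inputs: 0 maps to 2^-53 (strictly positive), and all-ones maps to 1 - 2^-53 (strictly below one).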

cc @maleadt

@nhz2 nhz2 marked this pull request as draft April 20, 2026 01:20
@kshyatt kshyatt requested a review from maleadt April 21, 2026 06:17
Author

nhz2 commented Apr 21, 2026

I think Float64(::UInt64) being expensive was a red herring. The actual issue was that Float64(2)^(-64) was not being constant-propagated.

# Computes u * 2^-64 + 2^-65 with a single rounding, mapping a UInt64 into (0, 1].
@inline function u01(::Type{Float64}, u::UInt64)::Float64
    fma(Float64(u), Float64(2)^Int32(-64), Float64(2)^Int32(-65))
end

Performance seems fine. Ref: https://github.com/medyan-dev/PhiloxRNG.jl/tree/v1.1.1#gpu--nvidia-geforce-rtx-3080-nsvalue-n--100000000
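To see why the fma form maps every UInt64 strictly above zero, here is a minimal Python sketch of the same arithmetic (a plain multiply-add rather than a true fused multiply-add, so it rounds twice where fma rounds once; the function name is made up for illustration). u = 0 yields exactly 2^-65, and the maximum input stays within (0, 1]:

```python
def u01_fma(u: int) -> float:
    """Sketch of the mapping u * 2^-64 + 2^-65 for a 64-bit integer u.

    With a true fused multiply-add (as in the Julia fma call) the
    expression is computed exactly and rounded once; this plain
    Python version rounds twice, but the range argument is the same:
    u = 0 gives 2^-65 > 0, so zero is never returned.
    """
    assert 0 <= u < 2**64
    return u * 2.0**-64 + 2.0**-65
```

Note that for inputs near 2^64 the result can round up to exactly 1.0, so the output interval is (0, 1] rather than [0, 1); avoiding zero is the point of the offset.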

@nhz2 nhz2 closed this Apr 21, 2026
@maleadt
Member

maleadt commented Apr 21, 2026

The actual issue was that Float64(2)^(-64) was not being constant-propagated.

This was fixed in JuliaGPU/CUDA.jl#3098, which I encountered as part of porting your RNG.
