I invested some time looking through `arbitrary`/`libfuzzer-sys` to determine the best way to limit recursion (in my case). Here is what I looked at:

1. `size_hint` is used to bail out in case of "not enough data".
2. Track the recursion `depth` and switch to non-recursive generation once the limit is surpassed.

Questions I'm trying to answer:
Isn't using `size_hint` faster than limiting inside a custom `arbitrary` implementation? If you implement `size_hint`, you can set the hint to something like `u64::MAX`, which libfuzzer-sys interprets as "don't generate this input": the check for the supplied bytes being fewer than the hint returns `-1`. While this may not be ideal for varying recursion depths, since the hint is currently hardcoded, there are workarounds.
My thought is that calculating the hint is much cheaper than enforcing a limit while generating the data structure, especially if you know your boundaries and just want to go a bit higher.
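As a rough, self-contained illustration of that bail-out idea (the function names and the exact check here are my assumptions, not the actual libfuzzer-sys source): the harness compares the input length against the type's lower size-hint bound before generating anything, and a huge hint makes every input fail that check.

```rust
// Sketch, assuming libfuzzer-sys rejects an input (returns -1 from the
// harness) when the fuzzer-supplied bytes are fewer than the lower bound
// of the type's size hint. `size_hint_lower_bound` stands in for a custom
// `Arbitrary::size_hint` implementation; it is illustrative only.
fn size_hint_lower_bound(suppress: bool) -> usize {
    if suppress {
        // A huge lower bound means no input is ever long enough,
        // so this type is effectively never generated.
        usize::MAX
    } else {
        8 // an ordinary lower bound for a small type
    }
}

/// Mimics the length check a harness could perform before calling
/// `arbitrary` on the input bytes.
fn run_one(data: &[u8], suppress: bool) -> i32 {
    if data.len() < size_hint_lower_bound(suppress) {
        return -1; // not enough data: skip this input
    }
    0 // enough data: generation would proceed here
}

fn main() {
    let input = [0u8; 16];
    assert_eq!(run_one(&input, false), 0);
    assert_eq!(run_one(&input, true), -1);
}
```

The point of the sketch is only that the rejection is a single length comparison, done before any recursive generation starts, which is why it should be cheap.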
Has it been empirically shown that falling back to generating the non-recursive variant improves coverage? (Referring to point 2 in the list above.)
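For reference, the depth-cutoff approach from point 2 can be sketched without the `arbitrary` crate at all; here a plain byte slice plays the role of `Unstructured`, and the limit, type, and helper names are all illustrative assumptions:

```rust
// Sketch of depth-limited generation: once `depth` reaches the limit,
// only the non-recursive variant is produced, so the tree can never
// exceed the cutoff no matter what bytes the fuzzer supplies.
const MAX_DEPTH: usize = 8; // illustrative limit

#[derive(Debug)]
enum Tree {
    Leaf(u8),                   // non-recursive variant
    Node(Box<Tree>, Box<Tree>), // recursive variant
}

fn gen_tree(data: &mut &[u8], depth: usize) -> Tree {
    // Consume one byte of "fuzz input"; if it runs out, produce a leaf.
    let byte = match data.split_first() {
        Some((b, rest)) => {
            *data = rest;
            *b
        }
        None => return Tree::Leaf(0),
    };
    if depth >= MAX_DEPTH || byte & 1 == 0 {
        // Past the limit (or by the byte's choice): non-recursive variant.
        Tree::Leaf(byte)
    } else {
        Tree::Node(
            Box::new(gen_tree(data, depth + 1)),
            Box::new(gen_tree(data, depth + 1)),
        )
    }
}

fn depth_of(t: &Tree) -> usize {
    match t {
        Tree::Leaf(_) => 1,
        Tree::Node(l, r) => 1 + depth_of(l).max(depth_of(r)),
    }
}

fn main() {
    // All-odd bytes would recurse unboundedly without the cutoff.
    let input = vec![1u8; 1 << 12];
    let tree = gen_tree(&mut input.as_slice(), 0);
    assert!(depth_of(&tree) <= MAX_DEPTH + 1);
}
```

Note the trade-off relative to the size-hint approach: this version does pay the per-node depth check during generation, but it still produces a (bounded) value from every input instead of rejecting inputs outright.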