
Cap array allocations in ProtocolUtils readers #1788

Open

AlexMelanFromRingo wants to merge 1 commit into PaperMC:dev/3.0.0 from AlexMelanFromRingo:cap-array-allocations

Conversation

@AlexMelanFromRingo

readByteArray already takes a cap and rejects oversized counts, but readKeyArray, readIntegerArray, readStringArray and readVarIntArray only check that the count fits in the remaining buffer bytes. Each array entry costs 4 to 8 heap bytes (a primitive int or an Object reference) while that check counts wire bytes, so a single varint count close to the frame size lets a peer force a heap allocation 4x to 8x larger than the bytes actually sent on the wire.

For example, readIntegerArray is reachable from AvailableCommandsPacket (clientbound). Setting the array length to ~2M passes the isReadable(length) check and we then allocate new int[~2M], an 8 MiB allocation per packet. The other three readers have the same problem.
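For reference, here is a minimal sketch of the unguarded pattern described above, written as a method inside ProtocolUtils. The helper names (readVarInt, checkFrame) come from this description, the checkFrame signature is assumed, and the per-entry decoding is purely illustrative; the real Velocity source may differ:

```java
// Illustrative sketch only, not the actual Velocity code. The element count is
// validated against the remaining readable *bytes*, so a count close to the
// frame size still passes even though each entry costs 4+ heap bytes.
public static int[] readIntegerArray(ByteBuf buf) {
  int length = readVarInt(buf);
  // Succeeds whenever `length` bytes remain readable in the frame.
  checkFrame(buf.isReadable(length), "Array too long (got %s)", length);
  int[] array = new int[length]; // length ~2M -> roughly 8 MiB of heap per packet
  for (int i = 0; i < length; i++) {
    array[i] = readVarInt(buf); // per-entry read shown here is illustrative
  }
  return array;
}
```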

Adds a cap parameter to each of the four readers, defaulting to DEFAULT_MAX_STRING_SIZE (the same constant readByteArray uses), and rejects oversized counts via the existing checkFrame. Tests covering the cap path are included.
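A hedged sketch of what the capped variant could look like, following the shape described above (read count, cap check via checkFrame, byte check, then allocation). The parameter name, overload, and message strings are placeholders, and the checkFrame signature is assumed:

```java
// Sketch only: the cap is enforced before the allocation, mirroring what
// readByteArray already does per the description above.
public static int[] readIntegerArray(ByteBuf buf, int cap) {
  int length = readVarInt(buf);
  checkFrame(length <= cap, "Array too long (got %s, max %s)", length, cap);
  checkFrame(buf.isReadable(length), "Array too long (got %s)", length);
  int[] array = new int[length];
  for (int i = 0; i < length; i++) {
    array[i] = readVarInt(buf);
  }
  return array;
}

// Existing call sites keep working; DEFAULT_MAX_STRING_SIZE is the same
// constant readByteArray defaults to, per the description above.
public static int[] readIntegerArray(ByteBuf buf) {
  return readIntegerArray(buf, DEFAULT_MAX_STRING_SIZE);
}
```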

./gradlew :velocity-proxy:test :velocity-proxy:checkstyleMain runs clean.

@kennytv
Member

kennytv commented May 8, 2026

Restricting these isn't really a concern for strictly server-defined data. Why would you crash your own proxy or your players? Applying this limit unconditionally will also potentially hit real data at some point.

@AlexMelanFromRingo
Author

Fair on the single-backend case. I was thinking more about proxies fronting multiple backends, where one bad backend shouldn't be able to take out players connected to the others. If you don't see value there, happy to close this. Otherwise I can rework it as an opt-in system property (like velocity.max-known-packs) so default behavior is unchanged.
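For what it's worth, the opt-in shape could be as small as the sketch below. The property name is purely hypothetical (velocity.max-known-packs is only cited as precedent), and with the property unset the effective cap is Integer.MAX_VALUE, so default behavior stays the same:

```java
// Hypothetical property name; when unset, the cap is Integer.MAX_VALUE and
// behavior is unchanged.
private static final int ARRAY_ELEMENT_CAP =
    Integer.getInteger("velocity.max-array-elements", Integer.MAX_VALUE);

public static int[] readIntegerArray(ByteBuf buf) {
  return readIntegerArray(buf, ARRAY_ELEMENT_CAP);
}
```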
