fix(rdpsnd-native): allocate Opus PCM buffer as Vec<i16> to avoid alignment panic #1256

Open

Greg Lamberson (glamberson) wants to merge 1 commit into Devolutions:master from lamco-admin:fix/rdpsnd-cpal-opus-alignment-panic
Conversation

@glamberson
Contributor

Closes #1202

Problem

The Opus decode path in DecodeStream::new allocates the destination PCM buffer as Vec<u8>, then casts it to &mut [i16] for the Opus decoder via bytemuck::cast_slice_mut:

    let mut pcm = vec![0u8; nb_samples * chan as usize * size_of::<i16>()];
    if let Err(error) = dec.decode(&pkt, bytemuck::cast_slice_mut(pcm.as_mut_slice()), false) {

Vec<u8> has alignment 1, so the allocator is free to return a pointer at any address, including an odd one. When the pointer lands on an odd address, bytemuck::cast_slice_mut::<u8, i16> panics with TargetAlignmentGreaterAndInputNotAligned.
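The precondition can be illustrated without the decoder or bytemuck itself. In this sketch, `can_cast_to_i16` is a hypothetical helper that mirrors the check bytemuck performs before reinterpreting the slice:

```rust
use std::mem::{align_of, size_of};

// Hypothetical helper mirroring the precondition that
// bytemuck::cast_slice_mut::<u8, i16> enforces: the base pointer must be
// 2-byte aligned and the byte length a multiple of size_of::<i16>().
fn can_cast_to_i16(bytes: &[u8]) -> bool {
    (bytes.as_ptr() as usize) % align_of::<i16>() == 0
        && bytes.len() % size_of::<i16>() == 0
}

fn main() {
    let buf = vec![0u8; 16];
    // Pick the offset whose address is odd: this is exactly the case that
    // makes the real cast panic with TargetAlignmentGreaterAndInputNotAligned.
    let odd = 1 - (buf.as_ptr() as usize % 2);
    assert!(!can_cast_to_i16(&buf[odd..odd + 8]));
    // The adjacent offset is even, so the same cast would succeed there --
    // which is why the bug only fires on some allocations.
    let even = 1 - odd;
    assert!(can_cast_to_i16(&buf[even..even + 8]));
    println!("odd-address slice castable: false; even-address slice castable: true");
}
```

This is also why the panic is intermittent: whether any given `Vec<u8>` allocation happens to land on an even address is up to the allocator.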

The latent bug surfaces in #1202 when the remote server reboots. The server emits a flood of malformed Opus packets during reboot, each triggering a fresh allocation in this loop. With enough allocations in quick succession, one lands on an unaligned address and the dec_thread crashes. The cascade of "channel is empty / sending half is closed" warnings that follows is the consequence: the panicked thread dropped its sender.

Reproduced from the panic.log attached to #1202:

    thread '<unnamed>' panicked at .../bytemuck-1.25.0/src/internal.rs:33:3:
    cast_slice_mut>TargetAlignmentGreaterAndInputNotAligned
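The warning cascade after the panic is generic channel behavior, not specific to this decoder. A minimal sketch with `std::sync::mpsc` (the project may use a different channel crate, but the semantics are the same) shows how a panicking producer thread surfaces as channel-closed errors on the consumer side:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<Vec<u8>>();

    // Stand-in for dec_thread: one good packet, then a simulated panic.
    let dec_thread = thread::spawn(move || {
        tx.send(vec![0u8; 4]).expect("receiver alive");
        panic!("simulated cast_slice_mut alignment panic");
        // `tx` is dropped during unwinding, closing the channel.
    });

    // The packet sent before the panic still arrives.
    assert!(rx.recv().is_ok());
    // Every receive afterwards reports a disconnected channel -- the
    // analogue of the "sending half is closed" warnings in the log.
    assert!(rx.recv().is_err());
    assert!(dec_thread.join().is_err());
    println!("consumer observed channel disconnection after producer panic");
}
```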

Fix

Allocate the PCM buffer as Vec<i16> directly. Vec<i16> has the required 2-byte alignment by construction, and opus2::Decoder::decode takes &mut [i16] natively with no cast needed.

The downstream channel still carries Vec<u8>, so we convert via bytemuck::cast_slice(&pcm_i16).to_vec(). Going from a larger alignment (i16 = 2) to a smaller one (u8 = 1) is always safe and never panics. The extra Vec copy is one small allocation per packet, negligible at audio rates (an Opus packet typically carries 2.5-60 ms of audio).
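The fixed shape can be sketched without the opus2 crate. The helper `pcm_i16_to_bytes` below is a hypothetical stand-in for `bytemuck::cast_slice(&pcm_i16).to_vec()`, producing the same native-endian bytes:

```rust
// Hypothetical stand-in for bytemuck::cast_slice(&pcm).to_vec(): going
// from align 2 (i16) down to align 1 (u8) has no failure case, only a copy.
fn pcm_i16_to_bytes(pcm: &[i16]) -> Vec<u8> {
    pcm.iter().flat_map(|s| s.to_ne_bytes()).collect()
}

fn main() {
    // Fixed allocation pattern: the buffer is born 2-byte aligned.
    let nb_samples = 480; // e.g. 10 ms at 48 kHz
    let channels = 2;
    let mut pcm = vec![0i16; nb_samples * channels];
    // ... the decoder would write samples into `&mut pcm` here ...
    pcm[0] = 0x0102;

    // Convert for the Vec<u8> channel: always succeeds.
    let bytes = pcm_i16_to_bytes(&pcm);
    assert_eq!(bytes.len(), pcm.len() * 2);
    assert_eq!(i16::from_ne_bytes([bytes[0], bytes[1]]), 0x0102);
    println!("converted {} samples to {} bytes", pcm.len(), bytes.len());
}
```

The key point is that the buffer's alignment is now guaranteed by its type at allocation time, rather than depending on what address the allocator happens to return.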

Test

  • cargo xtask check fmt: clean
  • cargo xtask check lints: clean
  • cargo xtask check locks: clean
  • cargo xtask check typos: clean
  • cargo xtask check tests: clean

fix(rdpsnd-native): allocate Opus PCM buffer as Vec<i16> to avoid alignment panic

Closes Devolutions#1202

The Opus decode path in `DecodeStream::new` allocated the destination
PCM buffer as `Vec<u8>`, then cast it to `&mut [i16]` for the decoder
via `bytemuck::cast_slice_mut`:

    let mut pcm = vec![0u8; nb_samples * chan as usize * size_of::<i16>()];
    if let Err(error) = dec.decode(&pkt, bytemuck::cast_slice_mut(pcm.as_mut_slice()), false) {

`Vec<u8>` allocations have alignment 1. The Rust allocator returns a
pointer that may or may not happen to be 2-byte aligned. When the
returned pointer is on an odd address, `bytemuck::cast_slice_mut`
panics with `TargetAlignmentGreaterAndInputNotAligned`.

The bug manifests in Devolutions#1202 when the remote server reboots: the server
generates a flood of malformed Opus packets, each triggering a fresh
allocation. With enough allocations in quick succession, one lands on
an unaligned address and the dec_thread crashes. The cascade of
"channel is empty / sending half is closed" warnings that follows is
the consequence: the panicked thread dropped the sender.

Fix: allocate the buffer as `Vec<i16>` directly. `Vec<i16>` has the
required 2-byte alignment by construction, and the Opus decoder's
`decode` method takes `&mut [i16]` directly with no cast needed.
The downstream channel still wants `Vec<u8>`, so we convert via
`bytemuck::cast_slice(&pcm_i16).to_vec()` — going from a larger
alignment (i16 = 2) to a smaller one (u8 = 1) is always safe and
never panics. The extra allocation is negligible for audio frames.
Development

Successfully merging this pull request may close these issues.

Panic on remote server rebooting