
Add UIntxBitPackedTensor, UIntxWeightOnlyConfig, and Int8DynamicActivationUIntxWeightConfig#4081

Closed
jerryzh168 wants to merge 1 commit into gh/jerryzh168/47/base from gh/jerryzh168/47/head

Conversation

@jerryzh168
Contributor

@jerryzh168 jerryzh168 commented Mar 13, 2026

Stack from ghstack (oldest at bottom):

Add v2 tensor subclass UIntxBitPackedTensor(TorchAOBaseTensor) using
gemlite bit-packing and Triton GEMM kernels, replacing the old AQT-based
GemliteUIntXWeightOnlyConfig path.

  • UIntxBitPackedTensor: tensor subclass with from_hp(), dequantize(),
    and aten.linear/t/slice dispatch implementations
  • UIntxWeightOnlyConfig: weight-only quantization (4-bit/8-bit)
  • Int8DynamicActivationUIntxWeightConfig: int8 dynamic activation + uintx weight
  • Tests for both configs covering 4-bit, 8-bit, slice, and non-standard shapes
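The bullets above describe a weight tensor whose sub-byte values are stored bit-packed. As a rough, self-contained illustration of the packing idea only (this is not gemlite's actual Triton layout, and the helper names below are hypothetical, not part of the torchao API), two 4-bit values can share one byte:

```python
# Hypothetical sketch of the bit-packing idea behind a
# UIntxBitPackedTensor-style subclass: two 4-bit values per uint8 byte.
# Not gemlite's real layout; for illustration only.

def pack_uint4(values):
    """Pack a list of 4-bit ints (0..15) into bytes, two per byte."""
    assert all(0 <= v < 16 for v in values)
    if len(values) % 2:                      # pad to an even count
        values = values + [0]
    packed = []
    for lo, hi in zip(values[0::2], values[1::2]):
        packed.append(lo | (hi << 4))        # low nibble stored first
    return bytes(packed)

def unpack_uint4(packed, n):
    """Recover the first n 4-bit values from packed bytes."""
    out = []
    for b in packed:
        out.append(b & 0xF)
        out.append(b >> 4)
    return out[:n]

vals = [3, 15, 0, 7, 9]
buf = pack_uint4(vals)
assert unpack_uint4(buf, len(vals)) == vals  # lossless round-trip
assert len(buf) == 3                         # 5 values fit in 3 bytes
```

In the real kernel path, `from_hp()` would quantize high-precision weights to this packed form and the Triton GEMM would consume the packed buffer directly, dequantizing on the fly.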

@pytorch-bot

pytorch-bot Bot commented Mar 13, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/4081

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 9 Pending

As of commit 8e4bbdd with merge base ab4a336:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

jerryzh168 added a commit that referenced this pull request Mar 13, 2026
ghstack-source-id: 5cdbbe1
Pull Request resolved: #4081
@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Mar 13, 2026
@jerryzh168 jerryzh168 added the module: not user facing Use this tag if you don't want this PR to show up in release notes label Mar 13, 2026
@jerryzh168 jerryzh168 closed this Mar 13, 2026
