
Commit 75e8550

Skip test_q_attention_block for torch >= 2.11.0.dev
Summary: Currently failing in CI, probably due to changes in compile; needs a fix. Current failing example: https://github.com/pytorch/ao/actions/runs/23070569120/job/67019856617?pr=4082

cc Xia-Weiwen

Test Plan: CI nightly regression tests

Reviewers:

Subscribers:

Tasks:

Tags:

ghstack-source-id: 82fc986
Pull Request resolved: #4085
1 parent ab4a336 commit 75e8550

1 file changed: test/quantization/pt2e/test_x86inductor_fusion.py

Lines changed: 6 additions & 0 deletions
@@ -3093,13 +3093,19 @@ def matcher_check_fn():

     @skipIfNoDynamoSupport
     @skipIfNoONEDNN
+    @unittest.skipIf(
+        torch_version_at_least("2.11.0.dev"), "Doesn't work with torch 2.11.0.dev+"
+    )
     def test_q_attention_block(self):
         for annotate_matmul in [True, False]:
             self._test_q_attention_block_helper(annotate_matmul=annotate_matmul)

     @skipIfNoDynamoSupport
     @skipIfNoONEDNN
     @skipIfNoFloat8Support
+    @unittest.skipIf(
+        torch_version_at_least("2.11.0.dev"), "Doesn't work with torch 2.11.0.dev+"
+    )
     def test_fp8_q_attention_block(self):
         for annotate_matmul in [True, False]:
             self._test_q_attention_block_helper(
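For context on the mechanism: @unittest.skipIf evaluates its condition once, when the class body is executed, so on affected nightlies the tests are collected but reported as skipped rather than failed. Below is a minimal, self-contained sketch of this kind of version gate; the torch_version_at_least helper shown here is an illustrative stand-in (torchao provides its own helper, whose implementation may differ), assuming a PEP 440 comparison via the packaging library.

    # Illustrative sketch only: torchao ships its own torch_version_at_least
    # helper; this stand-in assumes a PEP 440 comparison via `packaging`.
    import unittest

    import torch
    from packaging.version import parse


    def torch_version_at_least(min_version: str) -> bool:
        """Return True if the installed torch is at or past `min_version`.

        parse() understands dev/nightly suffixes, so a nightly such as
        "2.11.0.dev20250101+cpu" compares as >= "2.11.0.dev".
        """
        return parse(torch.__version__) >= parse(min_version)


    class VersionGateExample(unittest.TestCase):
        # The skip condition is evaluated at class-definition time; on a
        # 2.11 nightly this test is collected but reported as skipped.
        @unittest.skipIf(
            torch_version_at_least("2.11.0.dev"),
            "Doesn't work with torch 2.11.0.dev+",
        )
        def test_something(self):
            self.assertTrue(True)

With this gate in place, the two attention-block tests keep running on stable torch releases and are skipped on 2.11 nightlies until the compile regression is fixed.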
