Commit a6e0f38 (1 parent: 4e42fa9)

Update on "Skip test_q_attention_block for torch >= 2.11.0.dev"

Summary: Currently failing in CI, probably due to changes in compile; needs a fix. Current failing example: https://github.com/pytorch/ao/actions/runs/23070569120/job/67019856617?pr=4082

cc Xia-Weiwen

Test Plan: CI nightly regression tests

Reviewers:
Subscribers:
Tasks:
Tags:

[ghstack-poisoned]

1 file changed: test/quantization/pt2e/test_x86inductor_fusion.py (6 additions, 1 deletion)
```diff
@@ -3093,14 +3093,19 @@ def matcher_check_fn():

     @skipIfNoDynamoSupport
     @skipIfNoONEDNN
-    @unittest.skipIf(torch_version_at_least("2.11.0.dev"), "Requires torch 2.11.0.dev+")
+    @unittest.skipIf(
+        torch_version_at_least("2.11.0.dev"), "Doesn't work with torch 2.11.0.dev+"
+    )
     def test_q_attention_block(self):
         for annotate_matmul in [True, False]:
             self._test_q_attention_block_helper(annotate_matmul=annotate_matmul)

     @skipIfNoDynamoSupport
     @skipIfNoONEDNN
     @skipIfNoFloat8Support
+    @unittest.skipIf(
+        torch_version_at_least("2.11.0.dev"), "Doesn't work with torch 2.11.0.dev+"
+    )
     def test_fp8_q_attention_block(self):
         for annotate_matmul in [True, False]:
             self._test_q_attention_block_helper(
```
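The skip condition hinges on torchao's `torch_version_at_least` helper, which gates the tests on the installed torch version. As a rough, hedged sketch of how such a gate can work (the function below is illustrative and is not torchao's actual implementation; the real helper inspects `torch.__version__` internally, while this sketch takes both versions as explicit arguments so it runs without torch installed), one can compare the numeric dotted components of the version strings, ignoring pre-release tags like `.dev`:

```python
import re


def version_at_least(current: str, minimum: str) -> bool:
    """Return True if `current` >= `minimum`, comparing only the numeric
    dotted components. Pre-release suffixes such as `dev20250101` or local
    tags like `+git...` are ignored, so "2.11.0" and "2.11.0.dev" compare
    equal -- adequate for a coarse skip gate, but not full PEP 440 ordering.
    """

    def numeric(v: str) -> tuple:
        parts = []
        for piece in v.split("."):
            m = re.match(r"\d+", piece)
            if not m:
                # Stop at the first non-numeric component, e.g. "dev".
                break
            parts.append(int(m.group()))
        return tuple(parts)

    return numeric(current) >= numeric(minimum)
```

With a gate like this, `version_at_least("2.11.0.dev20250101", "2.11.0.dev")` is true (so the `unittest.skipIf` fires on 2.11 nightlies), while `version_at_least("2.10.1", "2.11.0.dev")` is false (older stable builds keep running the tests).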

0 commit comments