fix(tests): honor parametrized dtype in test_autotuner; correct SQNR var in test_integration error msg (#4340)
Anai-Guo wants to merge 3 commits into `pytorch:main`.
Fixes two unrelated test bugs reported in #4339.
**Bug 1: `dtype` parametrize silently ignored in `test/kernel/test_autotuner.py`**

`TestQuantFlow.test_int_mm`, `test_int_mm_float8`, and `test_int_scaled_mm` are each `@parameterized.expand(...)`'d over both `torch.bfloat16` and `torch.float16`, but each method's body opens with `dtype = torch.bfloat16`, overwriting the parameter. The float16 variants run with bfloat16 inputs, giving false coverage.
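A minimal sketch of the shadowing pattern (the decorator arguments and test body here are illustrative, not the verbatim test code):

```python
import unittest

import torch
from parameterized import parameterized


class TestQuantFlow(unittest.TestCase):
    @parameterized.expand([("cuda", torch.bfloat16), ("cuda", torch.float16)])
    def test_int_mm(self, device, dtype):
        # bug: this assignment shadows the parametrized dtype, so the
        # float16 variant still runs with bfloat16 inputs
        dtype = torch.bfloat16
        x = torch.randn(128, 64, dtype=dtype, device=device)
        ...
```

The fix is simply to delete the `dtype = torch.bfloat16` line so the value supplied by the decorator flows through.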
Removing the three `dtype = torch.bfloat16` overrides also exposes a hardcoded assertion at the bottom of `test_int_scaled_mm`: `out32_1.dtype` follows `scales.dtype`, which follows the parametrized `dtype`. Fixed to assert against the parameter:
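A sketch of the assertion change (`out32_1` is the existing variable in the test; the exact form of the old hardcoded check is assumed from the description):

```python
# before: only correct for the bfloat16 variant
assert out32_1.dtype == torch.bfloat16

# after: follows the parametrized dtype, which scales.dtype also follows
assert out32_1.dtype == dtype
```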
**Bug 2: Wrong variable in error message in `test/integration/test_integration.py`**
In `test_save_load_qtensors` (around line 641):
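A paraphrase of the current assertion, assuming it follows the same SQNR-threshold pattern as the earlier check in the test; the exact comparison and message text in the file may differ:

```python
# ref_f, test, ref_q, min_sqnr, and SQNR all come from the surrounding test
assert SQNR(ref_f, test) > min_sqnr, (
    f"sqnr: {SQNR(ref_f, ref_q)} is too low"  # bug: message uses ref_q, not test
)
```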
The assertion compares `ref_f` against `test`, but the failure message reports `SQNR(ref_f, ref_q)`, i.e. the SQNR between the float reference and the compiled quantized reference computed earlier in the test, not between the float reference and the actual loaded model output. When the assertion fails, the printed value is meaningless for debugging.
Fixed to print `SQNR(ref_f, test)`. (The earlier `assert SQNR(ref_f, ref_q) > min_sqnr` block at ~line 619 is already self-consistent and is left untouched.)
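The corrected message, continuing the paraphrased form above:

```python
assert SQNR(ref_f, test) > min_sqnr, (
    f"sqnr: {SQNR(ref_f, test)} is too low"  # now reports the pair that was actually compared
)
```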
**Test plan**

- Removing the `dtype` overrides means the `float16` parametrize branches now actually execute as float16. If any of those branches were previously hiding a real failure, this PR will surface it; please re-run the relevant CI.
- `test_int_scaled_mm`'s output dtype follows `scales.dtype`, which is now correctly the parametrized `dtype`; the new assertion `out32_1.dtype == dtype` matches.

🤖 Generated with Claude Code