
Commit f7b1985

lizamd, Li, and claude authored and committed
Use Float8TrainingOpConfig instead of removed FP8GroupedMMConfig alias (pytorch#2573)
## Summary

`FP8GroupedMMConfig` was a temporary backward-compatibility alias in torchao that has been removed in pytorch/ao#4069. This PR updates torchtitan to use the canonical `Float8TrainingOpConfig` name directly.

## Change

One-line rename in `torchtitan/components/quantization/float8.py`:

- `FP8GroupedMMConfig` → `Float8TrainingOpConfig` (import + usage)

## Test plan

- No behavior change: `FP8GroupedMMConfig` was an alias for `Float8TrainingOpConfig` with identical defaults.
- Existing MoE FP8 training tests cover this code path.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Li <lizli102@ctr2-alola-ctrl-01.amd.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
1 parent cf77402

1 file changed: `torchtitan/components/quantization/float8.py` (2 additions, 2 deletions)
```diff
@@ -274,7 +274,7 @@ def convert(self, model: nn.Module):
         from torchao.quantization.quant_api import quantize_

         try:
-            from torchao.prototype.moe_training.config import FP8GroupedMMConfig
+            from torchao.prototype.moe_training.config import Float8TrainingOpConfig
         except ImportError as e:
             raise ImportError(
                 "torchao installation does not have MoE training support. Please install torchao nightly build."
@@ -293,7 +293,7 @@ def moe_module_filter_fn(mod: nn.Module, cur_fqn: str) -> bool:
             model, ["_init_mean", "_init_std"], nn_module_cls=nn.Linear
         )

-        config = FP8GroupedMMConfig()
+        config = Float8TrainingOpConfig()
         quantize_(model, config=config, filter_fn=moe_module_filter_fn)

         # Re-inject Linear protocol and re-attach attrs
```
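For context, below is a minimal sketch of how the renamed config flows into `quantize_`. It assumes a torchao nightly that ships `Float8TrainingOpConfig`; the toy model and filter function are hypothetical placeholders, not torchtitan's actual MoE wiring, which selects grouped-GEMM expert modules by fully qualified name.

```python
# Minimal sketch, assuming a torchao nightly with MoE training support.
# The toy model and filter below are hypothetical stand-ins; torchtitan's
# real filter targets MoE expert modules by their FQN.
import torch.nn as nn
from torchao.quantization.quant_api import quantize_
from torchao.prototype.moe_training.config import Float8TrainingOpConfig

# Hypothetical stand-in for an MoE expert stack.
model = nn.Sequential(nn.Linear(128, 256), nn.Linear(256, 128))

def moe_module_filter_fn(mod: nn.Module, cur_fqn: str) -> bool:
    # Hypothetical filter: convert every Linear module. torchtitan's
    # version inspects cur_fqn to pick out only expert parameters.
    return isinstance(mod, nn.Linear)

# Same defaults the removed FP8GroupedMMConfig alias forwarded to.
config = Float8TrainingOpConfig()
quantize_(model, config=config, filter_fn=moe_module_filter_fn)
```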
