feat(sparsity): Add FisherPruner — FIM-guided weight pruning with calibration-based eFIM accumulation (#4352)
Conversation
Introduces `FisherPruner`, a new `BaseSparsifier` subclass that prunes weights using the diagonal of the empirical Fisher Information Matrix (eFIM) rather than weight magnitude or activation norms. Weights with low eFIM scores (low mean squared gradient) are pruned first, since their removal causes the smallest expected increase in loss.

Core algorithm (see the end-to-end sketch after this summary):

- `prepare()`: attaches `FakeSparsity` parametrizations + `PerChannelNormObserver` (consistent with the `WandaSparsifier` API).
- `accumulate_fim()`: accumulates squared gradients (the eFIM diagonal) across calibration batches; call after `loss.backward()` and before `zero_grad()`.
- `update_mask()`: prunes the lowest-eFIM-score weights; falls back to magnitude pruning with a warning if no calibration data was provided.
- `squash_mask()`: removes parametrizations and clears FIM state.

Supports unstructured sparsity (arbitrary `sparsity_level`) and semi-structured 2:N sparsity via `semi_structured_block_size`.

Tests: 14 unit tests covering construction, `prepare`, `accumulate_fim`, fallback behaviour, known-weight correctness, 2:4 semi-structured sparsity, and per-layer custom config.

Signed-off-by: Ramakrishnan Sathyavageeswaran <ramkrishs@outlook.com>
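A minimal end-to-end sketch of this flow (method names are from the summary above; the toy model, loss function, and calibration data are placeholders, and the exact constructor signature is an assumption):

```python
import torch
from torchao.sparsity import FisherPruner  # added by this PR

model = torch.nn.Sequential(torch.nn.Linear(8, 8))  # toy stand-in model
calibration_loader = [(torch.randn(16, 8), torch.randn(16, 8)) for _ in range(4)]
loss_fn = torch.nn.functional.mse_loss

pruner = FisherPruner(sparsity_level=0.5)
pruner.prepare(model, config=None)  # attach FakeSparsity parametrizations

# Calibration: accumulate the eFIM diagonal from squared gradients.
for inputs, targets in calibration_loader:
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    pruner.accumulate_fim()  # after backward(), before zero_grad()
    model.zero_grad()

pruner.step()         # runs update_mask(): prune lowest-eFIM weights
pruner.squash_mask()  # remove parametrizations and clear FIM state
```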
Hi @ramkrishs! Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with `CLA Signed`.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
jerryzh168 left a comment:
can you put this in https://github.com/pytorch/ao/tree/main/torchao/prototype/sparsity
also please create a readme to talk about things in summary + show some e2e model result (accuracy impact) on a popular model
Summary
This PR introduces `FisherPruner`, a new `BaseSparsifier` subclass that prunes neural network weights using the diagonal of the empirical Fisher Information Matrix (eFIM) rather than weight magnitude alone or the Wanda activation-norm criterion.

Motivation
Existing sparsifiers in torchao use either magnitude (`WeightNormSparsifier`) or weight × activation-norm (`WandaSparsifier`) as the pruning criterion. The Fisher Information Matrix provides a principled, loss-aware alternative: a parameter's eFIM diagonal entry approximates the expected curvature of the loss w.r.t. that parameter. Pruning low-eFIM weights minimises the expected increase in loss, a well-known result from Optimal Brain Damage (LeCun et al., 1990) and Optimal Brain Surgeon (Hassibi & Stork, 1993).

Algorithm
The diagonal eFIM is approximated empirically as the mean squared gradient across calibration batches:
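$$\hat{F}_{ii} \;=\; \frac{1}{N} \sum_{n=1}^{N} \left( \frac{\partial \mathcal{L}(x_n; \theta)}{\partial \theta_i} \right)^{2}$$

where $\theta_i$ is the $i$-th weight, $\mathcal{L}(x_n; \theta)$ the loss on calibration batch $x_n$, and $N$ the number of calibration batches.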
Weights with the lowest eFIM scores are pruned first — their removal has the smallest expected impact on the loss.
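A minimal sketch of what this accumulation implies, assuming per-parameter state keyed by name (`fim_state` and `num_batches` are hypothetical names, not necessarily the PR's internals):

```python
import torch

fim_state: dict[str, torch.Tensor] = {}  # running sum of squared gradients
num_batches = 0

def accumulate_fim(model: torch.nn.Module) -> None:
    """Call after loss.backward() and before zero_grad()."""
    global num_batches
    for name, param in model.named_parameters():
        if param.grad is None:
            continue  # skip parameters that received no gradient
        sq_grad = param.grad.detach() ** 2
        if name not in fim_state:
            fim_state[name] = torch.zeros_like(sq_grad)
        fim_state[name] += sq_grad
    num_batches += 1

# eFIM diagonal estimate after calibration: fim_state[name] / num_batches
```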
API (mirrors `WandaSparsifier`)
Supports:

- `sparsity_level` (0–1): fraction of weights to prune for unstructured sparsity
- `semi_structured_block_size`: semi-structured 2:N sparsity
- `config=[{"tensor_fqn": "layer.weight"}]`: per-layer custom config
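A hedged sketch of how these options are used (exact constructor signature is an assumption based on the `WandaSparsifier`-style API described above; `Net` is a placeholder model with a submodule named `layer`):

```python
import torch
from torchao.sparsity import FisherPruner

class Net(torch.nn.Module):  # placeholder model matching the "layer.weight" fqn
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 8)

# Unstructured: prune 75% of each target weight tensor.
pruner = FisherPruner(sparsity_level=0.75)

# Semi-structured 2:N, e.g. block size 4 for a 2:4 pattern.
pruner = FisherPruner(semi_structured_block_size=4)

# Per-layer config: only the named tensor is sparsified.
pruner.prepare(Net(), config=[{"tensor_fqn": "layer.weight"}])
```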
| File | Change |
| --- | --- |
| `torchao/sparsity/fisher_pruner.py` | New `FisherPruner` class (≈220 lines) |
| `torchao/sparsity/__init__.py` | Export `FisherPruner` |
| `test/sparsity/test_fisher_pruner.py` | New unit tests |
Covers: construction validation, `prepare` parametrization, `accumulate_fim` accumulation, no-gradient safety, `squash_mask` cleanup, unstructured sparsity level correctness, known-weight pruning direction, 2:4 semi-structured sparsity, fallback-to-magnitude warning, and per-layer custom config.
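For illustration, a sketch of what the sparsity-level check above might look like (test body and exact assertion are hypothetical; mask access assumes the torch.ao `FakeSparsity` parametrization convention):

```python
import torch
from torchao.sparsity import FisherPruner

def test_unstructured_sparsity_level():
    torch.manual_seed(0)
    model = torch.nn.Sequential(torch.nn.Linear(8, 8))
    pruner = FisherPruner(sparsity_level=0.5)
    pruner.prepare(model, config=None)

    # A few calibration batches to populate the eFIM state.
    for _ in range(4):
        model(torch.randn(16, 8)).pow(2).mean().backward()
        pruner.accumulate_fim()
        model.zero_grad()

    pruner.step()  # applies update_mask()
    mask = model[0].parametrizations.weight[0].mask
    assert mask.float().mean().item() == 0.5  # half the weights are masked
```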
References

- LeCun, Y., Denker, J. S., & Solla, S. A. (1990). Optimal Brain Damage. NeurIPS.
- Hassibi, B., & Stork, D. G. (1993). Second Order Derivatives for Network Pruning: Optimal Brain Surgeon. NeurIPS.

By submitting this pull request, I confirm that my contribution is made under the terms of the BSD 3-Clause License.