
feat(sparsity): Add FisherPruner — FIM-guided weight pruning with calibration-based eFIM accumulation #4352

Open

ramkrishs wants to merge 1 commit into pytorch:main from ramkrishs:feat/fisher-pruner-fim-guided-sparsity

Conversation

@ramkrishs

Summary

This PR introduces FisherPruner, a new BaseSparsifier subclass that prunes neural network weights using the diagonal of the empirical Fisher Information Matrix (eFIM) rather than weight magnitude alone or the Wanda activation-norm criterion.

Motivation

Existing sparsifiers in torchao use either magnitude (WeightNormSparsifier) or weight × activation-norm (WandaSparsifier) as the pruning criterion. The Fisher Information Matrix provides a principled, loss-aware alternative: a parameter's eFIM diagonal entry approximates the expected curvature of the loss w.r.t. that parameter. Pruning low-eFIM weights minimises the expected increase in loss, a well-known result from Optimal Brain Damage (LeCun et al., 1990) and Optimal Brain Surgeon (Hassibi & Stork, 1993).

Algorithm

The diagonal eFIM is approximated empirically as the mean squared gradient across calibration batches:

$$F_{ii} \approx \frac{1}{T} \sum_{t=1}^{T} \left(\frac{\partial \ell_t}{\partial \theta_i}\right)^2$$

Weights with the lowest eFIM scores are pruned first — their removal has the smallest expected impact on the loss.
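
For intuition, here is a minimal sketch of the accumulation and mask-selection steps (a standalone illustration, not the actual FisherPruner internals; the function names and single-tensor setup are assumptions):

import torch

def accumulate_squared_grads(weight: torch.Tensor, fim: torch.Tensor) -> None:
    # Running sum of squared gradients; dividing by T later does not change
    # the ranking, so the mean can be deferred or skipped for mask selection.
    if weight.grad is not None:
        fim += weight.grad.detach() ** 2

def fim_mask(fim: torch.Tensor, sparsity_level: float) -> torch.Tensor:
    # Keep the highest-eFIM weights; zero out the sparsity_level fraction
    # with the lowest scores (smallest expected loss increase when removed).
    num_prune = int(sparsity_level * fim.numel())
    mask = torch.ones_like(fim, dtype=torch.bool)
    if num_prune > 0:
        prune_idx = torch.topk(fim.flatten(), num_prune, largest=False).indices
        mask.view(-1)[prune_idx] = False
    return mask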

API (mirrors WandaSparsifier)

from torchao.sparsity import FisherPruner

# model, criterion, and calibration_loader are assumed to be defined
pruner = FisherPruner(sparsity_level=0.5)
pruner.prepare(model, config=None)

# Calibration: accumulate FIM statistics over representative data
for X, y in calibration_loader:
    loss = criterion(model(X), y)
    loss.backward()
    pruner.accumulate_fim()   # accumulate squared gradients
    model.zero_grad()

pruner.step()         # apply FIM-guided masks
pruner.squash_mask()  # finalise: remove parametrizations

Supports:

  • Unstructured sparsity — arbitrary sparsity_level (0–1)
  • Semi-structured 2:N sparsity — via semi_structured_block_size
  • Per-layer config — config=[{"tensor_fqn": "layer.weight"}] (usage sketch for these two options after this list)
  • Graceful fallback — warns and falls back to magnitude pruning if no calibration data is provided
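
A hedged usage sketch for the semi-structured and per-layer options (the keyword arguments mirror the summary above; the tensor FQNs fc1.weight / fc2.weight are illustrative):

from torchao.sparsity import FisherPruner

# 2:4 semi-structured sparsity: within each block of 4 weights, keep the 2
# with the highest eFIM scores (sparsity_level 0.5 matches the 2:4 pattern).
pruner = FisherPruner(sparsity_level=0.5, semi_structured_block_size=4)

# Per-layer config: only the listed tensors are parametrized and pruned.
pruner.prepare(model, config=[
    {"tensor_fqn": "fc1.weight"},
    {"tensor_fqn": "fc2.weight"},
])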

Files changed

File                                  Change
torchao/sparsity/fisher_pruner.py     New FisherPruner class (≈220 lines)
torchao/sparsity/__init__.py          Export FisherPruner
test/sparsity/test_fisher_pruner.py   14 unit tests

Tests

Ran 14 tests in 0.070s — OK

Covers: construction validation, prepare parametrization, accumulate_fim accumulation, no-gradient safety, squash_mask cleanup, unstructured sparsity level correctness, known-weight pruning direction, 2:4 semi-structured sparsity, fallback-to-magnitude warning, and per-layer custom config.
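
As an illustration of the known-weight pruning-direction check, a hedged sketch (names and values are hypothetical, not copied from test_fisher_pruner.py):

import torch
import torch.nn as nn
from torchao.sparsity import FisherPruner

def test_known_weight_pruning_direction():
    # Single linear layer, weight shape (1, 4). The last two input features
    # are always zero, so their weights receive zero gradient (zero eFIM)
    # and should be the ones pruned at 50% sparsity.
    model = nn.Linear(4, 1, bias=False)
    pruner = FisherPruner(sparsity_level=0.5)
    pruner.prepare(model, config=None)

    x = torch.tensor([[1.0, 1.0, 0.0, 0.0]])
    loss = model(x).sum()
    loss.backward()
    pruner.accumulate_fim()
    model.zero_grad()

    pruner.step()
    pruner.squash_mask()
    assert torch.all(model.weight[0, 2:] == 0)
    assert torch.all(model.weight[0, :2] != 0)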

References

  • LeCun et al., Optimal Brain Damage, NeurIPS 1990
  • Hassibi & Stork, Optimal Brain Surgeon, NeurIPS 1993
  • Singh & Alistarh, WoodFisher: Efficient Second-Order Approximation for Neural Network Compression, NeurIPS 2020

By submitting this pull request, I confirm that my contribution is made under the terms of the BSD 3-Clause License.

Introduces FisherPruner, a new BaseSparsifier subclass that prunes weights
using the diagonal of the empirical Fisher Information Matrix (eFIM) rather
than weight magnitude or activation norms.  Weights with low mean squared
gradients (low eFIM score) are pruned first — their removal causes the
smallest expected increase in loss.

Core algorithm:
- prepare(): attaches FakeSparsity parametrizations + PerChannelNormObserver
  (consistent with WandaSparsifier API).
- accumulate_fim(): accumulates squared gradients (eFIM diagonal) across
  calibration batches; call after loss.backward() before zero_grad().
- update_mask(): prunes lowest-FIM-score weights; falls back to magnitude
  pruning with a warning if no calibration data was provided.
- squash_mask(): removes parametrizations and clears FIM state.

Supports unstructured sparsity (arbitrary sparsity_level) and semi-structured
2:N sparsity via semi_structured_block_size.

Tests: 14 unit tests covering construction, prepare, accumulate_fim, fallback
behaviour, known-weight correctness, 2:4 semi-structured sparsity, and
per-layer custom config.

Signed-off-by: Ramakrishnan Sathyavageeswaran <ramkrishs@outlook.com>
@pytorch-bot

pytorch-bot Bot commented Apr 28, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/4352

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla

meta-cla Bot commented Apr 28, 2026

Hi @ramkrishs!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

@meta-cla

meta-cla Bot commented Apr 28, 2026

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

meta-cla Bot added the CLA Signed label Apr 28, 2026

@jerryzh168 (Contributor) left a comment



Also, please create a README covering the points in the summary, and show some e2e model results (accuracy impact) on a popular model.
