FMA-Net++

¹Korea Advanced Institute of Science and Technology (KAIST), South Korea
²Chung-Ang University, South Korea
Co-corresponding authors

This repository is the official implementation of "FMA-Net++: Motion- and Exposure-Aware Real-World Joint Video Super-Resolution and Deblurring".

demo.mp4

👆 For user-interactive comparisons and more results, please visit our Project Page.

📧 News

  • Dec 04, 2025: Repository created.

📖 Abstract

Real-world video restoration is plagued by complex degradations from motion coupled with dynamically varying exposure—a key challenge largely overlooked by prior works. We present FMA-Net++, a framework for joint video super-resolution and deblurring (VSRDB) that explicitly models this coupled effect.

FMA-Net++ adopts a sequence-level architecture built from Hierarchical Refinement with Bidirectional Propagation (HRBP) blocks for parallel, long-range temporal modeling. It incorporates an Exposure Time-aware Modulation (ETM) layer and an exposure-aware Flow-Guided Dynamic Filtering (FGDF) module to infer physically grounded degradation kernels. Extensive experiments on our proposed REDS-ME and REDS-RE benchmarks demonstrate that FMA-Net++ achieves state-of-the-art performance.
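The code is not yet released, so the exact FGDF implementation is not public. As an illustration only, the core idea of dynamic filtering — predicting a distinct small kernel for every spatial location and applying it to the local neighborhood — can be sketched as below. The function name and the plain numpy formulation are our own assumptions, not the authors' implementation.

```python
import numpy as np

def apply_dynamic_filters(feat, kernels):
    """Illustrative per-pixel dynamic filtering (not the official FGDF code).

    feat:    (H, W) input feature map
    kernels: (H, W, k*k) predicted filter weights, one k x k kernel per pixel
    """
    H, W, kk = kernels.shape
    k = int(np.sqrt(kk))
    pad = k // 2
    # Edge-pad so every pixel has a full k x k neighborhood.
    padded = np.pad(feat, pad, mode="edge")
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # Dot product of the local patch with that pixel's own kernel.
            patch = padded[i:i + k, j:j + k].ravel()
            out[i, j] = patch @ kernels[i, j]
    return out
```

In FMA-Net++ the predicted kernels are additionally flow-guided and exposure-aware; this sketch shows only the location-varying filtering step itself.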

🖼️ Method Overview

FMA-Net++ utilizes HRBP blocks for efficient temporal modeling and ETM layers to explicitly handle dynamic exposure changes.
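Since the ETM layer's code is not yet public, the sketch below shows one common way such conditioning is done: FiLM-style modulation, where a scalar exposure time is mapped through a small network to per-channel scale and shift parameters. The function name, network shape, and weights here are our assumptions for illustration, not the paper's actual layer.

```python
import numpy as np

def exposure_modulation(feat, exposure, w1, w2):
    """Hypothetical FiLM-style exposure conditioning (not the official ETM code).

    feat:     (C, H, W) feature map
    exposure: scalar exposure time, e.g. normalized to [0, 1]
    w1:       (hidden,) first-layer weights of a tiny conditioning MLP
    w2:       (2C, hidden) second-layer weights producing scale and shift
    """
    h = np.tanh(w1 * exposure)        # embed the scalar exposure time
    params = w2 @ h                   # (2C,) -> per-channel gamma and beta
    C = feat.shape[0]
    gamma, beta = params[:C], params[C:]
    # Residual-style modulation: identity when gamma = beta = 0.
    return feat * (1.0 + gamma[:, None, None]) + beta[:, None, None]
```

The residual form means an uninformative conditioning network leaves the features unchanged, which is a common design choice for modulation layers.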

Architecture

HRBP

🚀 Code Release Plan

The full code and pretrained models will be released soon.

  • Inference code
  • Pretrained models
  • Training scripts
  • Dataset generation scripts

📬 Contact

For any questions, please contact us at rmsgurkjg@kaist.ac.kr.