
[ACL 2026 Main] VL-Calibration: Decoupled Confidence Calibration for Large Vision-Language Models Reasoning



Overview

VL-Calibration is a framework for improving the calibration and reasoning of large vision-language models (LVLMs) via decoupled verbalized confidence.
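Calibration here means that a model's stated confidence should match its empirical accuracy. A common way to quantify miscalibration is the expected calibration error (ECE); the sketch below is purely illustrative and is not part of this repository's code.

```python
# Illustrative ECE computation (hypothetical helper, not from VL-Calibration):
# bin predictions by confidence, then average |accuracy - confidence| per bin,
# weighted by the fraction of samples in each bin.
def expected_calibration_error(confidences, correct, n_bins=10):
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # half-open bins (lo, hi]; put confidence 0.0 in the first bin
        in_bin = [i for i, c in enumerate(confidences)
                  if lo < c <= hi or (b == 0 and c == 0.0)]
        if not in_bin:
            continue
        acc = sum(correct[i] for i in in_bin) / len(in_bin)
        conf = sum(confidences[i] for i in in_bin) / len(in_bin)
        ece += len(in_bin) / n * abs(acc - conf)
    return ece
```

A perfectly calibrated model (e.g. 100% confidence on answers that are all correct) scores an ECE of 0, while a model that is 95% confident on a wrong answer scores close to 0.95.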


Installation

# Clone the repository
git clone https://github.com/Mr-Loevan/VL-Calibration.git
cd VL-Calibration

# Create conda environment
conda create -n vl_calib python=3.11
conda activate vl_calib

# Install dependencies (refer to the EasyR1 installation guide)
pip install -r requirements.txt
pip install -e .

Quick Start

# Download the VL-Calibration-12K dataset, then run decoupled calibration training
bash examples/decouple.sh

Citation

If you find this work useful, please cite:

@misc{xiao2026vlcalibration,
      title={VL-Calibration: Decoupled Confidence Calibration for Large Vision-Language Models Reasoning},
      author={Wenyi Xiao and Xinchi Xu and Leilei Gan},
      year={2026},
      eprint={2604.09529},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2604.09529},
}

License

This project is licensed under the Apache 2.0 License.

Acknowledgments

  • Built on top of veRL and EasyR1, efficient and scalable multi-modality RL training frameworks.
