---
library_name: transformers
license: mit
datasets:
  - array/SAT
  - Video-R1/Video-R1-data
base_model:
  - Qwen/Qwen2.5-VL-7B-Instruct
---

Model Card for Qwen2.5-VL-SAT

A strong spatial-reasoning baseline built on Qwen2.5-VL-7B-Instruct.

Post-trained on SAT, Zebra-CoT, and Video-R1, using only the answers from Video-R1. The training mix is 40% SAT, 20% Zebra-CoT, and 20% Video-R1.

Get Started

% pip install git+https://github.com/huggingface/transformers accelerate
% pip install "qwen-vl-utils[decord]==0.0.8"

from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

model_id = "array/Qwen2.5-VL-SAT"

# Load the post-trained checkpoint and its matching processor.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)
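
To run inference, the snippet below is a minimal single-image sketch following the standard Qwen2.5-VL usage pattern with qwen-vl-utils (installed above). The image URL and question are placeholders; substitute your own inputs.

from qwen_vl_utils import process_vision_info

# Placeholder image URL and spatial question.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/scene.jpg"},
            {"type": "text", "text": "Which object is closer to the camera?"},
        ],
    }
]

# Build the chat prompt and collect the vision inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
generated_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])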

Please see the paper for details on training and evaluation datasets and metrics.

Results

| Model | MV | RelDep | SpRel | Jig | IQT | BLINK Avg | BLINK Reas | SAT-R | VSI Avg | VSI Reas | ERQA | Avg (All) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-VL (7B) | 39.00 | 61.29 | 92.38 | 58.66 | 25.33 | 55.33 | 41.00 | 59.00 | 23.96 | 22.96 | 38.91 | 44.30 |
| + SAT | 57.14 | 87.09 | 74.12 | 58.66 | 30.00 | 61.40 | 48.60 | 71.66 | 32.40 | 30.65 | 38.00 | 50.87 |

Citation

@misc{ray2025satdynamicspatialaptitude,
      title={SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models}, 
      author={Arijit Ray and Jiafei Duan and Ellis Brown and Reuben Tan and Dina Bashkirova and Rose Hendrix and Kiana Ehsani and Aniruddha Kembhavi and Bryan A. Plummer and Ranjay Krishna and Kuo-Hao Zeng and Kate Saenko},
      year={2025},
      eprint={2412.07755},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.07755}, 
}