
UniPercept: Towards Unified Perceptual-Level Image Understanding across Aesthetics, Quality, Structure, and Texture

arXiv | Website | Model | Dataset
Shuo Cao*, Jiayang Li*, Xiaohui Li, Yuandong Pu, Kaiwen Zhu, Yuanting Gao, Siqi Luo, Yi Xin, Qi Qin, Yu Zhou, Xiangyu Chen, Wenlong Zhang, Bin Fu, Yu Qiao, Yihao Liu†
University of Science and Technology of China   Shanghai AI Laboratory   Peking University
* Equal contribution   † Corresponding author

Dataset Distribution


🚀 News & Updates

  • [Dec 29, 2025] 🔥 Official Release
    • Technical Report
    • Project Page
    • UniPercept-Bench: A comprehensive evaluation suite for perceptual-level MLLMs, spanning Image Aesthetics Assessment (IAA), Image Quality Assessment (IQA), and Image Structure & Texture Assessment (ISTA) across Visual Rating (VR) and Visual Question Answering (VQA) tasks.
    • UniPercept: A powerful baseline MLLM specialized for perceptual image understanding, optimized via Domain-Adaptive Pre-Training and Task-Aligned RL.

🌟 Abstract

Multimodal large language models (MLLMs) have achieved remarkable progress in visual understanding tasks such as visual grounding, segmentation, and captioning. However, their ability to perceive perceptual-level image features remains limited. In this work, we present UniPercept-Bench, a unified framework for perceptual-level image understanding across three key domains: Aesthetics, Quality, and Structure & Texture. We establish a hierarchical definition system and construct large-scale datasets to evaluate perceptual-level image understanding. On this foundation, we develop a strong baseline, UniPercept, trained via Domain-Adaptive Pre-Training and Task-Aligned RL, which generalizes robustly across both Visual Rating (VR) and Visual Question Answering (VQA) tasks. UniPercept outperforms existing MLLMs on perceptual-level image understanding and can serve as a plug-and-play reward model for text-to-image generation. This work defines Perceptual-Level Image Understanding in the era of MLLMs and, by introducing a comprehensive benchmark together with a strong baseline, provides a solid foundation for advancing perceptual-level multimodal image understanding.
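
As a quick-start reference, here is a minimal inference sketch. It assumes the checkpoint loads through the standard Hugging Face transformers vision-language path with trust_remote_code and exposes the usual chat-template processor API; the prompt wording and generation settings are illustrative, so consult the official repository for exact usage.

```python
# Minimal inference sketch (assumptions: standard transformers
# vision-language loading path and chat template; verify against the
# official UniPercept repository).
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "Thunderbolt215215/UniPercept"

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # the released weights are BF16
    device_map="auto",
    trust_remote_code=True,
)

image = Image.open("example.jpg")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text",
         "text": "Rate the aesthetic quality of this image on a 1-10 "
                 "scale and briefly justify the score."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output[0], skip_special_tokens=True))
```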

📊 UniPercept-Bench

We introduce UniPercept-Bench, a systematic benchmark for perceptual image understanding:

  • Comprehensive Coverage: Spans 3 domains (IAA, IQA, ISTA), 17 categories, and 43 criteria.

  • Perceptual Tasks: Supports both Visual Rating (VR) and Visual Question Answering (VQA). A minimal VR scoring sketch follows the figures below.

Figures: performance on UniPercept-Bench-VR, and on UniPercept-Bench-VQA for the IAA, IQA, and ISTA domains.
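
The VR track compares model ratings against human scores. A common convention for such rating benchmarks (assumed here, not stated above) is to report Spearman (SRCC) and Pearson (PLCC) correlation, as in the sketch below with placeholder numbers.

```python
# Hypothetical VR scoring sketch: compare model ratings against human
# mean opinion scores with the standard correlation measures.
# Assumption: UniPercept-Bench-VR is scored with SRCC/PLCC, as is
# conventional for rating benchmarks; the data below is made up.
from scipy.stats import pearsonr, spearmanr

model_scores = [7.2, 4.5, 8.1, 3.3, 6.0]   # placeholder predictions
human_scores = [7.0, 5.0, 8.5, 3.0, 6.5]   # placeholder ground truth

srcc, _ = spearmanr(model_scores, human_scores)
plcc, _ = pearsonr(model_scores, human_scores)
print(f"SRCC = {srcc:.3f}, PLCC = {plcc:.3f}")
```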

🎨 Applications

UniPercept as Reward

UniPercept can be used as a powerful reward model for post-training Text-to-Image (T2I) models. By integrating UniPercept rewards into the training of FLUX.1-dev, we observe significant improvements in aesthetic quality, structural richness, and prompt adherence.
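
A lightweight way to obtain such a reward is to parse a scalar score from the model's text answer, sketched below. `query_unipercept` is a hypothetical wrapper around the generate() call from the inference sketch above, and best-of-N selection is shown as the simplest consumer of the reward (full RL post-training of FLUX.1-dev is a heavier pipeline).

```python
# Hypothetical reward sketch: parse a scalar score from UniPercept's
# text answer and use it for best-of-N selection over T2I samples.
# `query_unipercept(image, prompt)` is a made-up wrapper around the
# generate() call in the inference sketch above.
import re

SCORE_PROMPT = "Rate the overall aesthetic quality of this image from 1 to 10."

def reward(image) -> float:
    """Use the first number in UniPercept's answer as the scalar reward."""
    answer = query_unipercept(image, SCORE_PROMPT)  # hypothetical wrapper
    match = re.search(r"\d+(?:\.\d+)?", answer)
    return float(match.group()) if match else 0.0

def best_of_n(candidates):
    """Keep the highest-reward image among N samples from a T2I model."""
    return max(candidates, key=reward)
```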


UniPercept as Metrics

UniPercept can serve as a perceptual-level metric that assesses the outputs of any image-generating model across three complementary dimensions: IAA, IQA, and ISTA.
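
As a sketch of the metric use case, one can prompt the model once per dimension and average over a batch of generated images. The prompts below are illustrative, and `score_image` is a hypothetical helper combining the query and score parsing from the reward sketch above.

```python
# Hypothetical metric sketch: score a folder of generated images on the
# three dimensions by prompting once per axis and averaging the results.
# `score_image(path, prompt)` is a made-up helper combining the query
# and number parsing from the reward sketch above.
from pathlib import Path
from statistics import mean

PROMPTS = {
    "IAA":  "Rate the aesthetic quality of this image from 1 to 10.",
    "IQA":  "Rate the technical quality of this image from 1 to 10.",
    "ISTA": "Rate the structural and textural richness of this image from 1 to 10.",
}

def evaluate_outputs(image_dir: str) -> dict:
    """Average per-dimension scores over every PNG in a folder."""
    paths = sorted(Path(image_dir).glob("*.png"))
    return {
        dim: mean(score_image(p, prompt) for p in paths)  # hypothetical helper
        for dim, prompt in PROMPTS.items()
    }
```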


    πŸ–ΌοΈ UniPercept-Constructed Image Profiles

    UniPercept performs comprehensive perceptual-level image analysis, delivering accurate visual ratings across the IAA, IQA, and ISTA dimensions, along with fine-grained multi-dimensional analytical outputs that together form a detailed image profile.
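
Assembling such a profile programmatically might look like the sketch below. The JSON field names and analysis prompt are illustrative rather than the model's actual output schema, and `score_image`/`query_unipercept` are the hypothetical helpers from the earlier sketches.

```python
# Hypothetical image-profile sketch: pair each dimension's numeric rating
# with a free-form analysis and serialize the result as JSON. Field names
# and prompts are illustrative; `score_image`, `query_unipercept`, and
# PROMPTS are the made-up pieces from the sketches above.
import json

def build_profile(image_path: str) -> str:
    profile = {}
    for dim, rating_prompt in PROMPTS.items():
        profile[dim] = {
            "rating": score_image(image_path, rating_prompt),
            "analysis": query_unipercept(
                image_path,
                f"Analyze the {dim} aspects of this image in detail.",
            ),
        }
    return json.dumps(profile, indent=2)
```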


✏️ Citation

If you find UniPercept useful for your research, please consider citing our work:

    @misc{cao2025uniperceptunifiedperceptuallevelimage,
          title={UniPercept: Towards Unified Perceptual-Level Image Understanding across Aesthetics, Quality, Structure, and Texture}, 
          author={Shuo Cao and Jiayang Li and Xiaohui Li and Yuandong Pu and Kaiwen Zhu and Yuanting Gao and Siqi Luo and Yi Xin and Qi Qin and Yu Zhou and Xiangyu Chen and Wenlong Zhang and Bin Fu and Yu Qiao and Yihao Liu},
          year={2025},
          eprint={2512.21675},
          archivePrefix={arXiv},
          primaryClass={cs.CV},
          url={https://arxiv.org/abs/2512.21675}, 
    }
    
    @misc{cao2025artimusefinegrainedimageaesthetics,
          title={ArtiMuse: Fine-Grained Image Aesthetics Assessment with Joint Scoring and Expert-Level Understanding}, 
          author={Shuo Cao and Nan Ma and Jiayang Li and Xiaohui Li and Lihao Shao and Kaiwen Zhu and Yu Zhou and Yuandong Pu and Jiarui Wu and Jiaquan Wang and Bo Qu and Wenhai Wang and Yu Qiao and Dajuin Yao and Yihao Liu},
          year={2025},
          eprint={2507.14533},
          archivePrefix={arXiv},
          primaryClass={cs.CV},
          url={https://arxiv.org/abs/2507.14533}, 
    }
    