TPGDiff: Hierarchical Triple-Prior Guided Diffusion for Image Restoration

1Northwestern Polytechnical University
2Shenzhen Research Institute of Northwestern Polytechnical University

*Corresponding Author

Interactive Comparisons

Click a thumbnail to switch degradation types. Drag the slider to compare the LQ input with our restored result.

TPGDiff teaser figure

(a) Existing methods inject prior information uniformly into the diffusion model, whereas our approach adopts a hierarchical strategy, distributing distinct priors across specific layers of the network. (b) The outputs of diffusion models are largely governed by representations encoded in the deep layers of the network, which dominate the final reconstruction.

Abstract

All-in-one image restoration aims to address diverse degradation types using a single unified model. Existing methods typically rely on degradation priors to guide restoration, yet often struggle to reconstruct content in severely degraded regions. Although recent works leverage semantic information to facilitate content generation, integrating it into the shallow layers of diffusion models often disrupts spatial structures (e.g., blurring artifacts). To address this issue, we propose a Triple-Prior Guided Diffusion (TPGDiff) network for unified image restoration. TPGDiff incorporates degradation priors throughout the diffusion trajectory, while introducing structural priors into shallow layers and semantic priors into deep layers, enabling hierarchical and complementary prior guidance for image reconstruction. Specifically, we leverage multi-source structural cues as structural priors to capture fine-grained details and guide shallow-layer representations. To complement this design, we further develop a distillation-driven semantic extractor that yields robust semantic priors, ensuring reliable high-level guidance at deep layers even under severe degradations. Furthermore, a degradation extractor is employed to learn degradation-aware priors, enabling stage-adaptive control of the diffusion process across all timesteps. Extensive experiments on both single- and multi-degradation benchmarks demonstrate that TPGDiff achieves superior performance and generalization across diverse restoration scenarios.
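To make the hierarchical routing concrete, here is a minimal PyTorch sketch of how the three priors could be wired into a toy denoiser. This is not the authors' implementation: the module, its dimensions, and the FiLM-style semantic fusion are illustrative assumptions drawn from the description above (structural prior added to shallow features, semantic prior modulating deep features, degradation prior fused with the timestep embedding at every stage).

import torch
import torch.nn as nn

class HierarchicalPriorDenoiser(nn.Module):
    # Toy two-stage denoiser illustrating depth-specific prior injection.
    def __init__(self, ch=64, sem_dim=256, deg_dim=128):
        super().__init__()
        self.shallow = nn.Conv2d(3, ch, 3, padding=1)           # shallow (full-res) stage
        self.deep = nn.Conv2d(ch, ch, 3, stride=2, padding=1)   # deep (low-res) stage
        self.struct_adapter = nn.Conv2d(3, ch, 3, padding=1)    # structural prior -> shallow
        self.sem_film = nn.Linear(sem_dim, 2 * ch)              # semantic prior -> deep (FiLM)
        self.deg_proj = nn.Linear(deg_dim, ch)                  # degradation prior -> all stages
        self.t_proj = nn.Linear(1, ch)                          # timestep embedding
        self.head = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x_t, t, struct_map, sem_vec, deg_vec):
        # Degradation prior is fused with the timestep embedding and applied everywhere.
        cond = (self.t_proj(t[:, None].float()) + self.deg_proj(deg_vec))[:, :, None, None]
        # Shallow stage: additive structural guidance preserves spatial detail.
        h = self.shallow(x_t) + self.struct_adapter(struct_map) + cond
        # Deep stage: semantic prior modulates low-resolution features via scale/shift.
        h = self.deep(torch.relu(h)) + cond
        scale, shift = self.sem_film(sem_vec).chunk(2, dim=1)
        h = h * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return self.head(nn.functional.interpolate(torch.relu(h), scale_factor=2))

net = HierarchicalPriorDenoiser()
out = net(torch.randn(2, 3, 64, 64), torch.tensor([10, 500]),
          torch.randn(2, 3, 64, 64), torch.randn(2, 256), torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 3, 64, 64])

The split follows the intuition in the teaser: structural cues need full spatial resolution, so they enter additively at the shallow stage, while semantics are closer to a global signal and can safely modulate the downsampled deep features.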

Method Overview

Overview of the proposed TPGDiff framework

Overall architecture of TPGDiff. The framework explicitly models three types of priors from a low-quality input image and integrates them into a diffusion-based restoration network: (a) a semantic extractor that learns semantic representations via teacher–student distillation, (b) a degradation extractor that captures degradation-related characteristics, and (c) a structural adapter that injects structural priors into the diffusion model.
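As a rough illustration of the distillation scheme in (a), the sketch below aligns a trainable student encoder (which sees the degraded input) with a frozen teacher encoder (which sees the clean counterpart) using a cosine-distance loss. The tiny encoders and the loss form are hypothetical stand-ins, not the paper's actual backbones.

import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder(out_dim=256):
    # Tiny conv encoder standing in for a real semantic backbone (illustrative).
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, out_dim),
    )

teacher = make_encoder().eval()      # frozen teacher: sees the clean (HQ) image
for p in teacher.parameters():
    p.requires_grad_(False)
student = make_encoder()             # trainable student: sees the degraded (LQ) input

def distill_loss(lq, hq):
    # Pull the student's LQ embedding toward the teacher's HQ embedding.
    with torch.no_grad():
        target = F.normalize(teacher(hq), dim=-1)
    pred = F.normalize(student(lq), dim=-1)
    return (1 - (pred * target).sum(dim=-1)).mean()  # mean cosine distance

loss = distill_loss(torch.randn(4, 3, 128, 128), torch.randn(4, 3, 128, 128))
loss.backward()

Because the student never needs the clean image at inference time, distillation of this kind is one way to obtain semantic priors that stay reliable under severe degradation, as the abstract motivates.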

Quantitative Comparisons

Quantitative comparison between our method and other state-of-the-art approaches on nine degradation-specific datasets. ↑ indicates higher is better, and ↓ indicates lower is better. Best and second-best results are highlighted in red and blue, respectively.

Quantitative comparison between our method and other state-of-the-art approaches on five restoration tasks. ↑ indicates higher is better. Best and second-best results are highlighted in red and blue, respectively.

Comparison under unknown tasks setting (under-display camera image restoration) on TOLED and POLED datasets. ↑ indicates higher is better, and ↓ indicates lower is better. Best and second-best results are highlighted in red and blue, respectively.

More Qualitative Results

BibTeX

@article{TPGDiff2026,
  title   = {TPGDiff: Hierarchical Triple-Prior Guided Diffusion for Image Restoration},
  author  = {Tu, Yanjie and Yan, Qingsen and Niu, Axi and Tang, Jiacong},
  journal = {Under Review},
  year    = {2026}
}