TPGDiff: Hierarchical Triple-Prior Guided Diffusion for Image Restoration
Interactive Comparisons
Click a thumbnail to switch degradation types. Drag the slider to compare the low-quality (LQ) input with the restoration produced by our method.
Abstract
All-in-one image restoration aims to address diverse degradation types using a single unified model. Existing methods typically rely on degradation priors to guide restoration, yet often struggle to reconstruct content in severely degraded regions. Although recent works leverage semantic information to facilitate content generation, integrating it into the shallow layers of diffusion models often disrupts spatial structures (e.g., blurring artifacts). To address this issue, we propose a Triple-Prior Guided Diffusion (TPGDiff) network for unified image restoration. TPGDiff incorporates degradation priors throughout the diffusion trajectory, while introducing structural priors into shallow layers and semantic priors into deep layers, enabling hierarchical and complementary prior guidance for image reconstruction. Specifically, we leverage multi-source structural cues as structural priors to capture fine-grained details and guide shallow-layer representations. To complement this design, we further develop a distillation-driven semantic extractor that yields robust semantic priors, ensuring reliable high-level guidance at deep layers even under severe degradation. Furthermore, a degradation extractor is employed to learn degradation-aware priors, enabling stage-adaptive control of the diffusion process across all timesteps. Extensive experiments on both single- and multi-degradation benchmarks demonstrate that TPGDiff achieves superior performance and generalization across diverse restoration scenarios.
Method Overview
Overall architecture of TPGDiff. The framework explicitly models three types of priors from a low-quality input image and integrates them into a diffusion-based restoration network: (a) a semantic extractor that learns semantic representations via teacher–student distillation, (b) a degradation extractor that captures degradation-related characteristics, and (c) a structural adapter that injects structural priors into the diffusion model through an adapter module.
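To make the hierarchical guidance concrete, the following is a minimal toy sketch (not the actual TPGDiff implementation) of one conditioned denoising step: the degradation prior modulates features at every stage, the structural prior is added to shallow features adapter-style, and the semantic prior conditions deep features via FiLM-style scale/shift. All dimensions, projection matrices, and the pooling/upsampling stand-ins for the UNet encoder/decoder are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 8, 16, 16          # toy feature channels / spatial size (assumed)
D_SEM, D_DEG = 4, 4          # semantic / degradation prior dims (assumed)

# random projections standing in for trained conditioning layers
W_sem = rng.normal(0, 0.1, (D_SEM, 2 * C))   # semantic -> scale+shift
W_deg = rng.normal(0, 0.1, (D_DEG, 2 * C))   # degradation -> scale+shift

def film(feat, prior, W):
    """FiLM-style conditioning: project a prior vector to per-channel scale/shift."""
    scale, shift = np.split(prior @ W, 2)
    return feat * (1.0 + scale)[:, None, None] + shift[:, None, None]

def denoise_step(x_t, structural, semantic, degradation):
    """One toy reverse-diffusion step with hierarchical prior injection."""
    # degradation prior conditions the whole trajectory (every timestep/stage)
    h = film(x_t, degradation, W_deg)
    # shallow stage: inject structural prior additively (adapter-style)
    shallow = h + structural
    # 2x average pooling as a stand-in for the UNet encoder path
    deep = shallow.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))
    # deep stage: semantic prior guides high-level content
    deep = film(deep, semantic, W_sem)
    # nearest-neighbor upsampling as a stand-in for the decoder, plus skip
    up = deep.repeat(2, axis=1).repeat(2, axis=2)
    return shallow + up

x_t = rng.normal(size=(C, H, W))           # noisy latent at timestep t
structural = rng.normal(size=(C, H, W))    # edge/gradient-like structural cues
semantic = rng.normal(size=D_SEM)          # distilled semantic embedding
degradation = rng.normal(size=D_DEG)       # degradation-aware embedding

out = denoise_step(x_t, structural, semantic, degradation)
print(out.shape)
```

The key design point the sketch mirrors is placement: structural cues touch only the high-resolution features, semantic guidance only the low-resolution ones, so spatial detail and content generation do not interfere.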
Quantitative Comparisons
Quantitative comparison between our method and other state-of-the-art approaches on nine different degradation-specific datasets. ↑ indicates higher is better, and ↓ indicates lower is better. Best and second-best results are highlighted in red and blue, respectively.
Quantitative comparison between our method and other state-of-the-art approaches on five restoration tasks. ↑ indicates higher is better. Best and second-best results are highlighted in red and blue, respectively.
Comparison under the unknown-task setting (under-display camera image restoration) on the TOLED and POLED datasets. ↑ indicates higher is better, and ↓ indicates lower is better. Best and second-best results are highlighted in red and blue, respectively.
More Qualitative Results
Visual comparison results with other all-in-one image restoration methods on image denoising, low-light enhancement, image deraining, and image deblurring tasks. Zoom in for a better view.
Visualization results of different methods for the image deraining task on the Rain100L dataset. Zoom in for a better view.
Visualization results of different methods for the image denoising task on the CBSD68 dataset. Zoom in for a better view.
Visualization results of different methods for the low-light image enhancement task on the LOL-v1 dataset. Zoom in for a better view.
Visualization results of different methods for the image deblurring task on the GoPro dataset. Zoom in for a better view.
Visualization results of different methods for the image dehazing task on the SOTS dataset. Zoom in for a better view.
BibTeX
@article{TPGDiff2026,
title = {TPGDiff: Hierarchical Triple-Prior Guided Diffusion for Image Restoration},
author = {Yanjie Tu and Qingsen Yan and Axi Niu and Jiacong Tang},
journal = {Under Review},
year = {2026},
}