HAODiff: Human-Aware One-Step Diffusion
via Dual-Prompt Guidance

Jue Gong, Tingyu Yang, Jingkai Wang, Zheng Chen, Xing Liu, Hong Gu, Yulun Zhang, Xiaokang Yang

“A novel one-step diffusion model for human body restoration, efficiently handling human motion blur and generic noise in human images.”, 2025

🔥🔥🔥 News

  • 2025-05-27: This repo is released.

Abstract: Human-centered images often suffer from severe generic degradation during transmission and are prone to human motion blur (HMB), making restoration challenging. Existing research pays insufficient attention to these issues, even though the two problems often coexist in practice. To address this, we design a degradation pipeline that simulates the coexistence of HMB and generic noise, generating synthetic degraded data to train our proposed HAODiff, a human-aware one-step diffusion model. Specifically, we propose triple-branch dual-prompt guidance (DPG), which leverages high-quality images, residual noise (LQ minus HQ), and HMB segmentation masks as training targets. It produces a positive-negative prompt pair for classifier-free guidance (CFG) in a single diffusion step. The resulting adaptive dual prompts let HAODiff exploit CFG more effectively, boosting robustness against diverse degradations. For fair evaluation, we introduce MPII-Test, a benchmark rich in cases of combined noise and HMB. Extensive experiments show that HAODiff surpasses existing state-of-the-art (SOTA) methods in both quantitative metrics and visual quality on synthetic and real-world datasets, including our MPII-Test.

HAODiff_pipeline.png Figure 2: Degradation pipeline overview.
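
The full degradation pipeline is specified in the paper; the sketch below only illustrates its core idea: human motion blur applied inside a person mask, followed by generic noise over the whole frame. All function names and parameter values here (linear_motion_kernel, degrade, blur_len, noise_sigma) are hypothetical placeholders for illustration, not the repo's API; the actual pipeline uses a richer set of degradations.

    import numpy as np
    from scipy.ndimage import convolve

    def linear_motion_kernel(length: int, angle_deg: float) -> np.ndarray:
        """Normalized linear motion-blur kernel of a given length and angle."""
        k = np.zeros((length, length), dtype=np.float32)
        c = length // 2
        theta = np.deg2rad(angle_deg)
        for t in np.linspace(-c, c, length * 4):
            x = int(round(c + t * np.cos(theta)))
            y = int(round(c + t * np.sin(theta)))
            if 0 <= x < length and 0 <= y < length:
                k[y, x] = 1.0
        return k / k.sum()

    def degrade(hq: np.ndarray, human_mask: np.ndarray, blur_len: int = 15,
                angle_deg: float = 30.0, noise_sigma: float = 0.04) -> np.ndarray:
        """Toy LQ synthesis: HMB confined to the person, then global noise."""
        kernel = linear_motion_kernel(blur_len, angle_deg)
        # Blur each channel, then composite so only the masked human region is blurred.
        blurred = np.stack([convolve(hq[..., ch], kernel, mode='reflect')
                            for ch in range(hq.shape[-1])], axis=-1)
        m = human_mask[..., None].astype(np.float32)
        lq = m * blurred + (1.0 - m) * hq
        # Generic degradation: additive Gaussian noise over the whole frame.
        lq = lq + np.random.normal(0.0, noise_sigma, lq.shape).astype(np.float32)
        return np.clip(lq, 0.0, 1.0)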

HAODiff_model.png Figure 3: Model structure of our HAODiff.
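
At inference, the DPG branches yield a positive-negative prompt-embedding pair, which is combined via standard classifier-free guidance within the single diffusion step. Below is a minimal sketch of that CFG step, assuming a diffusers-style epsilon-prediction UNet; unet, pos_emb, neg_emb, and alpha_bar_t are hypothetical stand-ins for the paper's actual components, not HAODiff's real interface.

    import torch

    @torch.no_grad()
    def one_step_cfg(unet, z_lq, t, pos_emb, neg_emb, alpha_bar_t,
                     guidance_scale=3.5):
        """One denoising step guided by a (positive, negative) prompt pair."""
        ts = torch.full((z_lq.shape[0],), t, device=z_lq.device, dtype=torch.long)
        # Two conditional passes: one per prompt of the dual-prompt pair.
        eps_pos = unet(z_lq, ts, encoder_hidden_states=pos_emb).sample
        eps_neg = unet(z_lq, ts, encoder_hidden_states=neg_emb).sample
        # Classifier-free guidance: move away from the negative prompt,
        # toward the positive one.
        eps = eps_neg + guidance_scale * (eps_pos - eps_neg)
        # Epsilon-parameterized one-step x0 estimate at the fixed timestep t.
        return (z_lq - torch.sqrt(1.0 - alpha_bar_t) * eps) / torch.sqrt(alpha_bar_t)

Because both prompts are predicted from the input image itself, the guidance adapts to each image's degradation, whereas a fixed hand-written prompt pair would be the conventional alternative.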


⚒️ TODO

🔗 Contents

🔎 Results

HAODiff achieves state-of-the-art performance on the PERSONA-Val, PERSONA-Test, and MPII-Test datasets. Detailed results can be found in the paper.

Quantitative Comparisons

  • Results in Table 1 on the synthetic PERSONA-Val dataset from the main paper.

    tab_1.png

  • Results in Table 2 on the real-world PERSONA-Test and MPII-Test datasets from the main paper.

    tab_2.png

Visual Comparisons

  • Results in Figure 5 on the synthetic PERSONA-Val dataset from the main paper.

    fig5-main.png

  • Results in Figure 6 on the real-world PERSONA-Test and MPII-Test datasets from the main paper.

    fig6-main.png

More Comparisons

  • Fabric patterns and textures: Figure 4 from the supplemental material.

    fig4-supp.png

  • Synthetic PERSONA-Val dataset: Figures 5 and 6 from the supplemental material.

    fig5-supp.png

    fig6-supp.png

  • Real-world PERSONA-Test dataset: Figures 7 and 8 from the supplemental material.

    fig7-supp.png

    fig8-supp.png

  • Real-world MPII-Test dataset: Figures 9 and 10 from the supplemental material.

    fig9-supp.png

    fig10-supp.png

  • Challenging tasks: Figures 11 and 12 from the supplemental material.

    fig11-supp.png

    fig12-supp.png

📎 Citation

If you find the code helpful in your research or work, please cite the following paper.

    @article{gong2025haodiff,
      title={{HAODiff: Human-Aware One-Step Diffusion via Dual-Prompt Guidance}},
      author={Gong, Jue and Yang, Tingyu and Wang, Jingkai and Chen, Zheng and Liu, Xing and Gu, Hong and Liu, Yutong and Zhang, Yulun and Yang, Xiaokang},
      journal={arXiv preprint arXiv:2505.19742},
      year={2025}
    }

💡 Acknowledgements

[TBD]