Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/32372

| Title: | Information Theoretic Learning for Diffusion Models with Warm Start |
| Authors: | Shen, Y.; Gan, L.; Ling, C. |
| Keywords: | generative models;additive noise;relative Fisher information;density estimation;likelihood;diffusion models;information theory (cs.IT);machine learning (cs.LG) |
| Issue Date: | 23-Oct-2025 |
| Publisher: | OpenReview.net |
| Citation: | Shen, Y., Gan, L. and Ling, C. (2025) 'Information Theoretic Learning for Diffusion Models with Warm Start', NeurIPS 2025, pp. 1-52. Available at: https://openreview.net/forum?id=3IbKbmNci3 (Accessed: 17 November 2025). |
| Abstract: | Generative models that maximize model likelihood have gained traction in many practical settings. Among them, perturbation-based approaches underpin many state-of-the-art likelihood estimation models, yet they often suffer from slow convergence and limited theoretical understanding. In this paper, we derive a tighter likelihood bound for noise-driven models to improve both the accuracy and efficiency of maximum likelihood learning. Our key insight extends the classical Kullback–Leibler (KL) divergence–Fisher information relationship to arbitrary noise perturbations (the classical Gaussian case is sketched below the metadata table), going beyond the Gaussian assumption and enabling structured noise distributions. This formulation allows flexible use of randomized noise distributions that naturally account for sensor artifacts, quantization effects, and data-distribution smoothing, while remaining compatible with standard diffusion training. Treating the diffusion process as a Gaussian channel, we further derive an expression for the mismatched entropy between the data and model distributions, showing that the proposed objective upper-bounds the negative log-likelihood (NLL). In experiments, our models achieve competitive NLL on CIFAR-10 and state-of-the-art results on ImageNet across multiple resolutions, all without data augmentation, and the framework extends naturally to discrete data. |
| Description: | NeurIPS 2025 poster. Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs). A version of the conference paper is available at arXiv:2510.20903v1 [cs.IT], https://arxiv.org/abs/2510.20903. |
| URI: | https://bura.brunel.ac.uk/handle/2438/32372 |
| Other Identifiers: | ORCiD (Lu Gan): https://orcid.org/0000-0003-1056-7660; Submission Number: 24082; arXiv:2510.20903v1 [cs.IT] |
| Appears in Collections: | Dept of Electronic and Electrical Engineering Research Papers |
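
For orientation, the classical Gaussian-perturbation relationship the abstract refers to (which the paper generalizes to arbitrary noise; the generalized result itself is not reproduced here) links the KL divergence between Gaussian-smoothed data and model densities to their relative Fisher information:

$$\frac{\mathrm{d}}{\mathrm{d}t}\, D\bigl(p_t \,\|\, q_t\bigr) \;=\; -\frac{1}{2}\, J\bigl(p_t \,\|\, q_t\bigr), \qquad J(p \,\|\, q) \;=\; \int p(x)\,\Bigl\|\nabla_x \log \frac{p(x)}{q(x)}\Bigr\|^2 \mathrm{d}x,$$

where $p_t$ and $q_t$ are the data and model densities convolved with $\mathcal{N}(0, tI)$.

A minimal one-dimensional numerical check of this identity follows. The equal-variance Gaussian setup and all variable names are assumptions made for this sketch, not taken from the paper:

```python
# Check d/dt D(p_t || q_t) = -(1/2) J(p_t || q_t) in 1-D.
# Assumed setup for this sketch: p = N(0, 1), q = N(mu, 1).
# Smoothing both with N(0, t) gives p_t = N(0, 1 + t), q_t = N(mu, 1 + t).
mu, t, eps = 1.5, 0.7, 1e-5

def kl(s):
    # KL divergence between equal-variance Gaussians: D(p_s || q_s) = mu^2 / (2 (1 + s)).
    return mu**2 / (2.0 * (1.0 + s))

# Relative Fisher information J(p_t || q_t) = E_{p_t}[(d/dx log(p_t/q_t))^2].
# Here d/dx log(p_t(x)/q_t(x)) = -mu / (1 + t) for every x, so J = mu^2 / (1 + t)^2.
J = mu**2 / (1.0 + t) ** 2

# Central-difference derivative of the KL divergence along the smoothing flow.
dD_dt = (kl(t + eps) - kl(t - eps)) / (2.0 * eps)

print(f"dD/dt    = {dD_dt:.8f}")     # ≈ -0.38927336
print(f"-(1/2) J = {-0.5 * J:.8f}")  # agrees up to finite-difference error
```

Because the left-hand side is the derivative of a divergence and the right-hand side an expectation of a squared score difference, this identity is what lets perturbation-based likelihood bounds be trained with score-matching-style objectives; per the abstract, the paper's contribution is extending the relationship beyond the Gaussian case.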
Files in This Item:
| File | Description | Size | Format | Access |
|---|---|---|---|---|
| FullText.pdf | Copyright © 2025 The Author(s). This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/). | 2.95 MB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License