A Bayesian Framework for Fine-Tuning Pretrained Diffusion Models
Prof. Jian Huang
Chair Professor of Data Science and Analytics
Department of Applied Mathematics
The Hong Kong Polytechnic University
Diffusion-based generative models have achieved remarkable success in learning complex probability measures for various types of data, including images, video, audio, and biomedical data. Researchers have shown that large-scale pre-trained models can be fine-tuned with a significantly reduced amount of data, enabling them to generate samples that align with the support of the fine-tuning dataset while retaining comparable sample quality. The combination of learnable modules and large models has demonstrated impressive generative capability. It is therefore useful to understand how fine-tuning transitions a model from “a large probability space” to “a small probability space.” In this work, we formulate a Bayesian framework for fine-tuning large diffusion models. We clarify the meaning of transitioning from a “large probability space” to a “small probability space” and study the task of fine-tuning pre-trained models with learnable modules from a Bayesian perspective.
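One way to read the transition from a “large” to a “small” probability space is through posterior inference over the module parameters; the notation below is an illustrative sketch, not the speaker's own formulation. Let $p_{\theta}$ denote the pre-trained diffusion model with frozen weights $\theta$, and let $\mathcal{D} = \{x_i\}_{i=1}^{n}$ be a small fine-tuning dataset drawn from a narrower target distribution. Placing a prior $\pi(\phi)$ on the learnable module parameters $\phi$, fine-tuning can be viewed as Bayesian updating:

```latex
\pi(\phi \mid \mathcal{D}) \;\propto\; \pi(\phi) \prod_{i=1}^{n} p_{\theta,\phi}(x_i),
```

where $p_{\theta,\phi}$ is the pre-trained model augmented by the module $\phi$. Under this reading, the fine-tuned generative distribution $\int p_{\theta,\phi}\, \pi(\phi \mid \mathcal{D})\, d\phi$ concentrates on the support of the small dataset while inheriting structure from the broad pre-trained model.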