Score-Based Generative Models

Generative Modeling by Estimating Gradients of the Data Distribution

Likelihood-based models and implicit generative models both have significant limitations. Likelihood-based models either require strong restrictions on the model architecture to ensure a tractable normalizing constant for likelihood computation, or must rely on surrogate objectives to approximate maximum likelihood training. Implicit generative models, on the other hand, often require adversarial training, which is notoriously unstable and can lead to mode collapse.

There is another way to represent probability distributions that may circumvent several of these limitations. The key idea is to model the gradient of the log probability density function, $\nabla_\mathbf{x} \log p(\mathbf{x})$, a quantity often known as the (Stein) score function.
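To make this concrete, here is a minimal sketch in PyTorch of a score model: a small MLP that maps a point $\mathbf{x} \in \mathbb{R}^d$ to a vector of the same dimension, interpreted as an estimate of $\nabla_\mathbf{x} \log p(\mathbf{x})$. The class name `ScoreNet`, the layer sizes, and the activation are illustrative assumptions, not part of any particular published model.

```python
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Illustrative score model: maps x in R^d to an estimate of the score ∇_x log p(x)."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, dim),  # output has the same dimension as x
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # s_theta(x) ≈ ∇_x log p(x); no normalizing constant is involved.
        return self.net(x)
```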

Unlike a likelihood, the score function is independent of the normalizing constant: the normalizing constant does not depend on $\mathbf{x}$, so it vanishes when we take the gradient of the log density. Just as likelihood-based models are trained by minimizing a divergence (the KL divergence, via maximum likelihood) between the model and the data distribution, we can train score-based models by minimizing the Fisher divergence between the two, i.e. the expected squared distance between the model's score $\mathbf{s}_\theta(\mathbf{x})$ and the true data score:

$$\mathbb{E}_{p(\mathbf{x})}\big[\|\nabla_\mathbf{x} \log p(\mathbf{x}) - \mathbf{s}_\theta(\mathbf{x})\|_2^2\big].$$
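As a rough illustration of this objective, the sketch below (reusing the assumed `ScoreNet` from above) minimizes a Monte Carlo estimate of the Fisher divergence on a toy problem where the data distribution is a standard Gaussian, so the true score $\nabla_\mathbf{x} \log p(\mathbf{x}) = -\mathbf{x}$ is available in closed form. For real data the score is unknown, which is what score matching techniques are designed to address.

```python
import torch

dim = 2
model = ScoreNet(dim)  # assumed ScoreNet from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.randn(256, dim)   # samples from the toy "data" distribution N(0, I)
    true_score = -x             # ∇_x log N(x; 0, I) = -x, known only in this toy case
    # Monte Carlo estimate of the Fisher divergence between model and data scores
    loss = ((model(x) - true_score) ** 2).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```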