Adversarial Diffusion Distillation
Length: 24 minutes
Released: Dec 9, 2023
Format: Podcast episode
Description
We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1-4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal, in combination with an adversarial loss that ensures high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs, Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models. Code and weights are available at https://github.com/Stability-AI/generative-models and https://huggingface.co/stabilityai/.
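To make the training objective concrete, here is a minimal PyTorch-style sketch of how the two ADD losses combine: the student denoises a heavily noised real image in one forward pass, an adversarial term rewards samples the discriminator rates as real, and a distillation term pulls the student output toward the frozen teacher's denoising of the re-noised sample. The helper `add_noise`, the four student timesteps, the MSE distance, and the weighting `lam` are illustrative assumptions, not the released Stability AI implementation; discriminator updates are omitted.

```python
# Sketch of the ADD student objective (adversarial + score-distillation terms).
import torch
import torch.nn.functional as F

def add_noise(x, noise, t, T=1000):
    """Simplified DDPM-style forward diffusion with a linear alpha-bar schedule."""
    alpha_bar = (1.0 - t.float() / T).view(-1, 1, 1, 1)
    return alpha_bar.sqrt() * x + (1.0 - alpha_bar).sqrt() * noise

def add_student_loss(student, teacher, discriminator, x_real,
                     student_steps=(999, 749, 499, 249), lam=2.5):
    """One training step's loss for the ADD student (discriminator update not shown)."""
    b = x_real.size(0)

    # 1) Student: denoise a strongly noised real image in a single forward pass.
    s = torch.tensor(student_steps)[torch.randint(len(student_steps), (b,))]
    x_s = add_noise(x_real, torch.randn_like(x_real), s)
    x_student = student(x_s, s)                     # one-step estimate of the clean image

    # 2) Adversarial term: the discriminator should score student samples as real.
    loss_adv = -discriminator(x_student).mean()

    # 3) Distillation term: re-noise the student output and match the frozen
    #    teacher diffusion model's denoised prediction.
    t = torch.randint(0, 1000, (b,))
    x_t = add_noise(x_student, torch.randn_like(x_student), t)
    with torch.no_grad():
        x_teacher = teacher(x_t, t)                 # teacher's estimate of the clean image
    loss_distill = F.mse_loss(x_student, x_teacher)

    # Total student loss: adversarial fidelity plus weighted teacher distillation.
    return loss_adv + lam * loss_distill
```

In practice the discriminator is trained in alternation with the student, and the distillation weight balances sample fidelity (adversarial term) against faithfulness to the teacher's distribution (distillation term); the weighting shown here is only a placeholder.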
2023: Axel Sauer, Dominik Lorenz, A. Blattmann, Robin Rombach
https://arxiv.org/pdf/2311.17042v1.pdf