Generative AI models such as GANs and VAEs can produce realistic and diverse synthetic data for applications ranging from image and speech synthesis to drug discovery and language modeling. Training these models is often difficult, however, because of instability and mode collapse, problems that are especially common with adversarial objectives. In this workshop, we will explore how Stable Diffusion, a latent diffusion model whose denoising-based training and sampling are closely related to score matching and Langevin dynamics, avoids these failure modes and improves the stability of generative modeling. We will use a pre-configured machine learning development environment to run hands-on experiments and train Stable Diffusion models on different datasets. By the end of the session, attendees will have a better understanding of generative AI and Stable Diffusion, and of how to build and deploy stable generative models for real-world use cases.
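As a taste of the hands-on portion, here is a minimal sketch of sampling from a pretrained Stable Diffusion checkpoint with the Hugging Face diffusers library; the model ID, prompt, and GPU assumption are illustrative choices, not part of the workshop materials.

```python
# Minimal Stable Diffusion inference sketch using Hugging Face diffusers.
# The model ID and prompt below are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion checkpoint (assumed model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# Generate an image by iteratively denoising from Gaussian noise,
# guided by the text prompt.
image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```

In the workshop itself, the same pipeline abstraction can be pointed at different checkpoints and datasets, which is what makes a pre-configured environment convenient for experimentation.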