1. Autoencoders are unsupervised neural networks used for dimensionality reduction and, via their learned representations, clustering. They compress the input into a latent-space representation and then reconstruct the output from that representation (see the sketch after this list).
2. Deep autoencoders stack multiple encoder and decoder layers to learn hierarchical representations of the data. Historically, each layer was pretrained sequentially (greedy layer-wise training), though such networks can also be trained end-to-end.
3. Variational autoencoders (VAEs) use probabilistic encoders and decoders: the encoder outputs the parameters of a latent distribution, typically Gaussian, which is regularized toward a standard-normal prior. Sampling from that prior and decoding generates new samples from the learned data distribution (see the VAE sketch below).
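
A minimal sketch of points 1 and 2, assuming PyTorch; the layer sizes (784-dimensional input, 32-dimensional latent code) and the hypothetical `Autoencoder` class are illustrative choices, not from the text. The encoder compresses the input and the decoder reconstructs it, trained with a reconstruction loss; stacking more `Linear`/`ReLU` layers in each part gives the deep variant.

```python
# Illustrative autoencoder sketch (assumes PyTorch; dimensions are example values).
import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compress the input into a latent-space representation
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstruct the input from the latent representation
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)      # latent code
        return self.decoder(z)   # reconstruction

model = Autoencoder()
x = torch.randn(16, 784)                      # dummy batch standing in for real data
recon = model(x)
loss = nn.functional.mse_loss(recon, x)       # reconstruction (MSE) loss
loss.backward()
```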
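
For point 3, a hedged VAE sketch under the same PyTorch assumption: the encoder predicts a mean and log-variance for the latent Gaussian, a sample is drawn via the reparameterization trick, and the loss combines reconstruction error with a KL term pulling the latent distribution toward the standard-normal prior. Decoding draws from that prior yields new samples.

```python
# Illustrative VAE sketch (assumes PyTorch; dimensions and names are example values).
import torch
from torch import nn

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)       # mean of q(z|x)
        self.to_logvar = nn.Linear(128, latent_dim)   # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard Gaussian prior
    recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

model = VAE()
x = torch.randn(16, 784)
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
loss.backward()

# Generate new samples by decoding latent vectors drawn from the prior N(0, I)
samples = model.decoder(torch.randn(8, 16))
```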