Adversarial training can serve as a defense against poisoning attacks by making neural networks robust to adversarial perturbations. The document evaluates this defense by applying adversarial training with the PGD attack to poisoned datasets. The results show that adversarially trained models achieve higher test accuracy on poisoned images than normally trained models, which reduces the effectiveness of the poisoning. For clean-label attacks, the defense works because the model becomes robust to the adversarial perturbations that were added during poisoning. For BadNets attacks on MNIST, adversarial training removes the effect of the backdoor by preventing the poisoned and non-poisoned data from clustering separately. A sketch of the training loop is given below.
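
The following is a minimal sketch of PGD-based adversarial training in PyTorch, under the assumption of a generic image classifier `model` and a `DataLoader` over the (possibly poisoned) training set; the epsilon, step size, and step count are illustrative defaults, not the values used in the document's experiments.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Craft PGD adversarial examples inside an L-infinity ball of radius eps."""
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a signed-gradient ascent step, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_train_epoch(model, train_loader, optimizer, device="cpu"):
    """One epoch of adversarial training on the (potentially poisoned) training set."""
    model.train()
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)          # inner maximization: worst-case perturbation
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)  # outer minimization: fit on perturbed images
        loss.backward()
        optimizer.step()
```

Because the model is only ever fit on worst-case perturbed versions of the training images, small adversarial perturbations (such as those used in clean-label poisoning) lose their influence on the learned decision boundary.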