This document discusses transfer learning using Transformers (BERT) for Thai. It begins by outlining the topics to be covered: an overview of deep learning for text processing, the BERT model architecture, pre-training, fine-tuning, state-of-the-art results, and alternatives to BERT. It then explains why transfer learning with Transformers is compelling, pointing to strong performance on Thai tasks such as question answering and intent classification. The document goes into the details of BERT's pre-training, which combines masked language modeling (predicting randomly masked words from their surrounding context) with next sentence prediction (judging whether one sentence follows another). After pre-training, BERT has learned general-purpose language representations that can then be fine-tuned for downstream tasks.
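As a rough illustration of the two stages described above, the sketch below assumes the HuggingFace transformers library and the multilingual checkpoint bert-base-multilingual-cased, whose vocabulary covers Thai (the source document does not name a specific toolkit or checkpoint). The first part queries the masked-word objective directly; the second loads the same pre-trained weights with a fresh classification head, which is the usual starting point for fine-tuning on a downstream task such as intent classification. The number of intent labels is a hypothetical placeholder.

```python
# Hedged sketch, assuming the HuggingFace `transformers` library and the
# multilingual checkpoint "bert-base-multilingual-cased" (covers Thai).
from transformers import AutoModelForSequenceClassification, pipeline

# 1) Masked-word objective: the pre-trained model predicts the token
#    hidden behind [MASK] from its bidirectional context.
#    Thai example: "Transfer learning with [MASK] gives very good results."
fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")
for pred in fill_mask("การเรียนรู้แบบถ่ายโอนด้วย [MASK] ให้ผลดีมาก"):
    print(pred["token_str"], round(pred["score"], 3))

# 2) Fine-tuning starting point: reuse the same pre-trained weights under a
#    new, randomly initialized classification head. num_labels=7 is a
#    hypothetical count of intent classes for a downstream classifier.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=7
)
```

In a full fine-tuning run, the classifier loaded in step 2 would then be trained end to end on labeled examples for the downstream task, updating both the head and the pre-trained representations.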