Large Language Models (LLMs) are deep learning models designed to process and generate natural language at scale. Built on neural network architectures, most notably the transformer, LLMs have revolutionized the field of natural language processing. In this presentation, we will explore what LLMs are, why they matter, and the main transformer-based families: autoregressive models (e.g., GPT), autoencoding models (e.g., BERT), and sequence-to-sequence models that combine both (e.g., T5). Join us as we explore the potential of LLMs to shape the future of natural language processing.
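To make the three families concrete, here is a toy sketch (plain Python, no real models) of the kind of training example each one learns from; the sentence, the `[MASK]` token, and the helper names are illustrative assumptions, not any library's API:

```python
# Toy illustration of the three transformer objective formats.
# Tokens are just words here; real models use subword tokenizers.

sentence = "the cat sat on the mat".split()

# Autoregressive (GPT-style): predict each token from its left context only.
autoregressive_pairs = [
    (sentence[:i], sentence[i]) for i in range(1, len(sentence))
]
# e.g. (["the"], "cat"), (["the", "cat"], "sat"), ...

# Autoencoding (BERT-style): corrupt the input by masking a token,
# then predict it using context from BOTH sides.
def mask_at(tokens, i):
    corrupted = tokens.copy()
    corrupted[i] = "[MASK]"
    return corrupted, tokens[i]

masked_input, target = mask_at(sentence, 2)  # masks "sat"

# Sequence-to-sequence (T5-style): map a full input sequence to a full
# output sequence, combining an encoder (bidirectional) with a decoder
# (autoregressive).
seq2seq_example = ("translate English to German: the cat", "die Katze")
```

The key design difference is visible in the data shapes: GPT-style pairs only ever see the left context, BERT-style inputs see both sides of the gap, and T5-style examples pair two complete sequences.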