This paper investigates the interpretability challenges posed by generative AI models, focusing on transformer architectures such as GPT. It introduces a sensitivity analysis method that combines attention weights with Kullback-Leibler divergence to rank the importance of words in a corpus, improving the understanding of model predictions. The study aims to bridge the gap between transformer models and explainable AI and to promote the responsible use of these technologies in real-world applications.
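To make the idea concrete, the sketch below shows one plausible reading of such an attention-and-KL-based ranking: ablate one word at a time, measure how much the model's output distribution shifts (via KL divergence), and scale that shift by the word's attention weight. The helper callables `predict_proba` and `attention_weight` are hypothetical placeholders, not part of the paper's stated method.

```python
# A minimal sketch of the word-importance ranking described above.
# Assumptions (not specified in the abstract): `predict_proba` is a
# hypothetical wrapper returning the model's output probability
# distribution for a token sequence, and `attention_weight` is a
# hypothetical per-token attention score extracted from the model.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def rank_word_importance(tokens, predict_proba, attention_weight):
    """Rank tokens by how much removing each one shifts the output
    distribution, scaled by that token's attention weight."""
    baseline = predict_proba(tokens)
    scores = {}
    for i, tok in enumerate(tokens):
        ablated = tokens[:i] + tokens[i + 1:]   # drop the i-th token
        shifted = predict_proba(ablated)
        scores[tok] = attention_weight(tokens, i) * kl_divergence(baseline, shifted)
    # Higher score: removing the token perturbs the prediction more.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```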