The document presents a contrastive learning framework for conditional text generation that mitigates the exposure bias problem in sequence-to-sequence models. By applying adversarial perturbations to construct hard negative and hard positive examples, the method improves over existing baselines on tasks including machine translation, question generation, and summarization. Experimental results indicate that the proposed approach outperforms a standard T5 baseline across key metrics; future work will focus on improving sample efficiency and the quality of the generated examples.
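To make the core idea concrete, the following is a minimal illustrative sketch, not the paper's actual method: it pairs an anchor representation with a slightly perturbed "positive" (semantics assumed preserved) and strongly perturbed or unrelated "negatives" (semantics assumed changed), then scores them with an InfoNCE-style contrastive loss. All function names and the use of random rather than gradient-based perturbations are assumptions for illustration; the actual framework derives perturbations adversarially from model gradients.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Normalize so that dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    # InfoNCE-style loss for one anchor: softmax cross-entropy where the
    # positive pair should win over all negative pairs.
    a = l2_normalize(anchor)
    p = l2_normalize(positive)
    n = l2_normalize(negatives)
    pos_sim = np.dot(a, p) / temperature          # similarity to positive
    neg_sim = (n @ a) / temperature               # similarities to negatives
    logits = np.concatenate([[pos_sim], neg_sim]) # positive at index 0
    logits -= logits.max()                        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum())
    return -log_prob[0]

rng = np.random.default_rng(0)
z = rng.normal(size=8)              # anchor: a target-sequence representation
small = 0.05 * rng.normal(size=8)   # small perturbation -> treated as positive
large = 5.0 * rng.normal(size=8)    # large perturbation -> treated as negative
pos = z + small
negs = np.stack([z + large, rng.normal(size=8)])  # perturbed + unrelated negative
loss = info_nce_loss(z, pos, negs)
```

Minimizing this loss pulls the anchor toward its perturbed-but-faithful positive and pushes it away from the meaning-changing negatives, which is the intuition behind using adversarial perturbations to generate hard contrastive pairs.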