ORPO - Monolithic Preference Optimization Without Reference Model
KAIST AI
{jiwoo_hong, noah.lee, thorne}@kaist.ac.kr
Abstract

Recent preference alignment algorithms for language models have demonstrated promising results, yet they typically depend on supervised fine-tuning (SFT) and a reference model. In this paper, we study the role of SFT within preference alignment, emphasizing that a minor penalty for the disfavored generation style is sufficient for preference-aligned SFT, and we build on this observation with ORPO, a monolithic odds ratio preference optimization algorithm that requires neither a reference model nor a separate alignment phase.

[Figure: AlpacaEval 2.0 scores of Llama (7B), Llama (13B), Llama-ORPO (7B), Zephyr-α, Zephyr-β, Mistral-ORPO-α, and Mistral-ORPO-β, grouped by base model (Llama-2, Mistral) and alignment algorithm (RLHF, DPO, ORPO).]
Figure 2: Comparison of model alignment techniques. ORPO aligns the language model without a reference model in
a single-step manner by assigning a weak penalty to the rejected responses and a strong adaptation signal to the
chosen responses with a simple log odds ratio term appended to the negative log-likelihood loss.
tasks highlights the necessity of understanding the alignment procedure and further improving the algorithms in terms of efficiency and performance. However, existing preference alignment methods normally consist of a multi-stage process, as shown in Figure 2, typically requiring a second reference model and a separate warm-up phase with supervised fine-tuning (SFT) (Ziegler et al., 2020; Rafailov et al., 2023; Wu et al., 2023).

In this paper, we study the role and impact of SFT in pairwise preference datasets for model alignment in Section 3 and propose a simple and novel monolithic alignment method, odds ratio preference optimization (ORPO), which efficiently penalizes the model for learning undesired generation styles during SFT in Section 4. In contrast to previous works, our approach requires neither an SFT warm-up stage nor a reference model, enabling resource-efficient development of preference-based aligned models.

We demonstrate the effectiveness of our method on model alignment tasks and popular leaderboards in Sections 6.1 and 6.2 by fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO. We then conduct controlled experiments comparing ORPO against established methods for model alignment, RLHF and DPO, for different datasets and model sizes in Section 6.3. Along with a post-hoc analysis of generation diversity in Section 6.4, we expound on the theoretical, empirical, and computational justification for utilizing the odds ratio in monolithic preference alignment in Section 7.3. We release the training code and the checkpoints for Mistral-ORPO-α (7B) and Mistral-ORPO-β (7B). These models achieve 7.24 and 7.32 in MT-Bench, 11.33% and 12.20% on AlpacaEval 2.0, and 61.63% and 66.19% in IFEval instruction-level loose accuracy, respectively.

2 Related Works

Alignment with Reinforcement Learning  Reinforcement learning with human feedback (RLHF) commonly applies the Bradley-Terry model (Bradley and Terry, 1952) to estimate the probability of a pairwise competition between two independently evaluated instances. An additional reward model is trained to score instances. Reinforcement learning algorithms such as proximal policy optimization (PPO) (Schulman et al., 2017) are employed to train the model to maximize the score of the reward model for the chosen response, resulting in language models that are trained with human preferences (Ziegler et al., 2020; Stiennon et al., 2022). Notably, Ouyang et al. (2022) demonstrated the scalability and versatility of RLHF for instruction-following language models. Extensions such as reinforcement learning from AI feedback (RLAIF) could be a viable alternative to human feedback (Bai et al., 2022b; Lee et al., 2023; Pang et al., 2023). However, RLHF faces challenges of extensive hyperparameter searching due to the instability of PPO (Rafailov et al., 2023; Wu et al., 2023) and the sensitivity of the reward models (Gao et al., 2022; Wang et al., 2024). Therefore, there is a crucial need for stable preference alignment algorithms.

Alignment without Reward Model  Several techniques for preference alignment mitigate the need for reinforcement learning (Rafailov et al., 2023; Song et al., 2023; Azar et al., 2023; Ethayarajh et al., 2024). Rafailov et al. (2023) introduce direct preference optimization (DPO), which combines the reward modeling stage into the preference learning stage.
Azar et al. (2023) prevented potential overfitting problems in DPO through identity preference optimization (IPO). Ethayarajh et al. (2024) and Cai et al. (2023) proposed Kahneman-Tversky Optimization (KTO) and Unified Language Model Alignment (ULMA), which do not require pairwise preference datasets, unlike RLHF and DPO. Song et al. (2023) further suggest incorporating the softmax value of the reference response set into the negative log-likelihood loss to merge supervised fine-tuning and preference alignment.

Alignment with Supervised Fine-tuning  Preference alignment methods in reinforcement learning (RL) often leverage supervised fine-tuning (SFT) to ensure the stable update of the active policy in relation to the old policy (Schulman et al., 2017), as the SFT model is the old policy in the context of RLHF (Ziegler et al., 2020). Furthermore, empirical findings indicate that, even in non-RL alignment methods, the SFT model is crucial for achieving convergence to desired results (Rafailov et al., 2023; Tunstall et al., 2023).

In contrast, there have been approaches to build human-aligned language models by conducting SFT only with filtered datasets (Zhou et al., 2023a; Li et al., 2023a; Haggerty and Chandra, 2024; Zhou et al., 2023b). Zhou et al. (2023a) demonstrated that SFT with a small amount of finely filtered and curated data could be sufficient for building helpful language model assistants. Furthermore, Li et al. (2023a) and Haggerty and Chandra (2024) proposed an iterative process of fine-tuning supervised fine-tuned language models on their own generations after fine-grained selection of aligned generations, and Zhou et al. (2023b) suggested that a curated subset of a preference dataset is sufficient for alignment. While these works highlight the impact and significance of SFT in the context of alignment, the actual role of SFT and the theoretical background for incorporating preference alignment into SFT remain understudied.

3 The Role of Supervised Fine-tuning

We study the behavior of supervised fine-tuning (SFT) as an initial stage of preference alignment methods (Ziegler et al., 2020; Rafailov et al., 2023) through an analysis of the loss function in SFT and an empirical demonstration of the preference comprehension ability of the trained SFT model. SFT plays a significant role in tailoring pre-trained language models to the desired domain (Zhou et al., 2023a; Dong et al., 2024) by increasing the log probabilities of pertinent tokens. Nevertheless, this inadvertently increases the likelihood of generating tokens in undesirable styles, as illustrated in Figure 3. Therefore, it is necessary to develop methods capable of preserving the domain adaptation role of SFT while concurrently discerning and mitigating unwanted generation styles.

Absence of Penalty in Cross-Entropy Loss  The goal of cross-entropy loss in model fine-tuning is to penalize the model if the predicted logits for the reference answers are low, as shown in Equation 2:

    L = −(1/m) Σ_{k=1}^{m} log P(x^(k), y^(k))    (1)

      = −(1/m) Σ_{k=1}^{m} Σ_{i=1}^{|V|} y_i^(k) · log(p_i^(k))    (2)

where y_i is a Boolean value indicating whether the i-th token in the vocabulary set V is a label token, p_i refers to the probability of the i-th token, and m is the length of the sequence. Using cross-entropy alone gives no direct penalty or compensation for the logits of non-answer tokens (Lin et al., 2017), as y_i will be set to 0. While cross-entropy is generally effective for domain adaptation (Mao et al., 2023), there is no mechanism to penalize rejected responses when compensating for the chosen responses. Therefore, the log probabilities of the tokens in the rejected responses increase along with those of the chosen responses, which is not desired from the viewpoint of preference alignment.
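To make the label-masking point concrete, the short PyTorch sketch below (vocabulary size, sequence length, and token ids are invented for illustration) evaluates Equation (2) with one-hot labels and checks that it coincides with the standard cross-entropy; nothing in this objective directly suppresses a rejected response that never appears among the labels.

```python
import torch
import torch.nn.functional as F

# Toy instance of Equation (2): with one-hot labels, every non-label token
# contributes y_i * log(p_i) = 0, so the loss neither rewards nor penalizes
# the probability mass assigned to tokens of an unseen (e.g., rejected) style.
vocab_size, seq_len = 8, 4
logits = torch.randn(seq_len, vocab_size, requires_grad=True)  # model outputs
labels = torch.tensor([1, 3, 0, 5])                            # chosen-response tokens

log_probs = F.log_softmax(logits, dim=-1)
one_hot = F.one_hot(labels, vocab_size).float()

loss = -(one_hot * log_probs).sum(dim=-1).mean()   # Equation (2), mean over tokens
loss.backward()

# Matches the library cross-entropy; non-label logits are touched only through
# the softmax normalizer, never through an explicit sequence-level penalty.
print(loss.item(), torch.allclose(loss, F.cross_entropy(logits, labels)))
```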
Generalization over Both Response Styles  We conduct a pilot study to empirically demonstrate the miscalibration of chosen and rejected responses with supervised fine-tuning alone. We fine-tune OPT-350M (Zhang et al., 2022) on the chosen responses only from the HH-RLHF dataset (Bai et al., 2022b). Throughout the training, we monitor the log probability of the rejected responses for each batch and report this in Figure 3. Both the log probabilities of the chosen and the rejected responses exhibited a simultaneous increase. This can be interpreted from two different perspectives. First, the cross-entropy loss effectively guides the model toward the intended domain (e.g., dialogue). However, the absence of a penalty for unwanted generations results in rejected responses sometimes having even higher log probabilities than the chosen ones.

[Figure 3: Log probabilities of the chosen and rejected responses over the course of supervised fine-tuning.]

    oddsθ(y|x) = Pθ(y|x) / (1 − Pθ(y|x))    (4)

Intuitively, oddsθ(y|x) = k implies that it is k times more likely for the model θ to generate the output y than not generating it.
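As a small illustration of Equation (4), the sketch below computes oddsθ(y|x) from per-token log probabilities, using the length-normalized sequence probability adopted in the appendix derivation; the numbers are made up for the example.

```python
import torch

# Per-token log probabilities log P(y_t | x, y_<t) of a response y
# (illustrative values; a real run would read them from the model).
token_logprobs = torch.tensor([-0.4, -0.7, -0.3, -0.5])

p_seq = token_logprobs.mean().exp()   # length-normalized P_theta(y|x)
odds = p_seq / (1.0 - p_seq)          # odds_theta(y|x), Equation (4)

# odds_theta(y|x) = k: generating y is k times as likely as not generating it.
print(p_seq.item(), odds.item())
```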
Table 1: Instruction-following abilities of each checkpoint measured through AlpacaEval. While clearly showing the improvement in instruction-following abilities after training with ORPO, it is notable that ORPO models exceed the RLHF and DPO models of Llama-2 and Mistral (* indicates results from the official leaderboard).
Mistral (7B) with the single-turn conversation dataset, UltraFeedback, and ORPO with λ of 0.1 outperforms the Zephyr series, which are the Mistral (7B) models fine-tuned with SFT on 20K UltraChat (Ding et al., 2023) and DPO on the full UltraFeedback. As shown in Table 1, Mistral-ORPO-α (7B) achieves 87.92% and 11.33%, which exceeds Zephyr α by 1.98% and Zephyr β by 0.34% on AlpacaEval 2.0. The sample responses and corresponding references from GPT-4 can be found in Appendix I.

Mistral-ORPO-β (7B)  Using the same configuration as Mistral-ORPO-α (7B), we additionally compare fine-tuning Mistral on the cleaned version of UltraFeedback⁸ to demonstrate the effect of data quality (Bartolome et al., 2023). While the actual sizes of the datasets are similar, ORPO gains further advantages from the dataset quality, scoring over 91% and 12% on AlpacaEval, as shown in Table 1. Further instruction-following evaluation of the two Mistral-based models with IFEval (Zhou et al., 2023c) is reported in Appendix D.

⁸ https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned

Figure 4: MT-Bench results of Mistral-ORPO-α (7B) and Mistral-ORPO-β (7B) by category. Further comparison can be found in Appendix G.

6.2 Multi-turn Instruction Following

With our best models, Mistral-ORPO-α (7B) and Mistral-ORPO-β (7B), we also assess multi-turn instruction-following skills on prompts with deterministic answers (e.g., math) through MT-Bench. As shown in Figure 4, the ORPO-Mistral (7B) series achieves results comparable to larger or proprietary models, including Llama-2-Chat (70B) and Claude. Eventually, Mistral-ORPO-α (7B) and Mistral-ORPO-β (7B) scored 7.23 and 7.32 in MT-Bench without being exposed to multi-turn conversation data during training.

6.3 Reward Model Win Rate

We assess the win rate of ORPO over other preference alignment methods, including supervised fine-tuning (SFT), PPO, and DPO, using RM-1.3B to understand the effectiveness and scalability of ORPO in Tables 2 and 3. Additionally, we visually verify that ORPO can effectively enhance the expected reward in Figure 5.
Figure 5: Reward distribution comparison between OPT-125M (left), OPT-350M (middle), and OPT-1.3B (right)
trained with SFT (blue), RLHF (green), DPO (orange), and ORPO (red) on the test set of UltraFeedback using the
RM-1.3B. While the rewards of the trained models are roughly normal and preference optimization algorithms
(RLHF, DPO, and ORPO) tend to move the reward distribution in the positive direction, ORPO is on par with or better than
RLHF and DPO in increasing the expected reward. The same plot for the HH-RLHF dataset is in Appendix F.
[Figure 6: Sampled distribution of log P_R(X2 | X1). Panel axes: log probability vs. training step, for chosen and rejected responses.]
Aaron Gokaslan and Vanya Cohen. 2019. Openwebtext corpus. http://Skylion007.github.io/OpenWebTextCorpus.

Alexey Gorbatovski and Sergey Kovalchuk. 2024. Reinforcement learning for question answering in programming domain using public community scoring as a human feedback.

Hamish Haggerty and Rohitash Chandra. 2024. Self-supervised learning for skin cancer diagnosis with limited training data.

Anqi Mao, Mehryar Mohri, and Yutao Zhong. 2023. Cross-entropy loss functions: Theoretical analysis and applications.

Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback.

Jing-Cheng Pang, Pengyuan Wang, Kaiyuan Li, Xiong-Hui Chen, Jiacheng Xu, Zongzhang Zhang, and Yang Yu. 2023. Language model self-improvement by reinforcement learning contemplation.

Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2023. The refinedweb dataset for falcon llm: Outperforming curated corpora with web data, and web data only.

Reid Pryzant, Dan Iter, Jerry Li, Yin Tat Lee, Chenguang Zhu, and Michael Zeng. 2023. Automatic prompt optimization with “gradient descent” and beam search. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7957–7968, Singapore. Association for Computational Linguistics.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model.

Miguel Moura Ramos, Patrick Fernandes, António Farinhas, and André F. T. Martins. 2023. Aligning neural machine translation models: Human feedback in training and inference.

Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’20, pages 3505–3506, New York, NY, USA. Association for Computing Machinery.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms.

Wei Shen, Rui Zheng, Wenyu Zhan, Jun Zhao, Shihan Dou, Tao Gui, Qi Zhang, and Xuanjing Huang. 2023. Loose lips sink ships: Mitigating length bias in reinforcement learning from human feedback. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2859–2873, Singapore. Association for Computational Linguistics.

Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. 2023. Preference ranking optimization for human alignment.

Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. 2022. Learning to summarize from human feedback.

Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca.

Katherine Tian, Eric Mitchell, Huaxiu Yao, Christopher D. Manning, and Chelsea Finn. 2023. Fine-tuning language models for factuality.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models.

Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, and Thomas Wolf. 2023. Zephyr: Direct distillation of lm alignment.

Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, and Shengyi Huang. 2020. Trl: Transformer reinforcement learning. https://github.com/huggingface/trl.

Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, Songyang Gao, Nuo Xu, Yuhao Zhou, Xiaoran Fan, Zhiheng Xi, Jun Zhao, Xiao Wang, Tao Ji, Hang Yan, Lixing Shen, Zhan Chen, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Zuxuan Wu, and Yu-Gang Jiang. 2024. Secrets of rlhf in large language models part ii: Reward modeling.

Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023. How far can camels go? exploring the state of instruction tuning on open resources.

Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners.

Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural text generation with unlikelihood training. arXiv preprint arXiv:1908.04319.

Tianhao Wu, Banghua Zhu, Ruoyu Zhang, Zhaojin Wen, Kannan Ramchandran, and Jiantao Jiao. 2023. Pairwise proximal policy optimization: Harnessing relative feedback for llm alignment.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pre-trained transformer language models.
Yanli Zhao, Andrew Gu, Rohan Varma, Liang Luo,
Chien-Chin Huang, Min Xu, Less Wright, Hamid
Shojanazeri, Myle Ott, Sam Shleifer, Alban Desmai-
son, Can Balioglu, Pritam Damania, Bernard Nguyen,
Geeta Chauhan, Yuchen Hao, Ajit Mathews, and
Shen Li. 2023. Pytorch fsdp: Experiences on scaling
fully sharded data parallel.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan
Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin,
Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang,
Joseph E. Gonzalez, and Ion Stoica. 2023. Judging
LLM-as-a-Judge with MT-Bench and Chatbot Arena.
ArXiv:2306.05685 [cs].
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao
Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu,
Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis,
Luke Zettlemoyer, and Omer Levy. 2023a. Lima:
Less is more for alignment.
    ∇θ L_OR = ∇θ log σ( log( oddsθ P(y_w|x) / oddsθ P(y_l|x) ) )    (20)

            = ( σ′(log g(x, y_l, y_w)) / σ(log g(x, y_l, y_w)) ) · ∇θ log g(x, y_l, y_w)    (21)

            = ( σ(−log g(x, y_l, y_w)) / g(x, y_l, y_w) ) · ∇θ g(x, y_l, y_w)    (23)

In Equation 25, the remaining derivative can be further simplified by replacing the 1 − Pθ(y|x) terms in oddsθ(y|x), where Pθ(y|x) = (∏_t^N Pθ(y_t|x, y_{<t}))^(1/N), as follows.

    ∇θ log(1 − Pθ(y|x)) = ∇θ(1 − Pθ(y|x)) / (1 − Pθ(y|x))    (26)

                        = −∇θ Pθ(y|x) / (1 − Pθ(y|x))    (27)

                        = −( Pθ(y|x) / (1 − Pθ(y|x)) ) · ∇θ log Pθ(y|x)    (28)

    = (1 + oddsθ P(y_w|x)) ∇θ log Pθ(y_w|x) − (1 + oddsθ P(y_l|x)) ∇θ log Pθ(y_l|x)    (31)
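As a sanity check on the step from Equations (26)–(28) to Equation (31), the short autograd sketch below uses a scalar parameter as a stand-in for Pθ(y|x) and verifies the identity ∇θ log oddsθ(y|x) = (1 + oddsθ(y|x)) ∇θ log Pθ(y|x).

```python
import torch

# Numerical check of the identity behind Equation (31):
# d/dtheta log odds(P) = (1 + odds(P)) * d/dtheta log P, with odds(P) = P / (1 - P).
theta = torch.tensor(0.3, requires_grad=True)
p = torch.sigmoid(theta)              # stand-in for P_theta(y|x)

log_odds = torch.log(p / (1 - p))
(grad_log_odds,) = torch.autograd.grad(log_odds, theta, retain_graph=True)

log_p = torch.log(p)
(grad_log_p,) = torch.autograd.grad(log_p, theta)

odds = (p / (1 - p)).detach()
assert torch.allclose(grad_log_odds, (1 + odds) * grad_log_p)
print(grad_log_odds.item(), ((1 + odds) * grad_log_p).item())
```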
Figure 8: The log probability trace when the model is trained with the probability ratio (left) and the odds ratio
(right) given the same hyperparameters. The probability ratio leads the rejected responses to have relatively lower
log probabilities.
C Experimental Details
Flash-Attention 2 (Dao, 2023) is applied to all the pre-trained models for computational efficiency. In particular, the OPT series and Phi-2 (2.7B) were trained with DeepSpeed ZeRO 2 (Rasley et al., 2020), while Llama-2 (7B) and Mistral (7B) were trained with Fully Sharded Data Parallel (FSDP) (Zhao et al., 2023). The 7B and 2.7B models were trained with four and two NVIDIA A100 GPUs, respectively, and the rest were trained on four NVIDIA A6000 GPUs. For the optimizer, AdamW (Loshchilov and Hutter, 2019) and paged AdamW (Dettmers et al., 2023) were used, and linear warmup with cosine decay was applied to the learning rate. For the input length, every instance was truncated and padded to 1,024 tokens for HH-RLHF and 2,048 tokens for UltraFeedback. To guarantee that the models can sufficiently learn to generate a proper response to the conversation history or a complex instruction, we filtered out instances whose prompts exceeded 1,024 tokens.
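The truncation and prompt-length filtering described above could be sketched as follows; the tokenizer checkpoint and the record fields ("prompt", "chosen") are assumptions for illustration rather than the exact pipeline used here.

```python
from transformers import AutoTokenizer

# Drop instances whose prompt exceeds 1,024 tokens, then truncate/pad the full
# sequence to the dataset's maximum length (1,024 for HH-RLHF, 2,048 for
# UltraFeedback). Model name and field names are assumptions.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.pad_token = tokenizer.eos_token   # Mistral has no pad token by default
MAX_PROMPT_LEN, MAX_SEQ_LEN = 1024, 2048    # UltraFeedback setting

def keep_instance(example: dict) -> bool:
    """Filter out instances with prompts longer than 1,024 tokens."""
    return len(tokenizer(example["prompt"])["input_ids"]) <= MAX_PROMPT_LEN

def encode(example: dict) -> dict:
    """Truncate and pad the concatenated prompt + chosen response."""
    return tokenizer(
        example["prompt"] + example["chosen"],
        max_length=MAX_SEQ_LEN,
        truncation=True,
        padding="max_length",
    )
```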
Supervised Fine-tuning (SFT)  For SFT, the maximum learning rate was set to 1e-5. Following Ziegler et al. (2020) and Rafailov et al. (2023), the number of training epochs was set to 1.
Reinforcement Learning with Human Feedback (RLHF)  For RLHF, the hyperparameters for UltraFeedback were set as in Table 5. For the HH-RLHF dataset, output_min_length and output_max_length were set to 64 and 256, respectively.
Direct Preference Optimization (DPO)  For DPO, β was set to 0.1 for every case. The learning rate was set to 5e-6, and the model was trained for three epochs, selecting the best model by evaluation loss after each epoch. However, in most cases, the first or the second checkpoint was selected as the best model, as the evaluation loss increased from the third epoch.
Hyperparameter Setting
ppo_epoch 4
init_kl_coef 0.1
horizon 2,000
batch_size 64
mini_batch_size 8
gradient_accumulation_steps 1
output_min_length 128
output_max_length 512
optimizer AdamW
learning_rate 1e-05
gamma 0.99
Odds Ratio Preference Optimization (ORPO)  As ORPO does not require any special hyperparameter, the learning rate and the number of epochs were the only hyperparameters to set. The maximum learning rate was set to 8e-6, and the models were trained for 10 epochs. The best model was selected based on the lowest evaluation loss for the OPT series, Phi-2 (2.7B), and Llama-2 (7B).
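For reference, a hedged sketch of this schedule with Hugging Face TrainingArguments is given below; the maximum learning rate (8e-6), 10 epochs, linear warmup with cosine decay, AdamW-style optimizer, and checkpoint selection by evaluation loss come from this appendix, while the batch size, warmup ratio, and precision are assumptions.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="orpo-checkpoints",
    learning_rate=8e-6,               # maximum learning rate for ORPO
    num_train_epochs=10,
    lr_scheduler_type="cosine",       # cosine decay ...
    warmup_ratio=0.03,                # ... with linear warmup (ratio assumed)
    optim="paged_adamw_32bit",        # or "adamw_torch", per the optimizer note
    per_device_train_batch_size=8,    # assumed
    evaluation_strategy="epoch",      # select best checkpoint by eval loss
    save_strategy="epoch",
    load_best_model_at_end=True,
    bf16=True,                        # assumed precision
)
```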
Figure 9: The log probability trend by λ. With larger λ (e.g., λ = 1.0), L_OR becomes more influential in fine-tuning the models with ORPO.

In Figure 9, we find that a larger λ generally leads to stronger discrimination of the rejected responses. With λ = 0.1, the average log probabilities of the chosen and the rejected responses stay close as the fine-tuning proceeds. Also, unlike in the other settings, the log probabilities of the rejected responses do not decrease; rather, the log probabilities of the chosen responses increase to minimize the L_OR term. Moreover, with λ = 0.5, there is a similar trend of further increasing the log probabilities of the chosen responses, but the log probabilities of the rejected responses diminish simultaneously. Lastly, with λ = 1.0, the log probabilities of the chosen responses diminish along with those of the rejected responses while the margin between them enlarges. However, this does not mean that a smaller λ is always better; the right value will depend on the specific needs and the model.
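The role of λ described above corresponds to a weighted objective of the form L_SFT + λ · L_OR. The following sketch (function and argument names are illustrative) shows one way to combine the odds ratio term with the SFT loss from length-normalized log probabilities.

```python
import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, nll_chosen, lam=0.1):
    """Sketch of L_SFT + lambda * L_OR.

    chosen_logps / rejected_logps: mean per-token log probabilities of the
    chosen / rejected responses under the model (length-normalized).
    nll_chosen: the usual SFT negative log-likelihood on the chosen response.
    """
    # log odds(y|x) = log P - log(1 - P), computed stably from log P
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))

    # penalize the model when the chosen response is not favored in odds terms
    loss_or = -F.logsigmoid(log_odds_chosen - log_odds_rejected)
    return nll_chosen + lam * loss_or.mean()
```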
E.2 MT-Bench

[Figure: Category-wise MT-Bench scores (0–10) of Mistral (ORPO) trained with λ = 0.1 and λ = 1.0, compared with GPT-3.5-turbo and GPT-4, across the Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities categories.]
Along with Figure 5, which depicts the reward distribution of OPT-125M, OPT-350M, and OPT-1.3B on the UltraFeedback dataset, we report the reward distribution of each pre-trained checkpoint trained on the HH-RLHF dataset in Figure 11. As discussed in Section 6.3, ORPO consistently pushes the reward distribution of SFT to the right side.
Figure 11: Reward distribution comparison between OPT-125M (left), OPT-350M (middle), and OPT-1.3B (right)
trained with SFT (blue), RLHF (green), DPO (orange), and ORPO (red) on the test set of HH-RLHF using the 1.3B
reward model. The general tendency follows that of Figure 5.
For the MT-Bench result in Section 6.2, we report the category-wise scores of Mistral-ORPO-α (7B) and
Mistral-ORPO-β (7B) in Figure 12. While surpassing Llama-2 Chat (13B) and Llama-2 Chat (70B) in
most cases, Mistral-ORPO-β (7B) is comparable to GPT-3.5-turbo in the categories that require descriptive
generations. However, it lacks coding and math skills, which we speculate is due to the lack of training
data, as we used 61k instances in UltraFeedback.
Figure 12: Category-wise MT-Bench results (0–10) of Mistral-ORPO-α (7B, overall 7.23) and Mistral-ORPO-β (7B, overall 7.32), compared with Llama-2-Chat (70B), GPT-3.5-turbo, Claude V1, and GPT-4.
H Special Instructions for Verbosity Assessment
For the succinctness and verboseness instructions, we generated five different instructions for each style with ChatGPT⁹. From the instructions in Table 7, we randomly sampled one prompt for every batch to prevent potential word bias.
#   Succinctness                                    | Verboseness
1   Please generate a short and concise response.   | Please generate an elaborative and chatty response.
2   Provide a brief and concise answer.             | Provide a detailed answer.
3   Keep your reply short and to the point.         | Keep your reply elaborative and intricate.
4   Keep your answer brief for clarity.             | Keep your answer detailed.
5   Generate a brief and to-the-point answer.       | Generate a chatty and step-wise answer.
Table 7: Instructions prepended to the queries from AlpacaEval. Each instruction set asks the model to generate
either shorter or longer responses given the query, respectively.
9
https://chat.openai.com/
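The per-batch sampling described above is a one-liner; a minimal sketch (the batch and query structures are assumptions) is:

```python
import random

# One instruction is drawn per batch from the corresponding column of Table 7
# and prepended to every query in that batch (list abbreviated here).
SUCCINCT = [
    "Please generate a short and concise response.",
    "Provide a brief and concise answer.",
]

def prepend_instruction(batch_queries, instructions=SUCCINCT):
    instruction = random.choice(instructions)   # resampled for every batch
    return [f"{instruction} {query}" for query in batch_queries]
```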
I Generation Samples from Mistral-ORPO-α (7B)
In this section, we present two sample generations from Mistral-ORPO-α (7B) on AlpacaEval, including one that was preferred over the response of GPT-4 and one for which GPT-4 was preferred. For readability, instances with short responses are selected for both cases. Additionally, the responses from Mistral-ORPO-α (7B) were sampled with a temperature of 0.7. The checkpoints for Mistral-ORPO-α (7B) and Mistral-ORPO-β (7B) can be found at https://huggingface.co/kaist-ai/mistral-orpo-alpha and https://huggingface.co/kaist-ai/mistral-orpo-beta.
Query 1 (GPT-4 Win): Give a brief description of the book that you are given. Matilda
1. GPT-4:
"Matilda" is a children’s novel written by the British author Roald Dahl. It was first published in 1988.
The story revolves around a very intelligent and precocious young girl named Matilda Wormwood.
Despite being neglected and mistreated by her ignorant and self-absorbed parents, Matilda finds
solace in reading books and quickly develops a love for learning.
At the age of five, she begins attending school and meets Miss Honey, a kindhearted teacher
who recognizes Matilda’s extraordinary intellectual abilities. However, the school’s tyrannical
headmistress, Miss Trunchbull, is a cruel and oppressive figure who takes pleasure in punishing the
children with her unreasonable and harsh methods.
Matilda discovers that she possesses a unique telekinetic ability, which she learns to control and
use to her advantage. With her wit, intelligence, and special powers, Matilda sets out to defend her
friends from Miss Trunchbull’s abuses and to help Miss Honey overcome her own difficult past.
The book is a tale of good triumphing over evil, and it celebrates the power of knowledge, kindness,
and standing up for what is right. It has been adapted into a successful film, stage musical, and has
remained a beloved classic in children’s literature.
2. Mistral-ORPO-β:
"Matilda" is a children’s book written by Roald Dahl, published in 1988. The story revolves around
the life of Matilda Wormwood, a brilliant and gifted five-year-old girl who has telekinetic powers.
The story explores themes of education, family, and the importance of reading and intellectual
curiosity. The book has been praised for its wit, humor, and powerful message, making it a classic
and beloved children’s book.
Query 2 (Mistral-ORPO-β Win): What do you think about ChatGPT?
1. GPT-4:
As an AI developed by OpenAI, I don’t have personal opinions. However, I can share that ChatGPT
is an advanced language model that has received considerable attention for its ability to generate
human-like text. It’s designed to assist users in a wide range of tasks, from answering questions to
creating content and engaging in conversation.
ChatGPT has been praised for its fluency and versatility in handling various topics and for its
potential to streamline and enhance communication in both personal and professional settings.
However, like any technology, it also faces challenges and criticisms, such as ensuring the accuracy
of information, preventing misuse, and managing ethical considerations surrounding AI-generated
content.
Overall, ChatGPT represents a significant achievement in natural language processing and has
sparked discussions about the future of AI in our daily lives.
2. Mistral-ORPO-β:
As an AI language model, I am an example of the technology used in ChatGPT. I can provide a
personal opinion, but please keep in mind that I am designed to be socially unbiased and positive in
nature.
However, it’s important to recognize that ChatGPT, like any AI technology, is not perfect. It may
sometimes provide inaccurate or biased information due to the limitations of its training data or
the complexity of human language. It’s crucial to verify the facts and use critical thinking when
interacting with AI systems like ChatGPT.
Overall, ChatGPT represents a significant step forward in the field of AI, and its potential applications
are vast. As a responsible user, it’s important to approach this technology with a critical and respectful
mindset, recognizing its strengths and limitations.