Latest commit: PCL-张晗 9338f32d36 (3 months ago)

Repository contents:

- trlx
- README.md
- dpo_accelerate_config.yaml
- dpo_bf16_accelerate_config.yaml
- train_DPO.py
Reproductions of DPO [1], PRO [2], RRHF [3], SPIN (online method) [4], CPPO (online method) [5], and COPF [6].
[1] DPO: Direct Preference Optimization: Your Language Model is Secretly a Reward Model
[2] PRO: Preference Ranking Optimization for Human Alignment
[3] RRHF: Rank Responses to Align Language Models with Human Feedback without tears
[4] SPIN: Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models
[5] CPPO: Continual Learning for Reinforcement Learning with Human Feedback
[6] COPF: Continual Learning Human Preference through Optimal Policy Fitting
This repository reproduces a series of offline alignment algorithms, including DPO, PRO, RRHF, and SPIN, as well as CPPO (published by our team at ICLR 2024) and our latest work, COPR. Discussion and exchanges are welcome.
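Of the methods above, DPO is the simplest to state: it minimizes a pairwise logistic loss on the log-probability margin between the chosen and rejected response, measured relative to a frozen reference model. A minimal per-example sketch in plain Python (the function name and arguments are illustrative, not taken from train_DPO.py):

```python
import math

def dpo_loss(policy_logp_chosen: float, policy_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * (policy margin - reference margin))."""
    logits = beta * ((policy_logp_chosen - ref_logp_chosen)
                     - (policy_logp_rejected - ref_logp_rejected))
    # Numerically stable form of -log(sigmoid(x)): log(1 + exp(-x)).
    return math.log1p(math.exp(-logits))
```

When the policy's chosen-vs-rejected margin equals the reference model's, the loss is log 2; as the policy widens its margin over the reference, the loss decreases toward 0.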