Abstract: In this talk, I will cover two recent papers on preference alignment: Odds Ratio Preference Optimization (ORPO, EMNLP 2024) [1], which discusses the role of the reference model in preference alignment methods such as DPO and RLHF, and Margin-aware Preference Optimization (under review at CVPR) [2], which examines the risks of reference mismatch, where the preference alignment data has features that diverge from the reference model.
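For readers unfamiliar with ORPO, a rough sketch of its objective, following [1] (notation and the weighting symbol \lambda are a paraphrase, not the paper's exact presentation): the supervised fine-tuning loss is augmented with an odds-ratio penalty computed from the policy's own likelihoods,

\mathcal{L}_{\mathrm{ORPO}} = \mathbb{E}_{(x,\, y_w,\, y_l)}\big[\, \mathcal{L}_{\mathrm{SFT}} + \lambda \, \mathcal{L}_{\mathrm{OR}} \,\big], \qquad
\mathcal{L}_{\mathrm{OR}} = -\log \sigma\!\left( \log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)} \right), \qquad
\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)},

where y_w and y_l are the chosen and rejected responses. Because the contrastive term depends only on the policy's own odds, no frozen reference model is needed during training, which is the contrast with DPO and RLHF that the talk draws out.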
Bio: James is an Assistant Professor at the KAIST Graduate School of AI, South Korea, working on large-scale and knowledge-intensive natural language understanding. James recently completed his PhD at the University of Cambridge, where he developed models and methods for automated fact verification and correction.
[1] https://aclanthology.org/2024.emnlp-main.626/
[2] https://arxiv.org/pdf/2406.06424