arxiv:2602.02383

SLIME: Stabilized Likelihood Implicit Margin Enforcement for Preference Optimization

Published on Feb 2 · Submitted by Maksim Afanasyev on Feb 3

AI-generated summary

SLIME is a novel reference-free alignment objective for large language models that decouples preference learning from generation quality through a three-pronged approach combining likelihood maximization, probability stabilization, and dual-margin constraints.

Abstract

Direct preference optimization methods have emerged as a computationally efficient alternative to Reinforcement Learning from Human Feedback (RLHF) for aligning Large Language Models (LLMs). Recent approaches have streamlined the alignment process by deriving implicit reward functions, yet they often suffer from a critical objective mismatch: optimizing the relative margin between chosen and rejected responses does not guarantee the preservation of the chosen response's absolute likelihood. This can lead to "unlearning", where the model degrades the probability of high-quality outputs to satisfy margin constraints, and "formatting collapse" caused by the over-penalization of rejected sequences. In this work, we introduce SLIME (Stabilized Likelihood Implicit Margin Enforcement), a reference-free alignment objective designed to decouple preference learning from generation quality. SLIME incorporates a three-pronged objective: (1) an anchoring term to maximize the likelihood of preferred responses; (2) a stabilizing penalty that prevents the probabilities of rejected tokens from collapsing to zero; and (3) a dual-margin mechanism that combines hard and soft constraints for precise boundary shaping. Our results demonstrate that SLIME achieves superior performance compared to state-of-the-art baselines while maintaining higher generation stability.
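
To see the objective mismatch concretely (the log-likelihood numbers below are hypothetical, chosen only to illustrate the claim, not figures from the paper), a margin-only objective can report progress even while the chosen response loses probability mass:

$$
\log \pi_\theta(y_w \mid x):\; -10 \to -20, \qquad
\log \pi_\theta(y_l \mid x):\; -12 \to -40
\;\;\Rightarrow\;\;
\text{margin } \log \pi_\theta(y_w \mid x) - \log \pi_\theta(y_l \mid x):\; 2 \to 20.
$$

The margin grows tenfold while the absolute likelihood of the preferred response falls sharply, which is exactly the "unlearning" failure mode the anchoring term is meant to prevent.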

Community

Paper author · Paper submitter

We introduce SLIME, a reference-free preference optimization objective designed to decouple preference learning from generation quality. Our approach uses a three-pronged objective (a code sketch follows the list below):

  • Likelihood Anchoring: An explicit term to maximize the likelihood of the preferred response, preventing quality degradation.
  • Token-Level Stabilization: A softplus-based penalty that prevents rejected token probabilities from collapsing to zero, preserving linguistic fluency.
  • Dual-Margin Mechanism: A novel combination of hard and soft margins for precise boundary shaping without vanishing gradients.
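
A minimal PyTorch sketch of how these three terms could be combined, purely for illustration: the specific functional forms, hyperparameter names (lambda_anchor, lambda_stab, log_floor, hard_margin, soft_beta), and default values below are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F


def slime_loss_sketch(
    chosen_logps: torch.Tensor,    # per-token log-probs of chosen responses, shape (B, T_c)
    rejected_logps: torch.Tensor,  # per-token log-probs of rejected responses, shape (B, T_r)
    chosen_mask: torch.Tensor,     # 1.0 for real tokens, 0.0 for padding, shape (B, T_c)
    rejected_mask: torch.Tensor,   # 1.0 for real tokens, 0.0 for padding, shape (B, T_r)
    lambda_anchor: float = 1.0,    # weight of the likelihood-anchoring term (assumed)
    lambda_stab: float = 0.1,      # weight of the token-level stabilization term (assumed)
    log_floor: float = -10.0,      # rejected-token log-prob floor for stabilization (assumed)
    hard_margin: float = 1.0,      # hard margin on the chosen-vs-rejected gap (assumed)
    soft_beta: float = 2.0,        # temperature of the soft (logistic) margin (assumed)
) -> torch.Tensor:
    # Length-normalized sequence log-likelihoods; reference-free, so no frozen
    # reference model is needed anywhere in the loss.
    chosen_seq = (chosen_logps * chosen_mask).sum(-1) / chosen_mask.sum(-1).clamp(min=1)
    rejected_seq = (rejected_logps * rejected_mask).sum(-1) / rejected_mask.sum(-1).clamp(min=1)

    # (1) Likelihood anchoring: keep the chosen response's absolute likelihood high.
    anchor = -chosen_seq

    # (2) Token-level stabilization: a softplus penalty that activates when a
    #     rejected token's log-prob falls below the floor, so rejected tokens are
    #     down-weighted without their probabilities collapsing to zero.
    stab_per_token = F.softplus(log_floor - rejected_logps) * rejected_mask
    stabilization = stab_per_token.sum(-1) / rejected_mask.sum(-1).clamp(min=1)

    # (3) Dual margin: a hard hinge enforcing a minimum chosen-vs-rejected gap,
    #     plus a soft logistic term that keeps gradients alive near the boundary.
    gap = chosen_seq - rejected_seq
    hard = F.relu(hard_margin - gap)
    soft = -F.logsigmoid(soft_beta * gap)

    loss = lambda_anchor * anchor + lambda_stab * stabilization + hard + soft
    return loss.mean()
```

In this reading, the anchoring term directly counteracts the "unlearning" failure mode described in the abstract, while the softplus floor keeps rejected-token probabilities from being driven to zero, the mechanism associated with formatting collapse.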

