OpenRubrics: Towards Scalable Synthetic Rubric Generation for Reward Modeling and LLM Alignment Paper • 2510.07743 • Published Oct 9, 2025 • 10
RECAST: Expanding the Boundaries of LLMs' Complex Instruction Following with Multi-Constraint Data Paper • 2505.19030 • Published May 25, 2025 • 1
Scaling Towards the Information Boundary of Instruction Set: InfinityInstruct-Subject Technical Report Paper • 2507.06968 • Published Jul 9, 2025 • 1
EvoSyn: Generalizable Evolutionary Data Synthesis for Verifiable Learning Paper • 2510.17928 • Published Oct 20, 2025 • 4
Plan, Verify and Fill: A Structured Parallel Decoding Approach for Diffusion Language Models Paper • 2601.12247 • Published 8 days ago • 1
From Bits to Rounds: Parallel Decoding with Exploration for Diffusion Language Models Paper • 2511.21103 • Published Nov 26, 2025 • 1
Learning to Parallel: Accelerating Diffusion Large Language Models via Adaptive Parallel Decoding Paper • 2509.25188 • Published Sep 29, 2025 • 3
FS-DFM: Fast and Accurate Long Text Generation with Few-Step Diffusion Language Models Paper • 2509.20624 • Published Sep 24, 2025 • 1
Attention Is All You Need for KV Cache in Diffusion LLMs Paper • 2510.14973 • Published Oct 16, 2025 • 42
d^2Cache: Accelerating Diffusion-Based LLMs via Dual Adaptive Caching Paper • 2509.23094 • Published Sep 27, 2025 • 5
Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models Paper • 2503.09573 • Published Mar 12, 2025 • 75
SoftCoT++: Test-Time Scaling with Soft Chain-of-Thought Reasoning Paper • 2505.11484 • Published May 16, 2025 • 6
SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs Paper • 2502.12134 • Published Feb 17, 2025 • 3
Soft Thinking: Unlocking the Reasoning Potential of LLMs in Continuous Concept Space Paper • 2505.15778 • Published May 21, 2025 • 19
Pretraining Language Models to Ponder in Continuous Space Paper • 2505.20674 • Published May 27, 2025 • 3
Scaling Latent Reasoning via Looped Language Models Paper • 2510.25741 • Published Oct 29, 2025 • 223
Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding Paper • 2505.22618 • Published May 28, 2025 • 45
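Several of the entries above (e.g. Fast-dLLM, Learning to Parallel, From Bits to Rounds, Plan, Verify and Fill) revolve around parallel decoding of masked diffusion language models, where many masked positions are committed per denoising round instead of one token per step. The snippet below is a minimal sketch of a generic confidence-thresholded variant of that idea, not a reproduction of any listed paper's algorithm: the model is a random stub, and all names (`dummy_denoiser`, `MASK_ID`, `THRESHOLD`) are hypothetical stand-ins.

```python
# Minimal sketch: confidence-thresholded parallel decoding for a masked
# diffusion LM. Illustrative only; a real diffusion LLM would replace
# dummy_denoiser with a bidirectional transformer pass (possibly cached).

import numpy as np

VOCAB_SIZE = 32        # toy vocabulary of "real" tokens (ids 0..31)
MASK_ID = VOCAB_SIZE   # hypothetical [MASK] id, kept outside the vocab
SEQ_LEN = 16           # length of the block being denoised
THRESHOLD = 0.9        # confidence needed to commit a token in parallel

rng = np.random.default_rng(0)

def dummy_denoiser(tokens: np.ndarray) -> np.ndarray:
    """Stand-in for one denoising pass: returns per-position probabilities
    over the vocabulary (ignores the input tokens, unlike a real model)."""
    logits = rng.normal(size=(len(tokens), VOCAB_SIZE))
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return probs / probs.sum(axis=-1, keepdims=True)

def parallel_decode(seq_len: int = SEQ_LEN, threshold: float = THRESHOLD):
    tokens = np.full(seq_len, MASK_ID, dtype=np.int64)  # start fully masked
    rounds = 0
    while (tokens == MASK_ID).any():
        probs = dummy_denoiser(tokens)
        masked = np.where(tokens == MASK_ID)[0]
        conf = probs[masked].max(axis=-1)   # per-position confidence
        pick = probs[masked].argmax(axis=-1)
        # Commit every masked position whose confidence clears the threshold;
        # if none does, commit the single most confident one so decoding
        # always makes progress (one token per round in the worst case).
        accept = conf >= threshold
        if not accept.any():
            accept[conf.argmax()] = True
        tokens[masked[accept]] = pick[accept]
        rounds += 1
    return tokens, rounds

if __name__ == "__main__":
    # Low threshold here only so the random stub commits several tokens per
    # round; with a trained model a much higher threshold would be typical.
    out, rounds = parallel_decode(threshold=0.2)
    print(f"decoded {len(out)} tokens in {rounds} rounds")
```

The threshold trades speed for accuracy: a higher value behaves more like one-token-per-round decoding, while a lower value unmasks more positions in parallel at the risk of committing low-confidence tokens. The papers above differ mainly in how that decision is made (learned schedules, planning passes, or information-theoretic criteria) and in how KV/feature caching is combined with it.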