From Prior to Pro: Efficient Skill Mastery via Distribution Contractive RL Finetuning
Abstract
DICE-RL enhances pretrained generative robot policies by using reinforcement learning as a distribution-contraction operator, achieving complex manipulation skills from pixel inputs with improved stability and sample efficiency.
We introduce Distribution Contractive Reinforcement Learning (DICE-RL), a framework that uses reinforcement learning (RL) as a "distribution contraction" operator to refine pretrained generative robot policies. DICE-RL turns a pretrained behavior prior into a high-performing "pro" policy by amplifying high-success behaviors from online feedback. We pretrain a diffusion- or flow-based policy for broad behavioral coverage, then finetune it with a stable, sample-efficient residual off-policy RL framework that combines selective behavior regularization with value-guided action selection. Extensive experiments and analyses show that DICE-RL reliably improves performance with strong stability and sample efficiency. It enables mastery of complex long-horizon manipulation skills directly from high-dimensional pixel inputs, both in simulation and on a real robot. Project website: https://zhanyisun.github.io/dice.rl.2026/.
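The abstract describes the finetuning recipe only at a high level. The sketch below illustrates one plausible reading of the value-guided action selection step: sample candidate actions from a frozen generative prior, apply a small residual correction, and execute the candidate an off-policy critic scores highest. This is a minimal sketch under stated assumptions, not the paper's implementation; the `PriorStub`, `Critic`, and `ResidualPolicy` classes, the `sample` interface, the residual scale, and all network sizes are illustrative placeholders.

```python
import torch
import torch.nn as nn


class PriorStub(nn.Module):
    """Stand-in for a pretrained diffusion/flow behavior prior; here it simply
    samples from a state-conditioned Gaussian so the sketch runs end to end."""

    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.mean = nn.Linear(obs_dim, act_dim)

    @torch.no_grad()
    def sample(self, obs: torch.Tensor) -> torch.Tensor:
        noise = 0.1 * torch.randn(obs.shape[0], self.mean.out_features)
        return torch.tanh(self.mean(obs) + noise)


class Critic(nn.Module):
    """Off-policy Q-function over (observation, action) pairs."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1))


class ResidualPolicy(nn.Module):
    """Small bounded correction added to prior samples, keeping the finetuned
    policy close to the pretrained behavior prior."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256, scale: float = 0.1):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor, prior_act: torch.Tensor) -> torch.Tensor:
        return prior_act + self.scale * self.net(torch.cat([obs, prior_act], dim=-1))


@torch.no_grad()
def select_action(prior, residual, critic, obs: torch.Tensor, num_samples: int = 16) -> torch.Tensor:
    """Value-guided selection: draw candidates from the prior, apply the
    residual correction, and execute the candidate the critic scores highest."""
    obs_rep = obs.expand(num_samples, -1)               # obs has shape (1, obs_dim)
    candidates = prior.sample(obs_rep)                  # (K, act_dim) prior samples
    corrected = residual(obs_rep, candidates)           # residual-adjusted candidates
    q_values = critic(obs_rep, corrected).squeeze(-1)   # (K,) value estimates
    return corrected[q_values.argmax()]


if __name__ == "__main__":
    obs_dim, act_dim = 32, 7
    prior = PriorStub(obs_dim, act_dim)
    residual = ResidualPolicy(obs_dim, act_dim)
    critic = Critic(obs_dim, act_dim)
    action = select_action(prior, residual, critic, torch.randn(1, obs_dim))
    print(action.shape)  # torch.Size([7])
```

Keeping the residual bounded and selecting among prior samples by value is one way such a method could contract the prior's action distribution toward high-success behaviors while staying close to what the prior already covers.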
Community
A sample-efficient and stable off-policy RL finetuning method for generative BC policies.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- RFS: Reinforcement Learning with Residual Flow Steering for Dexterous Manipulation (2026)
- RISE: Self-Improving Robot Policy with Compositional World Model (2026)
- SERFN: Sample-Efficient Real-World Dexterous Policy Fine-Tuning via Action-Chunked Critics and Normalizing Flows (2026)
- Off-Policy Actor-Critic with Sigmoid-Bounded Entropy for Real-World Robot Learning (2026)
- ALOE: Action-Level Off-Policy Evaluation for Vision-Language-Action Model Post-Training (2026)
- Simulation Distillation: Pretraining World Models in Simulation for Rapid Real-World Adaptation (2026)
- ExpertGen: Scalable Sim-to-Real Expert Policy Learning from Imperfect Behavior Priors (2026)