Reward Shaping

Modifying the reward function to provide denser, more informative learning signals without changing the optimal policy. The canonical example is potential-based shaping, which adds a bonus F(s, s') = γΦ(s') − Φ(s) for some potential function Φ, such as negative distance to the goal, and provably preserves the optimal policy. Reward shaping can dramatically accelerate RL training in sparse-reward environments, but shaping terms that are not potential-based require care, since they can change the optimal policy or introduce unintended local optima.
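A minimal sketch of potential-based shaping in a hypothetical 1-D gridworld, where the potential Φ(s) is the negative distance to the goal (both the environment and the potential function are assumptions for illustration):

```python
GOAL = 10     # goal cell in a hypothetical 1-D gridworld
GAMMA = 0.99  # discount factor

def potential(s):
    """Phi(s): negative distance to the goal (illustrative choice)."""
    return -abs(GOAL - s)

def env_reward(s_next):
    """Sparse environment reward: +1 only when the goal is reached."""
    return 1.0 if s_next == GOAL else 0.0

def shaped_reward(s, s_next):
    """Potential-based shaping: r + gamma * Phi(s') - Phi(s).
    Adding this shaping term leaves the optimal policy unchanged."""
    return env_reward(s_next) + GAMMA * potential(s_next) - potential(s)

# Far from the goal the sparse reward is zero, but the shaping term
# gives positive credit for moving closer and negative for moving away.
r_toward = shaped_reward(s=3, s_next=4)  # positive
r_away = shaped_reward(s=3, s_next=2)    # negative
```

Because the shaping bonus telescopes along any trajectory, it only shifts returns by a state-dependent constant, which is why the optimal policy is preserved.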

Robot Learning · RL
