Reward Shaping
Modifying the reward function to provide a denser, more informative learning signal. For example, a potential-based shaping term gives partial credit for moving closer to the goal; shaping of this form (Ng, Harada, and Russell, 1999) provably leaves the optimal policy unchanged. Reward shaping can dramatically accelerate RL training in sparse-reward environments, but ad-hoc (non-potential-based) shaping can alter the optimal policy and introduce unintended local optima.
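A minimal sketch of potential-based shaping, F(s, s') = γΦ(s') − Φ(s), in a hypothetical 1-D gridworld where the potential Φ is the negative distance to a goal cell (the gridworld, goal position, and function names are illustrative, not from any particular library):

```python
# Potential-based reward shaping: F(s, s') = gamma * phi(s') - phi(s).
# Hypothetical 1-D gridworld; the agent's state is its integer position.

GAMMA = 0.99  # discount factor of the underlying MDP
GOAL = 10     # goal position (illustrative)

def phi(state: int) -> float:
    """Potential function: negative distance to the goal (higher = closer)."""
    return -abs(GOAL - state)

def shaped_reward(state: int, next_state: int, env_reward: float) -> float:
    """Environment reward plus the potential-based shaping term."""
    return env_reward + GAMMA * phi(next_state) - phi(state)

# A step toward the goal earns a positive shaping bonus even when the
# sparse environment reward is zero; a step away is penalized.
print(shaped_reward(3, 4, 0.0))  # positive: moved closer to GOAL
print(shaped_reward(4, 3, 0.0))  # negative: moved away from GOAL
```

Because the shaping term telescopes along any trajectory (each Φ(s) added is later subtracted, up to discounting), it changes the value of every policy by the same state-dependent offset, which is why the optimal policy is preserved.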