L2 Regularization

L2 regularization adds a penalty proportional to the squared magnitude of the model weights to the loss function, discouraging large weights and reducing overfitting. Also known as weight decay, it is used in most neural network training. In robot learning, an appropriate weight decay setting helps prevent a policy from memorizing specific demonstration trajectories.
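
The effect can be illustrated with a minimal sketch: L2-regularized linear regression trained by gradient descent, where the penalty coefficient `lam` (a hypothetical name, not from any particular library) shrinks the learned weight toward zero.

```python
# Minimal sketch of L2 regularization on a one-weight linear model.
# All names (lam, lr, steps) are illustrative, not tied to any library.

def train(xs, ys, lam=0.0, lr=0.01, steps=1000):
    """Fit y ~ w * x by gradient descent on MSE + lam * w**2."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of mean squared error, plus 2 * lam * w from the
        # derivative of the L2 penalty term lam * w**2.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        grad += 2 * lam * w
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # data generated with true slope 2
w_plain = train(xs, ys, lam=0.0)  # recovers roughly w = 2
w_reg = train(xs, ys, lam=1.0)    # penalty pulls w below 2
```

With `lam=0` the fit recovers the true slope; a nonzero `lam` trades a little training accuracy for a smaller weight, which is exactly the bias that discourages memorizing individual training examples.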
