Task-parameterized Learning
Task-parameterized learning encodes demonstrations relative to multiple coordinate frames or task parameters (e.g., the object's pose, a target location, an obstacle frame) rather than in a single fixed world frame. At execution time, the policy adapts automatically to new object and target configurations without retraining, because it has learned the motion relative to task-relevant references. Task-parameterized Gaussian Mixture Models (TP-GMM) and kernelized movement primitives are classic implementations of this idea. The approach provides strong geometric generalization for structured pick-and-place tasks, though it requires the task frames to be identified and tracked at runtime.
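The core mechanism in TP-GMM can be sketched briefly: each frame contributes a Gaussian prediction of the next pose expressed in its own coordinates; at execution, each prediction is mapped into the world frame using the frame's current pose and the predictions are fused as a product of Gaussians, so frames that were consistent across demonstrations (low variance) dominate. The following is a minimal sketch, not an implementation of any particular library; the frame poses, means, and covariances are illustrative placeholders.

```python
import numpy as np

def frame_to_world(mu_local, cov_local, A, b):
    """Map a frame-local Gaussian into the world frame via x = A x_local + b."""
    return A @ mu_local + b, A @ cov_local @ A.T

def gaussian_product(gaussians):
    """Fuse per-frame Gaussians: Sigma = (sum_j Sigma_j^-1)^-1,
    mu = Sigma @ sum_j Sigma_j^-1 mu_j."""
    precisions = [np.linalg.inv(cov) for _, cov in gaussians]
    cov = np.linalg.inv(sum(precisions))
    mu = cov @ sum(P @ m for P, (m, _) in zip(precisions, gaussians))
    return mu, cov

# Hypothetical frame-local predictions learned from demonstrations:
# frame 1 (e.g., start pose) is confident about x, frame 2 (e.g., object
# pose) is confident about y. Low variance encodes high confidence.
mu1, cov1 = np.array([0.5, 0.0]), np.diag([0.01, 0.5])
mu2, cov2 = np.array([0.0, 0.3]), np.diag([0.5, 0.01])

# New task configuration at execution time: frame orientation A and origin b.
A1, b1 = np.eye(2), np.array([0.0, 0.0])
A2, b2 = np.eye(2), np.array([1.0, 0.2])

g1 = frame_to_world(mu1, cov1, A1, b1)
g2 = frame_to_world(mu2, cov2, A2, b2)
mu, cov = gaussian_product([g1, g2])
# The fused mean follows frame 1 in x and frame 2 in y, because each
# frame's high-precision dimension dominates the product.
```

Because the fusion reweights frames by their demonstrated consistency, moving the object (changing A2, b2) shifts only the dimensions that the object frame constrains, which is what gives the method its geometric generalization.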