Pre-training

Pre-training is the phase of model development in which a neural network is trained on a large, diverse dataset before task-specific fine-tuning. For robotics foundation models, pre-training may draw on internet-scale vision-language data (images, video, text), cross-embodiment robot datasets such as Open X-Embodiment, synthetic simulation data, or a combination of these. The pre-trained model learns rich, general representations of objects, actions, and concepts that transfer to downstream robot tasks with far fewer demonstrations than training from scratch would require. Pre-training underlies the success of VLA models such as RT-2, which benefits from both internet-scale vision-language data and robot trajectory data.
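The transfer benefit described above can be illustrated with a deliberately tiny sketch (hypothetical data and model, not any specific robotics pipeline): a one-parameter-pair linear model is first "pre-trained" on a large dataset from a related function, then fine-tuned on a handful of task samples. With the same small fine-tuning budget, the pre-trained initialization reaches much lower task error than training from scratch.

```python
import random

def sgd(data, w, b, lr=0.01, epochs=100):
    """Plain stochastic gradient descent on squared error for y = w*x + b."""
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def mse(data, w, b):
    return sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)

random.seed(0)
# Large, diverse "pre-training" dataset from a related function y = 2x + 1.
pretrain = [(x, 2 * x + 1) for x in (random.uniform(-1, 1) for _ in range(1000))]
# Tiny task-specific dataset from a nearby function y = 2.2x + 0.8
# (standing in for a handful of robot demonstrations).
task = [(x, 2.2 * x + 0.8) for x in (random.uniform(-1, 1) for _ in range(5))]

# Pre-train, then fine-tune the pre-trained weights on the small task set.
w0, b0 = sgd(pretrain, 0.0, 0.0)
w_ft, b_ft = sgd(task, w0, b0, epochs=10)

# Same small budget, but starting from scratch.
w_s, b_s = sgd(task, 0.0, 0.0, epochs=10)

print(mse(task, w_ft, b_ft), mse(task, w_s, b_s))
```

The pre-trained start lands near the task optimum, so a few fine-tuning steps suffice, whereas the from-scratch run is still far away after the same budget. Real robotics foundation models follow the same pattern at vastly larger scale, with high-capacity networks in place of the linear model.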
Related terms: Foundation Model, Training, Transfer Learning
