LoRA

Low-Rank Adaptation — a parameter-efficient fine-tuning method that freezes the pre-trained model weights and injects trainable low-rank decomposition matrices into selected layers (typically the attention projections). Because only the small low-rank factors are trained, LoRA dramatically reduces the number of trainable parameters (often well under 1% of the full model) while achieving performance comparable to full fine-tuning. It is increasingly used to adapt large VLA models to specific robot embodiments or tasks.
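
A minimal NumPy sketch of the core idea (all names and sizes here are illustrative, not from any particular library): the frozen weight `W` is augmented with a trainable low-rank update `B @ A`, scaled by `alpha / r`. Initializing `B` to zeros makes the adapter a no-op at the start of fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4  # illustrative dimensions; rank r << d
alpha = 8.0                 # LoRA scaling hyperparameter

# Frozen pre-trained weight: never updated during fine-tuning.
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors. A starts with small random values,
# B starts at zero, so the adapter initially contributes nothing.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- only A and B would receive gradients.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0, the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)

# Parameter savings: r * (d_in + d_out) trainable values
# versus d_in * d_out for full fine-tuning of this layer.
trainable = A.size + B.size
print(trainable, W.size)  # 512 trainable vs 4096 frozen
```

After fine-tuning, the update can be merged back (`W + (alpha / r) * B @ A`), so inference adds no extra latency over the original model.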
