OpenVLA

OpenVLA is an open-source Vision-Language-Action (VLA) model built on the Prismatic VLM architecture and fine-tuned on the Open X-Embodiment dataset. It predicts discretized robot actions from camera images and natural-language instructions. Released with open weights (7B parameters), it enables the research community to study, modify, and build upon VLA models without proprietary restrictions.
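"Discretized robot actions" means each continuous action dimension (e.g. an end-effector delta) is mapped to one of a fixed number of bins, so actions can be emitted as language-model tokens and then decoded back to continuous commands. The sketch below is illustrative only: it assumes 256 uniform bins over a fixed per-dimension range (OpenVLA derives its bin ranges from the training-action distribution, which is not reproduced here), and the function names are hypothetical.

```python
N_BINS = 256  # assumed bin count per action dimension

def make_bins(low, high, n_bins=N_BINS):
    """Uniform bin edges for one action dimension (illustrative ranges)."""
    step = (high - low) / n_bins
    return [low + i * step for i in range(n_bins + 1)]

def discretize(value, edges):
    """Map a continuous action value to a bin index (a token id offset)."""
    n_bins = len(edges) - 1
    lo, hi = edges[0], edges[-1]
    frac = (value - lo) / (hi - lo)
    # clamp out-of-range values into the first/last bin
    return min(n_bins - 1, max(0, int(frac * n_bins)))

def undiscretize(idx, edges):
    """Map a bin index back to the bin-center continuous value."""
    return (edges[idx] + edges[idx + 1]) / 2.0
```

Round-tripping a value through `discretize` and `undiscretize` loses at most half a bin width, which is why a few hundred bins per dimension suffice for smooth robot control.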

Related terms: VLA, Robot Learning
