Mixed Precision Training
Training neural networks with a mix of 16-bit (FP16 or BF16) and 32-bit (FP32) floating-point arithmetic to reduce memory usage and increase throughput while preserving training stability. Stability is typically maintained by keeping an FP32 master copy of the weights and, for FP16, scaling the loss to prevent gradient underflow. Mixed precision is standard for training large robot learning models on GPUs: it roughly halves activation memory and can double throughput on hardware with dedicated 16-bit tensor cores compared to full FP32 training.
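A minimal sketch of the pattern using PyTorch's automatic mixed precision (AMP) API: the forward pass runs under `torch.autocast` in 16-bit, while the weights stay FP32 and a `GradScaler` handles loss scaling for FP16. The tiny model and data here are illustrative placeholders; the code falls back to BF16 on CPU (where no loss scaling is needed, so the scaler is disabled) purely so the snippet runs anywhere.

```python
import torch
import torch.nn as nn

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"
# FP16 on GPU (needs loss scaling); BF16 on CPU (wider exponent, no scaling needed)
amp_dtype = torch.float16 if use_cuda else torch.bfloat16

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)  # no-op when disabled

x = torch.randn(8, 32, device=device)
y = torch.randint(0, 10, (8,), device=device)

for _ in range(3):
    optimizer.zero_grad()
    # Forward pass and loss computed in 16-bit where safe
    with torch.autocast(device_type=device, dtype=amp_dtype):
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()  # scale loss so FP16 grads don't underflow
    scaler.step(optimizer)         # unscales grads; skips step on inf/NaN
    scaler.update()

# Parameters remain FP32 master weights; only compute ran in 16-bit
assert model[0].weight.dtype == torch.float32
```

Note the division of labor: `autocast` chooses per-op precision (matmuls in 16-bit, reductions in FP32), while the optimizer always updates the FP32 parameters.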