
LoRA

Also known as: Low-Rank Adaptation
A technique that makes fine-tuning dramatically cheaper by training only a small number of additional parameters instead of modifying the entire model. LoRA freezes the pretrained weights and injects pairs of small low-rank matrices into selected layers; only those matrices are trained. The resulting "adapters" are lightweight add-ons (often just megabytes) that modify a model's behavior without retraining its billions of parameters.
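The idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not any library's actual API: a frozen weight matrix `W` gets a trainable low-rank correction `B @ A`, where the rank `r` and all dimensions here are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4              # r << d_in: the "low rank"
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def forward(x):
    # Base model output plus the low-rank adapter correction B @ (A @ x).
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(forward(x), W @ x)

# Parameter counts: full fine-tuning vs. training only the adapter.
full_params = W.size            # 64 * 64 = 4096
lora_params = A.size + B.size   # 4*64 + 64*4 = 512
print(full_params, lora_params)
```

Here the adapter is 1/8 the size of the full weight matrix; in real models, where `r` is tiny relative to layer dimensions in the thousands, the savings are far larger, which is why an adapter file can be megabytes while the base model is gigabytes.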

Why it matters

LoRA democratized fine-tuning. Before it, customizing a 7B model required serious GPU resources. Now you can fine-tune on a single consumer GPU in hours and share the tiny adapter file. It's why there are thousands of specialized models on HuggingFace.
