The numerical values inside a neural network that get adjusted during training to minimize error. Each connection between neurons has a weight that determines how much influence one neuron has on the next. When you download a model file — a .safetensors, .gguf, or .pt file — you're downloading its weights. "Releasing the weights" means publishing these files so anyone can run the model. Weights ARE the model; everything else is just the architecture that tells you how to arrange them.
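The idea that each connection carries a weight can be sketched in a few lines of plain Python. The values below are illustrative, not taken from any real model:

```python
# Minimal sketch: a single neuron's output is a weighted sum of its inputs.
# All numbers here are made up for illustration.
inputs = [0.5, -1.2, 3.0]    # activations arriving from the previous layer
weights = [0.8, 0.1, -0.4]   # one learned weight per incoming connection
bias = 0.05                  # biases are learned parameters too

# Pre-activation output: dot(inputs, weights) + bias
output = sum(x * w for x, w in zip(inputs, weights)) + bias
print(output)
```

Training adjusts `weights` and `bias` across billions of such connections; the weight files you download are exactly these numbers, serialized.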
Why it matters
When the AI industry says "open weights" vs "open source," the distinction matters. Weights alone let you run and fine-tune a model, but without the training code, data, and recipe, you can't reproduce it from scratch. Understanding weights helps you grasp model distribution, quantization (reducing weight precision), and why a 7B model needs ~14 GB of disk space in fp16: 7 billion parameters at 2 bytes each.
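The disk-space arithmetic generalizes to other precisions. A back-of-the-envelope sketch (byte sizes are standard for the named formats; the 7B count matches the example above):

```python
# Approximate weight storage for a 7B-parameter model at common precisions.
params = 7_000_000_000

bytes_per_param = {
    "fp32": 4,    # full 32-bit floats
    "fp16": 2,    # 16-bit floats -- the ~14 GB figure
    "int8": 1,    # 8-bit quantized
    "int4": 0.5,  # 4-bit quantized, common in .gguf releases
}

for fmt, size in bytes_per_param.items():
    gb = params * size / 1e9
    print(f"{fmt}: {gb:.1f} GB")
```

This is why quantization is attractive: dropping from fp16 to int4 shrinks the same model from roughly 14 GB to roughly 3.5 GB, at some cost in accuracy.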