Zubnet AILearnWiki › Attention

Attention

Also known as: Attention Mechanism, Self-Attention
The core mechanism in Transformers that lets a model weigh which parts of the input are most relevant to each other. Instead of processing text strictly left-to-right like older recurrent models, attention lets every token "look at" every other token simultaneously to build context.

Why it matters

Attention is why modern LLMs understand that "bank" means different things in "river bank" vs. "bank account." It's also why longer context windows cost more: attention's compute and memory grow quadratically with sequence length, since every token scores every other token.
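The idea can be sketched in a few lines of NumPy. This is a minimal, illustrative scaled dot-product self-attention, not any particular library's implementation; the projection matrices `Wq`, `Wk`, `Wv` and the random inputs are assumptions for the example. Note the `(seq_len, seq_len)` score matrix, which is the source of the quadratic cost mentioned above.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors.

    X: (seq_len, d_model) token embeddings.
    Wq, Wk, Wv: (d_model, d_k) projection matrices (illustrative).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Every token scores every other token: a (seq_len, seq_len) matrix.
    # This pairwise table is why cost grows quadratically with length.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted mix of all value vectors.
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Doubling `seq_len` quadruples the size of `scores`, which is the quadratic scaling in practice.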
