Capabilities that appear in AI models at scale but that the model was never explicitly trained for — abilities that seem to "emerge" suddenly once a model crosses a certain size or training threshold. A model trained purely to predict the next word somehow learns to do arithmetic, translate between languages it was never taught, or write working code. Emergence is one of the most debated phenomena in AI: is it a genuine phase transition, or a measurement artifact?
Why it matters
Emergence sits at the heart of the biggest question in AI: can we predict what larger models will be able to do? If capabilities truly emerge unpredictably at scale, then every larger model is a box of surprises. If emergence is an artifact of how we measure, then scaling is more predictable than it seems. The answer shapes everything from safety planning to investment decisions.
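To make the measurement-artifact side of the debate concrete, here is a minimal sketch (all numbers hypothetical, not from any real benchmark): suppose per-token accuracy improves smoothly as models scale, but the task is scored with exact match, which counts an answer as correct only if every token is right. The smooth curve and the scored curve then look very different.

```python
import numpy as np

# Toy illustration of the measurement-artifact argument.
# All numbers are hypothetical: per-token accuracy improves smoothly
# with scale, but exact match requires every token of the answer to be
# correct, so the scored metric sits near zero and then jumps.

scales = np.logspace(7, 11, 9)  # hypothetical parameter counts, 1e7..1e11

# Smooth S-curve: per-token accuracy rises gradually with log-scale.
per_token = 1.0 / (1.0 + np.exp(-1.5 * (np.log10(scales) - 9.0)))

answer_len = 10  # scored correct only if all 10 answer tokens match
exact_match = per_token ** answer_len

for n, p, em in zip(scales, per_token, exact_match):
    print(f"{n:>15,.0f} params | per-token acc {p:.2f} | exact match {em:.4f}")
```

On these toy numbers, per-token accuracy climbs steadily from about 0.05 to 0.95, while exact match stays near zero until roughly the 1e10 mark and then leaps to ~0.6 — the same underlying improvement, two very different curves. Nothing here settles the debate, but it shows how the choice of metric alone can manufacture an apparent phase transition.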