Does Reasoning Require Scale?

A 950M-parameter model solves more competition math problems than models nearly twice its size. The gap isn't parameter count; it's training methodology and inference strategy. But cheap reasoning shifts the bottleneck to reliability: small models can reason, but they don't know when they're wrong.

Read More

On-Device LLMs: State of the Union, 2026

Three years ago, running a language model on a phone meant a toy demo. Today, billion-parameter models run in real time on flagship devices. This shift came not from faster chips alone, but from rethinking how we build, compress, and deploy models.

Read More