Date

Feb 2, 2026

AI Vocabulary Cheat Sheet

If you've felt lost in AI jargon this year, here's your catch-up: the essential terms and who's actually winning the model race.

Let's start with what matters:

LLM (Large Language Model): This powers ChatGPT, Claude, Gemini—any AI that reads and writes text. It's a neural network trained to predict the next word. The "large" part? Billions of parameters (tiny dials the AI adjusts during training).
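
To make "predict the next word" concrete, here's a toy sketch of next-token prediction; the four-word vocabulary and the scores are made up for illustration, not from any real model.

```python
import math

# Toy next-token prediction: a real LLM does this over ~100k tokens, not four.
vocab = ["cat", "dog", "pizza", "the"]   # made-up mini vocabulary
logits = [2.1, 1.9, 0.3, -1.0]           # raw scores a trained model might output

# Softmax turns raw scores into a probability distribution over the vocabulary.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Greedy decoding: pick the highest-probability token as the "next word".
next_token = vocab[probs.index(max(probs))]
print({t: round(p, 2) for t, p in zip(vocab, probs)}, "->", next_token)
```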

Tokens: The building blocks of how LLMs process text. One token ≈ 3-4 characters. Why care? You pay per token, and models have a "context window" (the maximum amount of text they can consider at once) measured in tokens.
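
A back-of-the-envelope sketch of that rule of thumb; the per-million-token price below is a placeholder, not any provider's actual rate.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough count using the ~3-4 characters per token rule of thumb."""
    return max(1, round(len(text) / chars_per_token))

def estimate_cost(text: str, price_per_million: float = 3.00) -> float:
    """Placeholder pricing -- real rates vary by model and provider."""
    return estimate_tokens(text) * price_per_million / 1_000_000

prompt = "Explain KV caching like I'm five. " * 200   # pretend this is a long prompt
print(estimate_tokens(prompt), "tokens, roughly $", round(estimate_cost(prompt), 4))
```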

RLVR (Reinforcement Learning from Verifiable Rewards): The paradigm shift of 2025, per Andrej Karpathy. Instead of showing models examples, you give them math problems or code tests with verifiable answers. Get it right? Reward. This led to models like OpenAI's o1 and DeepSeek-R1 that genuinely "reason."
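
The core mechanic is just a checker you can trust. A hypothetical sketch of that reward signal (names and setup are mine, not any lab's actual training code):

```python
def verifiable_reward(model_answer: str, expected: str) -> float:
    """Reward 1.0 only when the answer matches a result we can check objectively."""
    return 1.0 if model_answer.strip() == expected.strip() else 0.0

# Toy example: a math problem whose answer can be verified exactly.
problem = "What is 17 * 24?"
model_answer = "408"                                  # pretend the model produced this
print(verifiable_reward(model_answer, str(17 * 24)))  # 1.0 -> the RL loop reinforces this behavior
```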

KV Cache: Why your AI conversations got faster and cheaper. When LLMs process text, they calculate Key and Value matrices for "attention." These don't change for processed text, so models save (cache) them. Result: 10x cheaper tokens and 85% faster responses.
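
A toy numpy sketch of the trick: Key/Value vectors for tokens you've already processed get stored and reused, so each new step only computes them for the newest token (real models cache per layer and per attention head; the projection matrices here are random stand-ins).

```python
import numpy as np

d = 8                                   # toy embedding size
rng = np.random.default_rng(0)
W_K, W_V, W_Q = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))  # stand-in projections
kv_cache = {"K": np.empty((0, d)), "V": np.empty((0, d))}

def decode_step(x: np.ndarray) -> np.ndarray:
    """Compute K/V only for the newest token; attend over everything cached so far."""
    kv_cache["K"] = np.vstack([kv_cache["K"], x @ W_K])   # append, don't recompute old tokens
    kv_cache["V"] = np.vstack([kv_cache["V"], x @ W_V])
    scores = kv_cache["K"] @ (x @ W_Q) / np.sqrt(d)       # attention scores vs. all cached keys
    weights = np.exp(scores - scores.max()); weights /= weights.sum()
    return weights @ kv_cache["V"]                        # attention output for the new token

for _ in range(5):                  # each step reuses the cache instead of reprocessing history
    decode_step(rng.standard_normal(d))
print(kv_cache["K"].shape)          # (5, 8): one cached Key vector per processed token
```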

Context Compression: When a conversation outgrows the context window, older messages get compressed (summarized or restructured into a compact form) so the model can keep going. Factory.ai's research shows their structured approach scores 3.70 vs Anthropic's 3.44 vs OpenAI's 3.35. All achieve ~99% compression, but quality varies.
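
None of the three labs publish their exact recipe here, so this is one hypothetical sketch of the general idea: when the history blows past a token budget, collapse older turns into a short summary and keep the most recent turns verbatim.

```python
def compress_history(messages: list[dict], budget_tokens: int = 1000,
                     keep_recent: int = 4, chars_per_token: float = 4.0) -> list[dict]:
    """Hypothetical sketch: summarize old turns when history blows past a token budget."""
    total = sum(len(m["content"]) for m in messages) / chars_per_token
    if total <= budget_tokens:
        return messages                       # still fits, nothing to do

    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Stand-in "summary": in practice you'd ask a model to produce a structured digest.
    summary = "Summary of %d earlier turns: " % len(old) + "; ".join(
        m["content"][:40] for m in old)
    return [{"role": "system", "content": summary}] + recent

history = [{"role": "user", "content": "tell me about tokens " * 50}] * 20
print(len(compress_history(history)))   # -> 5: one summary message plus the 4 most recent turns
```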

And the model landscape as of the end of 2025:

  • OpenAI: GPT-5.2 launched Dec 11 with GPT Image 1.5 following Dec 16. CEO Sam Altman declared "code red" after losing market share to Google.

  • Google: Gemini 3 Flash (Dec 17) now powers the Gemini app globally. Processing >1 trillion tokens daily.

  • Anthropic: Claude Opus 4.5 and Sonnet 4.5 dominate coding benchmarks. Hit $2B annualized revenue in Q1 2025.

  • xAI: Grok 4.1 topped LMArena in November, and there’s another big Grok model training as we speak…

  • DeepSeek (The Wildcard): DeepSeek-R1 matched OpenAI's o1 after training for just $5.3M. Surpassed ChatGPT as #1 iOS app in January, causing an 18% NVIDIA stock drop.

Why these terms and players matter: There are three key takeaways from 2025:

  1. Reasoning models can now "think before speaking"

  2. China proved frontier AI doesn't need billions in training costs

  3. Efficiency optimizations (caching, compression) matter more than raw scale (then imagine scaling the most efficient model possible…).

TEAM / The Neuron
THANK YOU / Grant Harvey
LINK / Enjoy

Author

Nathan Hackstock

Creative Director

Nathan is a strategic Creative Director with a passion for building things, teams, cultures, brands and businesses.

Related Snacks

MTV Rewind

For those still wanting their MTV

Future Proof

The Buybaculator helps you calculate how many bucks your old tech junk is worth in Buy Back gift card currency.

Space Oddity

The first interspace music video, with the most relevant of soundtracks as its content.

Make
good
things

With good people
for good brands
