The AI Dictionary Where Buzzwords Do Not Buzz
AI isn’t a brain. It’s a chef who has memorized every recipe but never tasted the salt. It’s a cracked bat that can still hit sixes if the crowd cheers loud enough. It’s math dressed up in fancy clothes.
Everywhere I look, people throw around words like tokens, transformers, and embeddings. It feels like jargon at a wedding—everyone nodding politely, nobody really tasting the food. And if I hear the words “Large Language Model” one more time without someone actually explaining what they mean, I’m going to lose it. But AI isn’t mystical. So let’s stop the madness and break it down using things that actually make sense—pani puri, dhabas, cricket, and that one uncle who tells tall tales.
Tokens
AI doesn’t eat words whole. It chews them into little pieces. “Understanding” becomes “under,” “stand,” “ing.” Like rice grains—one grain is nothing, but together they make a meal. That’s why spelling sometimes trips it up; it’s busy looking at the grains, not the dish.
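To see the rice grains in action, here’s a toy sketch in Python. The tiny vocabulary is invented for the example; real tokenizers (like BPE) learn their pieces from mountains of text, but the idea—bite off the biggest known chunk, repeat—is the same.

```python
# A made-up vocabulary of known pieces, with single letters as a fallback.
VOCAB = {"under", "stand", "ing",
         "u", "n", "d", "e", "r", "s", "t", "a", "i", "g"}

def tokenize(word, vocab):
    """Greedy longest-match: keep biting off the biggest known piece."""
    pieces = []
    while word:
        for size in range(len(word), 0, -1):
            if word[:size] in vocab:
                pieces.append(word[:size])
                word = word[size:]
                break
    return pieces

print(tokenize("understanding", VOCAB))  # ['under', 'stand', 'ing']
```

Notice the model never sees “understanding” as one thing—only the grains. That is exactly why letter-counting questions trip it up.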
Context Window
This is the model’s short-term memory. Think of it as the size of your thali. Small plate, fewer dishes. Big plate, more variety. If the plate is too small, the AI forgets what you said earlier because it literally ran out of space.
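The thali analogy fits in three lines of Python. This is a simplification—real systems count tokens, not words, and some summarize rather than forget—but the core behavior is just a slice:

```python
WINDOW = 8  # a tiny context window, measured here in words for simplicity

def fit_to_window(tokens, window=WINDOW):
    """Keep only the most recent tokens; everything older falls off the plate."""
    return tokens[-window:]

conversation = "my name is Asha and I love filter coffee more than chai".split()
print(fit_to_window(conversation))
# The name at the start has already fallen off the thali.
```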
Transformers
The real engine. Transformers use “attention”—a way of deciding which words matter most. Like a Bollywood director on set, spotlighting the right actor so the scene makes sense. Without that, “bank” could mean money when you actually meant river.
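Here’s the spotlight trick as a toy calculation. The little vectors are made up for the example (a real model learns them), but the mechanism is genuine dot-product attention: similar vectors score higher, and softmax turns scores into spotlight strength.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query, keys):
    """Dot-product attention: similar vectors get more of the spotlight."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    return softmax(scores)

words = ["river", "money", "flows"]
query_bank = [1.0, 0.2]                       # our ambiguous word "bank"
keys = [[1.0, 0.1], [0.1, 1.0], [0.9, 0.3]]   # the neighbors' vectors
weights = attention_weights(query_bank, keys)
for word, weight in zip(words, weights):
    print(f"{word}: {weight:.2f}")
# "river" and "flows" get most of the attention, so "bank" means riverbank.
```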
Embeddings
Every word is turned into a list of numbers. Those numbers live in neighborhoods: “chai” and “filter coffee” are close by, “laptop” is far away. That’s how the model knows they belong together even though the words look nothing alike.
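The “neighborhood” is measurable. Below, three made-up addresses (real embeddings have thousands of numbers, not three) and cosine similarity, the standard tape measure for how close two embeddings sit:

```python
import math

# Made-up 3-number "addresses"; real embeddings have thousands of numbers.
vectors = {
    "chai":          [0.90, 0.80, 0.10],
    "filter coffee": [0.85, 0.75, 0.15],
    "laptop":        [0.10, 0.20, 0.95],
}

def cosine(a, b):
    """Near 1.0 means same neighborhood; near 0 means strangers."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine(vectors["chai"], vectors["filter coffee"]))  # close to 1
print(cosine(vectors["chai"], vectors["laptop"]))         # much smaller
```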
Parameters
The “knowledge” of the model. Think of these as the knobs and dials on a massive professional soundboard. A model doesn’t “know” history; it has just had its billions of dials tuned so precisely that when you play the “History” note, the output sounds like a textbook. It’s not wisdom; it’s just the world’s most complex setting on a microwave.
Multimodal
The full sensory experience. Older AI was like a food critic who could only read the menu but never see the food. “Multimodal” means the AI has finally been given eyes and ears. It doesn’t just chew text; it can look at a photo of your fridge and suggest a recipe, or hear a recording of rain and describe the mood. It’s like moving from a radio play to a 4D cinema—connecting the sound of the tadka to the sight of the steam.
Fine-Tuning
The specialized internship. If a base model is a student with a general degree, fine-tuning is sending them to a six-month intensive workshop on South Indian temple architecture or legal contracts. It doesn’t give the AI a new brain; it just gives it a specific vocabulary and teaches it the “house rules” of a particular job.
RLHF (Reinforcement Learning from Human Feedback)
The “Good Boy” system. Left to its own devices, an AI is like a wild animal that might bite. RLHF is a room full of humans clicking “Yes” or “No” to its answers. It’s training the puppy with treats until it learns that polite answers get a reward and unhinged rants get the metaphorical rolled-up newspaper.
Prompt Engineering
Ordering at a dhaba. If you just walk in and yell “Food!”, you might get anything. Prompt engineering is just being a specific customer: “Make it spicy, no onions, extra butter, and bring it in five minutes.” The better you describe the order, the less likely the kitchen is to mess it up.
Temperature
The creativity dial. Low temperature = safe, boring answers. High temperature = risky, sometimes brilliant, sometimes nonsense. It’s the masala level on your pani puri.
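The masala dial is a real piece of math: before the model picks its next word, the scores get divided by the temperature. A sketch with made-up scores for three candidate words:

```python
import math

def softmax_with_temperature(scores, temperature):
    """Low temperature sharpens the choice; high temperature flattens it."""
    scaled = [s / temperature for s in scores]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

# Made-up raw scores for three candidate next words.
scores = [2.0, 1.0, 0.5]
print(softmax_with_temperature(scores, 0.2))  # nearly all weight on one word
print(softmax_with_temperature(scores, 2.0))  # spread out: riskier picks
```

At low temperature the top word gets almost all the probability—safe and predictable. At high temperature the long shots get a real chance, which is where both the brilliance and the nonsense come from.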
Hallucinations
We all know that uncle at weddings who tells stories with full confidence, even when half of it is made up. That’s hallucination. AI doesn’t know facts; it just predicts the next word so smoothly you believe it.
The Reality Check
So, what is actually happening?
When you type a prompt, your words are chopped into rice (tokens), mapped onto a neighborhood (embeddings), and spotlighted by a director (transformers). The engine's power depends on how many knobs it has (parameters), whether it went to finishing school (fine-tuning), and if it was taught manners by a human (RLHF). It now perceives the world through multiple senses (multimodal) and serves it all on a plate (context window) with just the right amount of spice (temperature) based on how you ordered (prompt engineering).
It’s a pipeline, not a miracle. AI isn’t a brain; it’s a chef who has memorized every recipe in the world but has never actually tasted the salt. Once you see the pipeline, the buzzwords stop being intimidating. It’s just a sharper knife in the kitchen, a cracked bat on the field, a better lens on the film set.
Useful, yes.
Brilliant at imitation, absolutely.
But thinking? Not really.
Rachana Bahel