
AI Terms You Keep Hearing — Finally Explained in Plain English

Prompt, token, context window, usage limit — what do they actually mean? A jargon-free guide for graduates and professionals using AI tools.

Every time someone talks about AI, a new set of words appears that sounds like it belongs in a computer science textbook. Prompt. Token. Context window. Hallucination. Usage limit. Temperature. If you have ever nodded along while secretly having no idea what these words mean — this article is for you.

I use AI tools every day in my CA practice. When I started, I had no idea what most of these terms meant either. I learned them through trial, error, and a lot of confused moments. This guide is what I wish someone had handed me on day one.

You do not need to understand the engineering behind AI to use it well. But knowing what these terms mean will make you dramatically better at getting results from any AI tool.

1. Prompt

A prompt is simply the instruction or question you type into an AI tool. It is how you communicate with AI. The quality of what you get back depends almost entirely on the quality of your prompt.

Layman version: Think of it like placing an order at a restaurant. A vague order ("give me something good") gets you something random. A specific order ("one masala dosa, crispy, with extra chutney") gets you exactly what you want. AI works the same way.

Bad prompt: "Write something about GST."

Good prompt: "Write a simple 150-word explanation of GST input tax credit for a first-year commerce student with no prior knowledge of taxation."

2. Token

A token is the unit AI uses to measure text, roughly 4 characters or about three-quarters of a word. AI tools do not read words the way humans do. They break everything into tokens and process those. Tokens are counted for both what you type in and what the AI sends back.

Layman version: Think of tokens like currency. Every conversation with an AI has a budget. The words you write and the words AI responds with both cost tokens. When you run out of budget, the conversation has to stop or reset.

Real example: The sentence "How do I file my ITR?" is roughly 8 tokens. A 1,000-word article is approximately 1,300 tokens. Most free AI plans give you a daily or monthly token allowance.
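For the curious, the three-quarters-of-a-word rule above can be turned into a tiny Python estimator. This is only a rule of thumb; real tokenizers used by AI companies count slightly differently.

```python
def estimate_tokens(text: str) -> int:
    # Rule of thumb from this article: one token is about 3/4 of a word,
    # so tokens ~ words * 4 / 3. Real tokenizers differ slightly.
    words = len(text.split())
    return round(words * 4 / 3)

print(estimate_tokens("How do I file my ITR?"))  # about 8
print(estimate_tokens("word " * 1000))           # about 1,333 for 1,000 words
```

Run it on your own drafts to get a feel for how quickly a free plan's allowance is consumed.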

3. Context Window

The context window is how much text an AI can "remember" in one conversation at a time. It includes everything — your messages, the AI's responses, and any documents you paste in. Once the conversation exceeds the context window, the AI starts forgetting the earlier parts.

Layman version: Imagine you are working with a very capable but forgetful colleague. They can remember everything you discussed in the last hour perfectly. But if the conversation goes on for three hours, they start forgetting what was said in the first hour.

Real example: If you paste a long contract and then ask questions about it, the AI answers well initially. But after a long back-and-forth, it may forget what the contract said at the beginning. This is the context window limit in action.
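Here is a rough Python sketch of what happens behind the scenes: once the conversation grows past the window, the oldest messages are dropped first. Real tools use smarter strategies, and counting words here is a crude stand-in for counting tokens.

```python
def trim_history(messages, max_size, count):
    # Drop the oldest messages until the conversation fits the window.
    # This is why the AI "forgets" the contract you pasted at the start.
    while messages and sum(count(m) for m in messages) > max_size:
        messages = messages[1:]  # the earliest message goes first
    return messages

def count_words(message):
    return len(message.split())  # crude stand-in for token counting

history = ["long contract " * 100,   # a big pasted document
           "What is clause 3?",
           "Clause 3 covers...",
           "And clause 7?"]
kept = trim_history(history, max_size=50, count=count_words)
# The pasted contract no longer fits, so it is the first thing forgotten.
```

Notice that the document you pasted first is exactly what disappears first, which matches the behaviour described above.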

4. Usage Limit

Usage limits are the restrictions AI platforms put on how much you can use their tools — especially on free plans. These limits can be daily message caps, hourly token limits, or restrictions on which AI model you can access.

Layman version: Think of it like a mobile data plan. You get a certain amount of data every month. Use it wisely and it lasts. Stream videos all day and you hit the limit by the 10th of the month.

Real examples: The ChatGPT free plan gives limited access to GPT-4o, with caps that reset every few hours. The Claude free plan allows a limited number of messages per day on the Sonnet model. Gemini's free tier offers generous limits but slower responses. If you hit a usage limit, switch to a different AI tool for the rest of the day rather than waiting.

5. Hallucination

Hallucination is when an AI confidently gives you information that is completely wrong or made up. It is not lying intentionally — it is a technical limitation where the AI generates plausible-sounding text that happens to be false.

Layman version: Imagine asking a very confident colleague for a phone number. They give you one immediately, with full confidence. You dial it and it is the wrong number. They were not lying — they just filled in the gap with something that sounded right.

Real example: Ask an AI to cite a court judgement or a specific section of the Companies Act, and it may give you a citation that sounds completely real but does not exist. Always verify legal, medical, and financial information from original sources.

6. Model

The model is the specific version of AI you are using. Different models have different capabilities, speeds, and costs. GPT-4o, Claude Sonnet, Gemini Pro, and Llama are all different models — like different car models from different manufacturers.

Layman version: Think of AI companies as car companies, and models as their specific cars. OpenAI makes ChatGPT — their latest model is GPT-4o. Anthropic makes Claude — their strong model is Claude Sonnet. Google makes Gemini. Each has its own strengths.

Practical tip: For writing and analysis, Claude tends to give more nuanced answers. For coding, GPT-4o is widely preferred. For research with sources, use Perplexity. You do not have to pick just one.

7. Temperature

Temperature is a setting that controls how creative or predictable an AI's responses are. Low temperature means more factual, consistent, and safe answers. High temperature means more creative, varied, and sometimes surprising answers.

Layman version: Think of it like a fan speed dial. Low speed — calm, steady airflow. High speed — more powerful but less predictable.

When it matters: Most users never change temperature — it is set automatically. But if you are using an AI through an API or a custom tool and the answers feel too rigid or too random, temperature is the setting to adjust.
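For readers comfortable with a little code, here is a toy Python illustration of what the dial actually does. The AI assigns a score to each candidate next word, and temperature controls how strongly the top score dominates. The scores below are made up; real models juggle tens of thousands of candidates.

```python
import math

def temperature_probs(scores, temperature):
    # Convert raw scores into pick-probabilities (a "softmax").
    # Low temperature: the top word dominates. High: choices even out.
    scaled = [s / temperature for s in scores]
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]              # made-up scores for three words
low = temperature_probs(scores, 0.2)  # predictable: top word near-certain
high = temperature_probs(scores, 2.0) # varied: probabilities much closer
```

At low temperature the first word is chosen almost every time; at high temperature the other words get a real chance, which is where the "surprising" answers come from.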

8. System Prompt

A system prompt is a set of hidden instructions given to the AI before your conversation begins. It tells the AI how to behave, what tone to use, what to avoid, and what role to play. You usually do not see it — but it shapes every response you get.

Layman version: Imagine calling a customer care line. Before the call connects, the agent has been briefed: "Be polite, only discuss billing issues, never promise refunds over ₹500 without approval." You do not hear that briefing, but it shapes every answer you get.

Why it matters for you: If an AI built into a company's app behaves differently from the same AI on its main website — the system prompt is why. Businesses customise AI behaviour for their specific use case.
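In code, a system prompt is usually just the first message in the conversation, marked with a special role. This sketch uses the role/content dictionary shape popularised by OpenAI-style chat APIs; exact field names vary by provider, and the briefing text below is invented for illustration.

```python
conversation = [
    {"role": "system",  # the hidden briefing: the user never sees this
     "content": ("You are a polite billing assistant. Only discuss "
                 "billing issues. Never promise refunds over Rs 500 "
                 "without approval.")},
    {"role": "user",
     "content": "Can I get a refund of Rs 2,000?"},
]
# The model reads the system message first, so its reply to the user
# is shaped by rules the user never typed.
```

This is exactly the customer-care briefing from the analogy above, expressed as data.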

9. RAG (Retrieval-Augmented Generation)

RAG is a technique where the AI looks up information from a specific database or set of documents before generating its answer — rather than relying only on what it learned during training. This makes answers more accurate and up to date.

Layman version: Instead of answering from memory, the AI is allowed to look at a reference book first. A CA using an AI connected through RAG to the full text of the Income Tax Act will get far more accurate tax answers than one using a general AI.

Real-world example: Perplexity AI uses RAG — it searches the web before answering, which is why it cites sources. Many enterprise AI tools use RAG to answer questions about internal company documents.
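Here is a toy Python sketch of the retrieve-then-answer idea, using simple keyword overlap in place of the vector search real RAG systems use. The documents are invented one-liners standing in for a real knowledge base.

```python
def retrieve(query, documents, top_k=1):
    # Score each document by how many words it shares with the query.
    # Real RAG uses semantic (vector) search, not raw word overlap.
    q = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def answer_with_rag(query, documents):
    context = " ".join(retrieve(query, documents))
    # This combined prompt is what actually goes to the model.
    return f"Context: {context}\nQuestion: {query}"

docs = [
    "Section 80C deduction limit is Rs 1.5 lakh under the Income Tax Act.",
    "GST returns are filed monthly on the GSTN portal.",
]
prompt = answer_with_rag("What is the 80C deduction limit?", docs)
# Only the relevant Section 80C line is pulled into the prompt.
```

The model then answers from the retrieved context rather than from memory alone, which is why RAG systems can cite their sources.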

10. Prompt Engineering

Prompt engineering is the skill of writing clear, structured, and effective prompts to get better results from AI. It is not coding. It is the art of communicating well with an AI — giving it the right context, constraints, and instructions.

Layman version: It is like knowing how to give a good brief to a designer or a writer. A vague brief produces generic work. A clear brief with examples, tone guidance, and specific requirements produces something genuinely useful.

Simple prompt engineering formula: Role + Task + Context + Format + Constraint. Example: "You are a career counsellor advising a fresh commerce graduate in India [Role]. Write a list of 5 AI skills they should learn in 2026 [Task] to improve their chances in finance and accounting jobs [Context], formatted as bullet points [Format], using simple language with no jargon [Constraint]."
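The formula above is mechanical enough to write as a one-function Python helper. This is a sketch: the glue wording between the parts is my own, and you would adapt it to your tool of choice.

```python
def build_prompt(role, task, context, fmt, constraint):
    # Role + Task + Context + Format + Constraint, glued into one prompt.
    return (f"You are {role}. {task} {context}, "
            f"formatted as {fmt}, {constraint}.")

prompt = build_prompt(
    role="a career counsellor advising a fresh commerce graduate in India",
    task="Write a list of 5 AI skills they should learn in 2026",
    context="to improve their chances in finance and accounting jobs",
    fmt="bullet points",
    constraint="using simple language with no jargon",
)
```

Filling the five slots forces you to think about audience, output shape, and limits before you hit send, which is the whole point of the formula.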

What This Means For You as a Graduate

You do not need to memorise all of this. But understanding these terms does two things for your career.

First, it makes you a better AI user. When you know that a vague prompt produces vague results, you start writing better prompts. When you know about hallucination, you start verifying AI output before submitting it. These habits separate confident AI users from careless ones.

Second, it makes you more credible in interviews and at work. When a hiring manager asks "how do you use AI?" and you can mention context windows, prompt engineering, and model selection — you instantly stand out from the candidate who says "I use ChatGPT sometimes."

The language of AI is fast becoming the language of every profession. Start speaking it now.