Early in my practice, a junior in our office used an AI tool to look up a GST exemption limit for a client query. The tool gave a confident, well-formatted answer with a correct-sounding section number. The limit it cited came from an earlier notification that had since been superseded. The client was given the wrong threshold. The error was caught before any damage was done — but only because a senior reviewed the advice before it went out. The junior had not realised that checking the source was not optional. It was the job.
This is the scenario that plays out in offices across every field, every day. Not because graduates are careless, but because AI outputs look authoritative. They are well-written, confidently phrased, and structurally sound. Nothing in the appearance of the text signals that it might be wrong. That is precisely why verification is not a nice-to-have skill — it is a professional survival skill.
Why AI Gets Things Wrong
To verify AI outputs intelligently, you need to understand why AI makes errors in the first place. The errors are not a bug; they are a direct consequence of how the technology works.
Modern AI tools like ChatGPT and Claude are trained on enormous amounts of text. From that training, they develop a statistical understanding of language: which words and ideas tend to appear together, how information is typically structured, what an authoritative response sounds like. When you ask a question, they generate a response that is statistically consistent with the patterns they have learned — not a response retrieved from a verified database of facts.
This means AI can generate text that reads exactly like a factual answer without that answer being factually grounded. It is pattern-matching, not understanding. When a pattern suggests that a response should include a specific number, statute, or case reference, the AI will generate one — whether or not it accurately reflects reality. This is what the industry calls a "hallucination": a fluent, confident, grammatically perfect piece of text that is simply wrong.
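To make the pattern-matching point concrete, here is a deliberately toy Python sketch. Real models predict tokens with large neural networks rather than a lookup table, and every phrase and probability below is invented purely for illustration, but the core mechanic is the same: the continuation is chosen by learned frequency, with nothing in the loop that checks whether it is true.

```python
import random

# Toy stand-in for a language model: continuations weighted by how often
# they followed the prompt in training text. Every phrase and probability
# below is invented purely for illustration -- none is the real rule.
PATTERNS = {
    "the GST registration threshold is": [
        ("Rs 40 lakh", 0.5),
        ("Rs 20 lakh", 0.4),
        ("Rs 10 lakh", 0.1),
    ],
}

def complete(prompt: str) -> str:
    """Sample a continuation by learned frequency. Note what is absent:
    no database lookup, no notification check, no step that asks whether
    the chosen figure is currently correct."""
    continuations, weights = zip(*PATTERNS[prompt])
    return f"{prompt} {random.choices(continuations, weights=weights)[0]}."

print(complete("the GST registration threshold is"))
```

Every possible output of this sketch reads as a settled fact; which figure you get depends only on the weights. That, in miniature, is why a hallucinated threshold reads exactly like a correct one.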
The other common source of error is currency. AI models have training cutoffs — dates after which they have no information. India's tax regulations, SEBI guidelines, RBI circulars, and legal precedents change regularly. An AI trained before a regulatory update will give you the old rule with the same confidence it gives you a stable fact. It has no mechanism for knowing that things have changed.
The Confidence Trap
The single most dangerous feature of AI outputs is that confidence and correctness are completely unrelated. An AI is not more confident when it is correct and more hesitant when it is guessing. It produces equally polished, assured text regardless of whether the underlying information is accurate, outdated, or fabricated.
This is the opposite of how humans communicate uncertainty. When a colleague is unsure about something, you can usually tell — they hedge, they qualify, they suggest checking. AI does none of this unless it is specifically designed to. And even AI tools that include uncertainty caveats do not always apply them to the cases where they are most needed.
The most dangerous AI errors are not the ones that look wrong. They are the ones that look exactly right — polished, specific, and completely incorrect. Your professional judgment is the only filter that catches them.
The Five Verification Checks
You do not need to verify every word AI produces. You need a systematic habit for the types of content that carry professional risk. Here are the five checks that matter most, with a small sketch after the list showing one way to flag the content they apply to.
1. Check every specific fact, number, and date against a primary source. Statistics, legal thresholds, regulatory limits, case citations, financial figures — anything numerical or statute-specific must be verified against the original source before you use it professionally. For Indian tax and regulatory matters, that means the Income Tax Act, GST notifications, SEBI circulars, RBI guidelines, or MCA announcements directly — not a summary of them. AI summaries of regulations are frequently accurate, and occasionally dangerously wrong. You cannot tell which by reading the summary.
2. Verify that the information is current. Ask yourself: could this have changed since the AI was trained? For anything involving law, regulation, market data, company information, government policy, or current events — assume it might have changed and check a current source. Perplexity AI (which cites current web sources) or the relevant government website will tell you the current position in seconds.
3. Check case citations and named references independently. If AI gives you a court case, a research paper, a book, a named study, or a specific person's statement — verify that it exists before you cite it. The problem of AI fabricating plausible-sounding citations is well-documented. A lawyer in the US was sanctioned for filing a brief that cited AI-generated cases that did not exist. The same risk applies to any professional domain where you cite authorities. Google the case name, the paper title, or the reference independently.
4. Evaluate whether the advice applies to your context. AI generates general answers based on the broadest available patterns in its training data. For a global tool, that often means US or UK-centric information. Advice about employment law, tax filing, financial regulations, and professional standards varies enormously by country. A response that is perfectly accurate for the US may be completely inapplicable to an Indian graduate. Always ask: is this relevant to India, to my state, to my specific situation? If the AI has not explicitly addressed your context, its answer is incomplete at best.
5. Read for logical consistency. Does the response actually make sense from start to finish? AI can generate text that is locally coherent — each sentence follows the previous one sensibly — but globally contradictory or internally inconsistent. Read the full response as a whole. If the conclusion does not follow from the reasoning, or if early and late paragraphs contradict each other, that is a sign the output needs significant rework before you use it.
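For readers comfortable with a little scripting, the triage step behind checks 1 and 3 can be semi-automated: before reading a draft, flag every element that will need a primary-source check. The Python sketch below is a minimal illustration; the regular expressions are rough assumptions about what marks a risky claim in an Indian tax or legal draft, not an exhaustive rulebook, and nothing here replaces the human check itself.

```python
import re

# Rough patterns for claims that need primary-source verification.
# These regexes are illustrative assumptions, not a complete rulebook.
RISK_PATTERNS = {
    "amount or threshold": r"(?:Rs\.?|₹|INR)\s*[\d,]+(?:\s*(?:lakh|crore))?",
    "percentage": r"\b\d+(?:\.\d+)?\s*%",
    "section reference": r"\b[Ss]ection\s+\d+[A-Z]*(?:\(\d+[A-Za-z]*\))*",
    "year": r"\b(?:19|20)\d{2}\b",
    "case-style citation": r"\b[A-Z][A-Za-z.]*\s+v\.?\s+[A-Z][A-Za-z.]*",
}

def flag_for_verification(draft: str) -> list[tuple[str, str]]:
    """Return (category, matched text) pairs. Every hit must be checked
    against a primary source before the draft goes out; an empty list
    does NOT mean the draft is safe."""
    hits = []
    for category, pattern in RISK_PATTERNS.items():
        for match in re.finditer(pattern, draft):
            hits.append((category, match.group()))
    return hits

# An invented draft sentence, purely to exercise the patterns.
draft = ("Under Section 10(13A), the limit is Rs 40 lakh per the 2019 "
         "notification; see Alpha v. Beta.")
for category, text in flag_for_verification(draft):
    print(f"VERIFY [{category}]: {text}")
```

A script like this can only point; it cannot verify. Its value is that nothing numeric or citation-shaped slips past you unexamined, which is exactly the discipline the five checks ask for.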
Domain-Specific Warnings
Some fields carry higher risk than others when AI errors go unchecked. If your career is in one of these areas, apply extra scrutiny.
Law: Never cite a case without verifying it on SCC Online, Manupatra, or Indian Kanoon. Never state a legal position based solely on AI output — always cross-check the relevant Act, notification, or rule directly. Indian procedural requirements (filing deadlines, court formats, jurisdictional rules) vary by court and state; AI frequently conflates them.
Finance and accounting: Tax rates, exemption limits, GST slabs, TDS thresholds, and MCA filing requirements change with every budget and notification cycle. AI training data may predate the latest changes. Verify all financial thresholds against the official Income Tax website, GST Council notifications, or ICAI guidance notes.
Healthcare: Drug dosages, interaction warnings, diagnostic criteria, and clinical guidelines are life-critical and change with new research. Never rely on AI for medical information that will influence any health-related decision without verifying against clinical guidelines or consulting a qualified professional.
Academic work: AI-generated statistics, research citations, and literature references are frequently wrong or fabricated. Every citation must be independently verified against the actual source before submission.
The Professional Liability Question
Here is the question that should anchor your verification habit: whose name is on this work?
When you submit a report, send a client advisory, file a compliance document, or present an analysis to your manager — your name is attached to it. The AI tool's name is not. If the information is wrong and consequences follow, the professional liability rests with you, not with the software. The tool that helped you produce the work does not share your accountability.
This is not a reason to avoid AI. It is a reason to use it with the same professional discipline you would apply to any other source of information. You would not submit a report that rested on a single internet source without checking it. AI output deserves exactly the same scrutiny — and in some respects more, because it is so much easier to mistake for verified fact.
Building the Verification Habit
The goal is not to check everything equally — that would be impractical and would defeat the efficiency AI provides. The goal is to build a fast, reliable sense for what needs checking and what does not.
General explanations of concepts, brainstormed ideas, structural outlines, and first drafts of text that you will revise substantially — these carry low verification risk. The stakes are low if a first draft contains an error because you are going to rewrite it anyway.
Specific facts, figures, citations, regulatory positions, legal statements, market data, and any content that will go out under your name without substantial revision — these always require verification before use. No exceptions.
Before anything AI-assisted leaves your hands professionally, ask three questions: Is every specific claim accurate? Is the information current? Does it apply to my specific context? Ten minutes of verification on these three fronts will protect you from the errors that damage careers — and mark you as the kind of careful, critical professional that every good employer wants.
The most effective AI users are not those who trust AI most. They are those who verify fastest — and have the professional judgment to know which answers most need a second pair of eyes.