AI & Society

Digital Safety in the AI Age

We lock our homes, double-lock our cars, and whisper our ATM PIN — then upload our identity documents to a website we have never heard of. Why our digital habits have not caught up with our instincts, and what to do about it.

There was a time when privacy was simple. A stranger knocked on the door and we said no. We peeped through the keyhole, tightened the latch, and politely declined. Today, that same stranger does not need to knock. We have already opened the door — digitally — and handed over our identity, habits, location, and sometimes our most sensitive documents. Not because we are careless. But because the world changed quietly, one click at a time.

The Privacy Paradox

We would never tell a stranger on the street where we live, where we work, or where our children study. But we tell the internet — voluntarily, every single day. A birthday photo on social media. A location tag while travelling. A professional update about a new job. A check-in at a restaurant. We do not register these as disclosures because they feel like ordinary social behaviour. They are also, from a data perspective, extraordinarily detailed records of our lives.

Our phones now know more about us than our closest family members: where we travelled last week and how long we stayed, what we searched late at night, what we bought and what we almost bought, what topics worry us most. Every "Allow access" is a silent permission slip. Every "Accept cookies" is an agreement to be tracked. We stopped reading these prompts years ago, which means we stopped knowing what we are agreeing to.

A professional I know recently applied for a loan. Frustrated with the paperwork, he uploaded his bank statements and identity documents into an AI tool he had found online to help summarise the requirements. That night, when he told his wife what he had done, she looked at him and said: "You will not give your ID photocopy to the building security guard without asking ten questions. But you uploaded everything to a website you have never heard of?"

He had no answer. It was not stupidity. It was convenience. And convenience always wins — until something goes wrong.

What AI Tools Actually Do With Your Data

This is the question most people using AI tools do not ask — and should. The answer varies enormously depending on which tool you use and how you use it, but the broad picture is important to understand.

Major AI tools like ChatGPT (OpenAI), Claude (Anthropic), and Gemini (Google) have published privacy policies that describe how they handle your data. For the free versions of these tools, your conversations may be used to improve the model. This means that text you type into the chat — including documents you paste, details you share, and questions you ask — could, depending on your privacy settings, become part of training data reviewed by people at these companies.

This is not necessarily a reason to avoid these tools. It is a reason to think about what you share. Discussing a professional concept or asking for writing help: low risk. Pasting a client's confidential financial data, uploading a personal identity document, or sharing sensitive medical details: genuinely risky and entirely avoidable.

Enterprise versions of AI tools — which many large companies use — typically have stronger data protection guarantees, including contractual commitments not to use your data for training. If you are using AI tools for professional work involving confidential information, the enterprise tier is not a luxury. It is a professional responsibility.

What You Should Never Put Into a Public AI Tool

Some categories of information should simply not go into a public AI tool under any circumstances. Keeping this list in mind takes seconds; not keeping it in mind can have consequences that last years.

  • Identity documents — Aadhaar, PAN, passport, driving licence. No legitimate AI task requires you to upload these.
  • Client or employer confidential data — financial records, internal strategy documents, personnel information, unreleased products. This is a professional ethics issue, not just a privacy one.
  • Bank account details, credit card numbers, or financial passwords — these should never be typed into any web-based tool that is not your bank.
  • Personal medical information — yours or anyone else's. Medical data is sensitive in ways that can affect insurance, employment, and relationships.
  • Details that identify someone else without their knowledge — sharing identifying information about another person in an AI tool raises both privacy and ethical concerns.

The governing principle is simple: if you would be uncomfortable with this information appearing in a data breach, do not put it into a tool whose data practices you have not verified.

The Growing Crisis of Digital Trust

Before AI, the internet had a content problem: anyone could publish anything, validation was weak, and misinformation spread easily. AI has not solved this problem. It has amplified it by orders of magnitude.

AI tools can generate articles, images, audio, and video at a scale and speed that was impossible just a few years ago. A convincing-looking news article that never happened can be produced in seconds. A photograph that was never taken can be generated in moments. An audio recording of a voice saying something it never said can be created with minimal effort. The volume of synthetic content online is growing faster than the tools for detecting it.

This has a direct practical implication for graduates entering professional life: the information environment you are navigating is less trustworthy than it was even five years ago, and the skills for evaluating sources critically — developed in academic training but often abandoned in everyday life — are now more important than they have ever been.

Apply the same source evaluation discipline to everything you consume online that you would apply to a source in an academic paper. Who produced this? What is their interest in how you respond to it? Is there a primary source you can check? Does this match what credible established sources are reporting? These are not burdensome questions. They are the basic habits of a digitally literate person in 2026.

AI-Powered Scams and Deepfakes

The most immediate safety risk for fresh graduates in 2026 is not abstract data harvesting — it is AI-powered fraud that is becoming increasingly difficult to distinguish from legitimate communication.

Voice cloning technology can now produce convincing audio of a person saying something they never said, using only a few seconds of their recorded voice. Video deepfakes — realistic-looking footage of people in situations that never occurred — are advancing rapidly. These are being used in scams that target individuals: a call that sounds exactly like a family member in distress asking for an urgent money transfer, or a video that appears to show a company executive authorising a payment.

The defence against these attacks is not technological — it is behavioural. Establish a verification word or phrase with close family members that can confirm identity in an emergency. Always verify unusual financial requests through a separate channel (call the person back on a known number) before acting on them. Be especially sceptical of urgent requests that create time pressure — urgency is a manipulation tactic designed to bypass careful thinking.

Account Security Basics Most Graduates Skip

Digital safety is not only about what you share. It is also about how well you protect what is yours. These three practices cost nothing and offer significant protection.

Use different passwords for different accounts. If one account is compromised and you use the same password elsewhere, every account with that password is compromised. A password manager — Google Password Manager, Apple Keychain, or a dedicated tool like Bitwarden (free) — makes this practical without requiring you to memorise dozens of different passwords.
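To see what a password manager is doing on your behalf when it suggests a password, here is a minimal sketch of secure password generation using Python's standard secrets module. The function name, length, and character set are illustrative choices, not a recommendation from any particular tool:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a password using a cryptographically secure random source."""
    # Each character is drawn independently, so no pattern links one
    # password to another — the property that makes unique passwords safe.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # prints a random 16-character password
```

The point of the sketch is the source of randomness: secrets, unlike ordinary random-number functions, is designed so that seeing one output tells an attacker nothing about the next. A password manager adds storage and autofill on top of exactly this kind of generation.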

Enable two-factor authentication (2FA) on every account that matters. Email, banking, LinkedIn, UPI apps — all of these should have 2FA enabled. A compromised password alone is then not enough to access your account. This single step defeats the majority of common account attacks.

Be suspicious of links in messages, even from known contacts. A large proportion of successful phishing attacks in India involve messages that appear to come from known contacts whose accounts have been compromised. If a link or request feels unusual — even from a known sender — verify through a different channel before clicking or responding.

Three Practical Shifts That Cost Nothing

Digital safety does not require technical expertise. It requires the same instincts you already apply in the physical world, extended into your digital behaviour.

Treat digital sharing like physical sharing. If you would hesitate to hand a document to a stranger on the street, do not upload it to a platform without understanding where it goes and who can access it.

Ask one question before every click. "Do I actually need to share this?" More often than not, the honest answer is no. The habit of pausing for one second before each digital permission removes the majority of avoidable privacy exposure.

Use AI consciously, not casually. AI is a powerful tool — but it is not your diary, not your lawyer, and not your personal vault. Think about what you are sharing before you share it, and choose enterprise or privacy-protecting tools for anything sensitive.

Privacy is not about hiding. It is about choosing what to share, when to share it, and with whom. In the age of AI, that choice requires a little more active thought than it used to. It is worth giving it.

We have upgraded our tools. Now it is time to upgrade our instincts to match.