AI in India

AI-Generated Food Fraud: How Fake Refund Scams Are Hitting Zomato and Swiggy

Customers are using generative AI to create convincing fake images of damaged food and pocket refunds. Zomato's CEO has flagged the trend as "insane." Here is how the scam works, who it hurts the most, and what platforms are doing — and not doing — about it.

Generative AI has changed a lot of things. It has made writing faster, research quicker, and learning more accessible. It has also, quietly and significantly, made it easier to commit fraud — and nowhere in India is this more visible right now than in food delivery refund claims.

Reports from late 2025 and early 2026 have documented a sharp rise in fraudulent refund requests on Zomato and Swiggy — India's two dominant food delivery platforms. The method is simple, the technology is accessible, and the scale has become large enough that Zomato's CEO Deepinder Goyal publicly called it an "insane" rise in fraudulent complaints. This article explains exactly what is happening, who is paying the price, and why this trend matters beyond the food delivery industry.

How the Scam Works

The fraud follows a straightforward playbook. A customer places an order on Zomato or Swiggy and receives it. The food arrives intact and is consumed normally. The customer then opens a generative AI image tool — the kind freely available on smartphones — and uses it to alter a photograph of the food to make it appear damaged, contaminated, or of the wrong item. Common manipulations include adding mould or discolouration, inserting foreign objects, changing the apparent quantity, or making the packaging appear tampered with.

The edited image is then submitted through the platform's complaint system alongside a written complaint requesting a refund or free replacement. Automated complaint systems — which process thousands of claims per hour across the country — struggle to verify whether an image is genuine or AI-generated. Many claims are approved without human review.

The technology enabling this is not exotic or expensive. Tools like Google Gemini Nano, freely available editing apps with AI features, and image manipulation prompts accessible to anyone with a smartphone are sufficient to produce convincing fake food damage images. The barrier to attempting this kind of fraud is effectively zero for anyone with basic digital literacy.

Real Cases That Were Caught

Two documented cases illustrate the problem clearly — and also reveal how the fraud is occasionally detected.

A Mumbai bakery reported a customer who submitted an image of a cake that appeared to have damaged icing and incorrect decoration. When the restaurant investigated, they found that the strawberries in the customer's image had an unnatural, almost plastic appearance inconsistent with any fruit the bakery had sourced. The text on the cake in the image also contained a misspelling that did not match the bakery's actual order. These are classic artefacts of generative AI image editing — the model had generated plausible-looking strawberries and text, but not accurate ones.

In a separate case involving Swiggy Instamart, a user reportedly used a Google Gemini Nano prompt to simulate cracked and discoloured eggs in a delivery image. The claim was initially accepted by the automated support system, and the fraud came to light only when the incident was reported publicly. It highlights a critical weakness: the fraud can succeed even when the AI-generated elements are imperfect, because automated systems do not examine images at the level of detail a human expert would.

Who Gets Hurt Most

The financial impact of fraudulent refund claims is not absorbed by Zomato or Swiggy alone. Understanding how platform economics work in food delivery is important here.

When a refund is approved on a food delivery platform, the cost is typically charged back to the restaurant partner — not the platform. This means a fraudulent claim for a ₹400 biryani does not come from Zomato's revenue; it comes from the restaurant's settlement. For large restaurant chains with finance teams and legal resources, this is manageable. For the hundreds of thousands of independent restaurants and cloud kitchens on these platforms — many of them run by individual entrepreneurs with thin margins — each fraudulent chargeback is a direct hit to their income.

Delivery partners are also indirectly affected. When restaurants absorb repeated losses from fraud, they reduce order volumes, cut portions, or exit the platform entirely. This reduces the pool of available orders for delivery workers, most of whom are paid per delivery. The chain of harm from a single fraudulent refund claim extends well beyond the platform's complaint inbox.

Small food businesses in particular have very limited recourse. A kirana-scale tiffin service or a home baker selling through Swiggy Instamart does not have the resources to contest a complaint, review image metadata, or appeal chargebacks through a platform's reconciliation process. They simply lose the money and move on.

How Platforms Are Fighting Back

Both Zomato and Swiggy have acknowledged the problem and are deploying countermeasures, though neither has publicly disclosed the full extent of their detection capabilities.

Zomato has introduced a behavioural scoring system it refers to as a "Karma score." Rather than evaluating each complaint in isolation, the system tracks a user's complaint history over time. A user who frequently raises complaints — particularly complaints that result in refunds — receives a lower Karma score, and their future claims are subjected to greater scrutiny or reduced benefit of the doubt. This is a sensible approach to distinguishing habitual fraudsters from genuine complainants.
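The public descriptions of the Karma score stop at the level of "complaint history affects scrutiny," so the details below are purely illustrative. This sketch shows the general shape of such a behavioural score: a user's complaint and refund rates are combined into a single number, and low scorers are routed to manual review. The field names, weights, and threshold are all assumptions, not Zomato's actual formula.

```python
from dataclasses import dataclass

@dataclass
class ComplaintHistory:
    """Hypothetical per-user record; field names are illustrative."""
    total_orders: int
    complaints: int
    refunds_granted: int

def karma_score(h: ComplaintHistory) -> float:
    """Toy behavioural score in [0, 1]; lower means more scrutiny.

    Penalises a high complaint rate and a high refund-per-complaint
    rate, mirroring the idea of judging each new claim against the
    user's history rather than in isolation.
    """
    if h.total_orders == 0:
        return 0.5  # no history yet: start from a neutral score
    complaint_rate = h.complaints / h.total_orders
    refund_rate = h.refunds_granted / h.complaints if h.complaints else 0.0
    score = 1.0 - 0.6 * complaint_rate - 0.4 * refund_rate
    return max(0.0, min(1.0, score))

def requires_manual_review(h: ComplaintHistory, threshold: float = 0.5) -> bool:
    return karma_score(h) < threshold

# A user with 3 refunded complaints across 10 orders scores low and is
# routed to manual review; an occasional complainant is not.
frequent = ComplaintHistory(total_orders=10, complaints=3, refunds_granted=3)
occasional = ComplaintHistory(total_orders=50, complaints=1, refunds_granted=1)
print(requires_manual_review(frequent))    # True
print(requires_manual_review(occasional))  # False
```

The design point is the one the article makes: the signal is the pattern over time, not any single image, which is why this kind of scoring is robust even when an individual fake is undetectable.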

Both platforms are also deploying AI-driven image analysis to detect inconsistencies in submitted complaint photographs. This includes checking for visual artefacts common in AI-generated images — unnatural textures, inconsistent lighting, improbable colour gradients, and text rendering errors. Manual reviews are triggered for claims that score above a suspicion threshold, with human agents examining flagged submissions before approval.
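The triage step described above can be sketched as a simple weighted scoring pass. In this hypothetical version, an upstream image model has already produced a per-artefact signal in [0, 1] for each of the cues the article lists, and claims whose combined suspicion score crosses a threshold are held for a human agent instead of being auto-processed. The signal names, weights, and threshold are assumptions for illustration, not either platform's real pipeline.

```python
# Illustrative weights over the artefact cues named in the article.
ARTEFACT_WEIGHTS = {
    "unnatural_texture": 0.35,
    "inconsistent_lighting": 0.25,
    "improbable_gradient": 0.15,
    "text_rendering_error": 0.25,
}

def suspicion_score(signals: dict[str, float]) -> float:
    """Weighted sum of per-artefact signals; missing signals count as 0."""
    return sum(ARTEFACT_WEIGHTS[name] * signals.get(name, 0.0)
               for name in ARTEFACT_WEIGHTS)

def route_claim(signals: dict[str, float], threshold: float = 0.4) -> str:
    """Hold suspicious claims for a human; auto-process the rest."""
    return "manual_review" if suspicion_score(signals) >= threshold else "auto_process"

# Waxy textures plus garbled on-image text, as in the bakery case,
# push a claim over the threshold; a mildly odd but plausible photo
# sails through.
fake = {"unnatural_texture": 0.9, "text_rendering_error": 0.8}
real = {"inconsistent_lighting": 0.2}
print(route_claim(fake))  # manual_review
print(route_claim(real))  # auto_process
```

The threshold is the operational lever: set it too low and honest customers get stalled, set it too high and the automated approvals the scam exploits continue.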

Real-time photo uploads — where the app requires a photo to be taken live within the application rather than uploaded from a gallery — are also being explored as a structural deterrent. If a customer cannot submit a pre-edited image but must photograph the food in real time, the scope for using pre-generated fake images is significantly reduced.

AI vs AI: The Detection Problem

Here is the deeper challenge that makes this fraud difficult to solve permanently: the same AI technology being used to create fake food images is also improving faster than the detection systems designed to catch it.

When Zomato's image analysis system learns to detect the visual artefacts produced by today's generative tools, the next generation of those tools will produce artefacts that are harder to detect. This is not a hypothetical future problem — it is already playing out. The images produced by the latest generative models are substantially more convincing than those produced by tools from even twelve months ago. Detection systems that are calibrated for current fraud patterns will be partially obsolete by the time they are deployed at scale.

This does not mean the fight is hopeless. But it does mean that technical detection alone — AI checking AI — is not a sufficient strategy. Behavioural signals, restaurant-side verification, and structural changes to how complaint evidence is collected are likely to be more durable defences than image analysis alone. The fraud is a systems problem, not purely a computer vision problem.

The platforms that win this battle will be the ones that make the evidence harder to fake in the first place — not just easier to catch after the fact.

The Bigger Picture for India

India's food delivery market is one of the largest and fastest-growing in the world, with Zomato and Swiggy together processing tens of millions of orders each month. The infrastructure that makes this possible — automated dispute resolution, real-time payments, digital complaint systems — was built for efficiency at scale. That same efficiency creates attack surfaces that did not exist when complaints were handled manually.

What is happening in food delivery is a preview of a broader pattern that will play out across Indian digital commerce in the coming years. Anywhere that automated systems make trust-based decisions at scale — insurance claims, e-commerce returns, customer service escalations — generative AI tools will be used to manufacture the evidence those systems rely on. The food delivery fraud problem is early and visible. Similar dynamics are building in adjacent sectors.

For fresh graduates entering roles in operations, customer experience, compliance, or digital product management in any of these sectors, the ability to understand and design against AI-assisted fraud is already a relevant skill. The question "how would someone try to deceive this system using AI?" is one that every product and operations team in Indian digital commerce should be asking right now.

What Honest Customers Should Know

If you are a genuine customer who has received a legitimately damaged or incorrect order, none of these fraud countermeasures should significantly affect you — as long as you follow the right steps.

Photograph the issue immediately on delivery, before touching or consuming any of the food. Take the photo within the delivery app if possible, since in-app photos carry timestamp and location metadata that make them significantly more credible than gallery uploads. Be specific in your written complaint — describe exactly what was wrong and which part of the order was affected. Automated systems flag vague, generic complaints as suspicious more often than specific, detailed ones.
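One reason the in-app photo carries more weight can be shown with a minimal timestamp check. A platform that knows when an order was delivered can accept complaint photos only if they were captured shortly afterwards; a gallery image edited hours later fails the check. This is a sketch of that idea only — the 15-minute window and the assumption that capture time is trustworthy are both hypothetical.

```python
from datetime import datetime, timedelta

def photo_matches_delivery(capture_time: datetime,
                           delivery_time: datetime,
                           window: timedelta = timedelta(minutes=15)) -> bool:
    """Accept a complaint photo only if captured soon after delivery.

    An in-app photo taken on the doorstep passes; a gallery upload
    produced long after the order arrived does not. Assumes the
    capture timestamp itself cannot be forged (which is exactly what
    live in-app capture is meant to guarantee).
    """
    return timedelta(0) <= capture_time - delivery_time <= window

delivered = datetime(2026, 1, 10, 19, 30)
print(photo_matches_delivery(datetime(2026, 1, 10, 19, 34), delivered))  # True
print(photo_matches_delivery(datetime(2026, 1, 10, 22, 5), delivered))   # False
```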

If your genuine complaint is rejected and you believe the platform has made an error, escalate through the platform's consumer grievance channels rather than simply accepting the decision. India's Consumer Protection Act provides recourse for customers who have been genuinely wronged by platforms, and most platforms have formal grievance officer contacts that are legally required to respond within a specified timeframe.

The rise of AI fraud on food delivery platforms is not a reason to distrust the system entirely. It is a reason to understand how the system works, what evidence it relies on, and how to use it correctly when you have a legitimate complaint. The honest user who understands the platform is always in a stronger position than one who does not.