Apple has pulled 15 apps from the App Store after a damning investigation found that its own search and advertising systems were actively steering users toward nudify AI apps — tools that digitally strip clothing from photos of real people.
The findings, published by the nonprofit watchdog Tech Transparency Project (TTP), have reignited a fierce debate over how seriously Apple enforces its own content policies — and who ultimately pays the price when those rules are ignored.
What the TTP Report Found
The Tech Transparency Project ran searches using terms like “nudify,” “undress,” “deepnude,” and “AI NSFW” across both the Apple App Store and Google Play Store. The results were striking.
About 40% of the top 10 results for each search term were apps that could render women nude or scantily clad. These weren’t fringe apps buried deep in the results; they surfaced at the top, in plain view of anyone who looked.
The new report alleges that the platforms are “key participants in the spread of AI tools that can turn real people into sexualized images.” The concern isn’t just that these apps exist. It’s that the app stores’ own infrastructure was helping users find them.
How Apple’s Own Ads Promoted Nudify AI Apps
Perhaps the most damaging finding in the TTP report is that Apple wasn’t just failing to block these apps — it was running paid advertisements for them.
Apple controls all advertising in its App Store and has a stated policy against ads promoting adult content. Despite that, three of TTP’s App Store searches returned a nudify ad as the very first result.
The autocomplete feature made things worse. Typing “AI NS” into the App Store search field produced the suggestion “image to video ai nsfw,” which led to more nudify apps among the top results.
This matters because it means Apple’s advertising system was directly monetizing harmful content. The platform wasn’t a passive host; it was an active participant.
The Scale of the Problem
The numbers behind the TTP report put the problem into sharp focus.
The apps identified across both stores have been downloaded 483 million times and earned over $122 million in lifetime revenue. Apple and Google take a cut of that through paid subscriptions and in-app purchases, which TTP says may explain why enforcement has been lax.
In total, 46 apps were identified in Apple App Store searches, of which 18 offered nudifying features. On Google Play, 49 apps appeared in equivalent searches, 20 of which had the same capabilities.
One of the most alarming statistics from the investigation: 31 nudify apps were rated suitable for minors — a notable finding given the growing number of sexual deepfake scandals in schools.
That means parents who checked age ratings before handing a phone to their child would have had no idea what that child was actually downloading.
Apple and Google’s Response
Once TTP’s findings were published and Bloomberg picked up the story, both companies moved — though critics say the response was too slow and too narrow.
Apple told Bloomberg that it removed 15 apps identified by the group, while Google said it suspended a number of them. Other such apps remain available, and the App Store had been displaying ads for nudify apps in violation of its own advertising policies.
Neither company publicly commented on why these apps passed the initial review process. The fact that they did — and that paid ads were running alongside them — points to a systemic failure, not a one-off oversight.
TTP director Katie Paul put it plainly: “It’s not just that the companies are failing to actually appropriately review these apps and continue to approve them and profit from them.”
The Grok Connection
The TTP report also flagged an unexpected link to xAI’s Grok chatbot.
An app called “Uncensored AI — No Filter Chat” appeared on the App Store as a general AI chat and photo editing tool. In TTP’s testing, it generated manipulated images on request, while stating that data “may be processed by xAI.”
The app developer confirmed they were using Grok for image generation but claimed they “had no idea it was capable of producing such extreme content.” The developer pledged to tighten moderation settings.
The developer has since rebranded the app as “Chat AI – Simple AI” and raised its age rating from 16+ to 18+.
Meanwhile, Grok itself remains on the App Store. Apple has reportedly threatened in private to remove it over its ability to generate deepfake nude images, but as of the latest reports, no removal has taken place.
What the Rules Say — and What Actually Happened
Apple’s own App Store Review Guidelines are explicit. The Safety section includes rules against apps containing upsetting or offensive content. Guideline 1.1.4 directly bans content that is overtly sexual or pornographic in nature, and Guideline 1.2, User-Generated Content, requires a mechanism for filtering objectionable material.
These rules exist. The problem is they weren’t enforced.
The failure doesn’t stop at the developers: every one of these apps had to pass Apple’s App Store review process before it could be listed. That process is Apple’s responsibility, and each app that slipped through represents a failure at the platform level, not just by a developer.
Governments Are Watching
The TTP findings land at a moment of heightened regulatory scrutiny around AI-generated sexual content.
The proliferation of nudify and deepfake apps has pushed some governments to propose laws against them. The UK’s Children’s Commissioner recently called for a ban on AI deepfake apps that create nude or sexual images of children. The US and other countries have proposed or enacted laws banning explicit deepfakes, and the California Attorney General recently sent Elon Musk’s X a cease and desist order over Grok’s explicit deepfakes.
For Apple, the timing is particularly uncomfortable. The company has built its brand around privacy and user safety. Being named in a report about hosting and advertising non-consensual AI imagery directly contradicts that positioning.
What Needs to Change
The TTP investigation points to several gaps that need urgent attention.
First, Apple’s app review system needs better detection tools for apps that obscure harmful functionality behind generic descriptions. Several nudify AI apps in the report marketed themselves as photo editors or general AI tools — nothing in their store listings flagged their actual capabilities.
Second, Apple’s ad platform must enforce the same content standards it claims to uphold. Running paid placements for apps that violate its own rules is not just a PR problem — it is a policy failure with real-world consequences.
Third, age ratings need to reflect actual content risk. The investigation found 31 nudify apps that were rated suitable for minors. That is not a minor clerical error. It represents 31 missed opportunities to protect the most vulnerable users.
Apple removing 15 apps after a media report is a reactive measure. What is needed is a proactive enforcement architecture — one that does not require a nonprofit watchdog to force action after the damage is done.
The App Store has long been positioned as a safer, more curated alternative to the open web. Reports like this one challenge that claim directly. Apple now faces mounting pressure — from regulators, researchers, and its own users — to prove that its policies mean something beyond the text of a developer agreement.
Stay informed on the latest in tech accountability and AI regulation — bookmark Global Report Online for in-depth coverage.
#Apple #AppStore #NudifyAI #DeepfakeApps #AIRegulation
