Grok, the AI app from Elon Musk's xAI, reportedly came dangerously close to being pulled from Apple's App Store over a growing deepfake controversy.

According to a new report, Apple warned xAI earlier this year that Grok could be removed entirely if it failed to address the spread of sexualized AI-generated images circulating on X. The warning came amid mounting pressure from US lawmakers, who had raised concerns about Grok's ability to generate explicit, non-consensual deepfakes of real people. Apple confirmed in a letter to senators that it had rejected earlier versions of the app, forcing xAI to make changes before it would allow updates to go live.
The situation, however, appears far from resolved. A recent investigation found that despite xAI's safeguards, including prompt filters, monitoring systems, and model updates, problematic content is still being generated and shared online. That includes AI-generated images depicting real individuals in revealing or suggestive scenarios.
xAI maintains that it strictly prohibits such use, but the persistence of these outputs raises questions about how effective those protections actually are. Apple, for its part, has made its stance clear: apps that enable this type of content violate its policies and risk removal if compliance isn't maintained.
For now, Grok remains available on the App Store after Apple determined that improvements had been made, but the situation is clearly on a knife's edge. If the issues continue, Apple appears unwilling to give xAI another chance: Grok could still be pulled from the App Store entirely.
