TECHPRENEUR and billionaire Elon Musk, CEO of Tesla and SpaceX, has announced that his AI company, xAI, is developing new features for Grok to detect deepfake videos and trace their origins online.
A deepfake is AI-generated media that realistically alters or fabricates a person’s voice or image, often used to deceive or misinform.
On October 9, 2025, Musk replied to a post on X by Matt Walsh, who warned about the growing threat of realistic AI-generated videos.
Musk said: “Grok will be able to analyse the video for AI signatures in the bitstream and then research the Internet to assess origin.”
Grok later confirmed in the same thread that it’s being trained to spot subtle AI artifacts in videos — such as inconsistencies in compression or generation patterns — that humans can’t detect. It will also use online data to verify a video’s source.
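Grok's actual detection method has not been published, but the general idea of flagging statistical inconsistencies between frames can be illustrated with a toy sketch. Everything below is a hypothetical simplification for illustration: the function names, the variance heuristic, and the threshold are all assumptions, not xAI's approach.

```python
# Toy illustration of "inconsistency" detection: flag frames whose
# pixel-intensity variance is a sharp outlier relative to the rest of
# the clip -- a crude stand-in for real compression-artifact analysis.

def frame_variance(frame):
    """Mean squared deviation of pixel intensities in one frame."""
    n = len(frame)
    mean = sum(frame) / n
    return sum((p - mean) ** 2 for p in frame) / n

def flag_inconsistent_frames(frames, threshold=3.0):
    """Return indices of frames whose variance far exceeds the
    clip's median variance (a simple outlier test)."""
    variances = [frame_variance(f) for f in frames]
    median = sorted(variances)[len(variances) // 2]
    return [i for i, v in enumerate(variances)
            if median > 0 and v / median > threshold]

# Synthetic "video": five near-flat frames plus one noisy, suspect frame.
normal = [[100, 101, 99, 100, 102, 98]] * 5
suspect = [[10, 200, 40, 180, 20, 190]]
frames = normal[:3] + suspect + normal[3:]
print(flag_inconsistent_frames(frames))  # [3] -- the noisy frame
```

Real detectors operate on far richer signals (codec bitstreams, generation fingerprints, temporal coherence), but the outlier-scoring structure is similar in spirit.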
This development, if delivered, would be a major step forward in the fight against deepfake-driven misinformation.
Rooting out deepfakes has not become any easier lately despite improvements in detection tools, as deepfake technology keeps evolving and manipulated videos and images grow harder to spot.
Deepfakes now mimic facial expressions, voices, and even body movements with impressive accuracy, allowing misinformation, political disinformation, and scams to spread more effectively than before.
Tools to create deepfakes have also become more accessible, meaning that anyone with basic tech skills can generate convincing fake media.
Social media companies have increased efforts to flag and remove deepfake content, using AI systems and manual review teams.
However, the volume of content posted every minute is overwhelming, making it impossible to catch everything.
Interest in deepfake searches online
Interest in deepfake searches online has been growing at a faster pace elsewhere, with little interest in Africa, according to the 2024 Global Deepfake Interest Report.
The report shows that South Korea (13,399), Czech Republic (11,356), and Sweden (10,443) are the only places with more than 10,000 deepfake-related searches per million people.
In Africa, South Africa has the highest interest, with 176 searches per one million people, while Egypt and Libya have 19 and 13 searches per one million people, respectively.
Tools to root out deepfakes
Detecting deepfakes has become increasingly challenging as the technology evolves, but several advanced tools have been developed to address the problem, among them:
- Intel’s FakeCatcher – which stands out by analysing subtle blood flow patterns in video pixels, achieving a 96% accuracy rate in real-time detection, as per Lifewire.
- Meta’s Video Seal – offers an open-source solution that embeds invisible, tamper-resistant watermarks into videos, aiding in the identification of AI-generated content.
- Reality Defender – provides a user-friendly browser extension, enabling individuals without technical expertise to assess the authenticity of content.
- Amber Video – caters to media organizations by not only detecting deepfakes but also offering editing solutions to ensure content authenticity.
- Vastav AI – developed by Zero Defend Security, is a cloud-based platform designed to detect AI-generated or modified videos, images, and audio.
- FairVoice – an initiative by UC Berkeley, focuses on equitable audio deepfake detection, addressing biases across diverse speaker demographics.
These tools have been making it easier to debunk deepfakes, though some of them are not open source.
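The invisible watermarking approach used by tools like Video Seal can be illustrated, in principle, with a least-significant-bit sketch: hide a bit pattern in pixel values so small that the eye cannot see it, then read it back to verify provenance. This is a toy stand-in, not Video Seal's actual algorithm (which uses tamper-resistant, learned watermarks); all names below are hypothetical.

```python
def embed_watermark(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel.

    Toy illustration only -- real systems use robust, learned
    watermarks that survive compression and editing.
    """
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels, n_bits):
    """Read the hidden bits back from the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

frame = [120, 121, 119, 118, 122, 120, 117, 123]  # 8-bit grey pixels
mark = [1, 0, 1, 1, 0, 0, 1, 0]                   # watermark payload

stamped = embed_watermark(frame, mark)
print(extract_watermark(stamped, len(mark)))      # [1, 0, 1, 1, 0, 0, 1, 0]
print(max(abs(a - b) for a, b in zip(frame, stamped)))  # 1: visually invisible
```

Each pixel changes by at most one intensity level, which is why such marks are invisible to viewers, though a plain LSB scheme like this one would not survive re-encoding the way production watermarks must.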
Grok's challenges with geolocking
While xAI's Grok is on a path to roll out these crucial deepfake detection features, concerns are emerging over the tool's limited global accessibility due to geolocking.
This could also hinder collaboration, especially in regions facing a rise in politically motivated deepfakes and disinformation campaigns.
Without access to advanced tools like Grok, journalists, fact-checkers, and watchdog groups in affected countries may struggle to verify content quickly and accurately.
Additionally, geolocking may limit Grok’s ability to gather diverse data inputs needed to improve its deepfake detection accuracy across different languages, accents, and cultural contexts. – IOW Data.
