As big tech continues to improve AI models, indications show that human oversight remains vital.
This week, Google chief executive Sundar Pichai warned that people should not “blindly trust” information produced by AI systems, saying current models remain prone to errors.
Speaking to the BBC, the Alphabet boss urged users to rely on AI alongside other sources because the technology is still developing and inaccurate at times.
Pichai said this showed the importance of “a rich information ecosystem”, adding: “This is why people also use Google search, and we have other products that are more grounded in providing accurate information.”
Shifting the burden of verification to users
Experts argue that major AI companies should not shift the burden of verification to users, and should instead focus on improving the reliability of their systems.
While Pichai accepted that the tools were useful for creative writing, he said people should use them for what they are good at rather than blindly trusting everything they say.
Google uses disclaimers to warn that its models can make mistakes, but this has not protected it from criticism.
AI Overviews, a tool which summarises search results, faced mockery earlier in the year for presenting inaccurate answers.
The wider problem of generative AI producing misleading or false information continues to worry researchers, and the industry still lacks a fact-checking method that works at scale.
Self-verification
In October, Elon Musk, executive chairman of X, said his AI company xAI is developing features for Grok to detect deepfake videos and trace their origins online.
A deepfake is AI-generated media that alters or fabricates a person’s voice or image.
On 9 October 2025, Musk replied to a post on X by Matt Walsh, who warned about the growing threat of AI-generated videos.
Musk said: “Grok will be able to analyse the video for AI signatures in the bitstream and then research the Internet to assess origin.”
Can self-verification be trusted?
According to Democracy in Africa (DIA), human oversight remains crucial to curb misinformation and propaganda.
This is because AI is increasingly used by powerful states and companies to shape global standards, control data flows, influence economies and strengthen strategic advantage.
For instance, DIA says Africa has become increasingly vulnerable to these challenges due to changes outside its borders, with research showing that 60% of disinformation campaigns targeting Africa are foreign-sponsored, primarily from Russia, China, and Gulf states.
Human oversight increasingly important
DIA says fact-checking organisations have made progress, but they face a tough task. Fact-checking is a manual process that cannot match the speed at which AI-generated content spreads.
Current measures used by social media firms, such as forwarding limits and labelling, are easy to bypass and often ineffective against audiovisual AI-generated content such as TikTok videos, according to IOW Data.
