Unreliable AI Tools for Detecting Fake Content

AI can detect suspicious content, but the decision on truthfulness is made by people.
Illustration: Said Selmanović
The new digital era and the advancement of artificial intelligence (AI) technologies have brought numerous benefits and conveniences to the daily work of journalists. However, AI also has significant downsides, namely the ease with which fake yet highly realistic content, news, photos, and videos can be created. This makes it difficult to distinguish between the artificial and the real, between truth and falsehood.
For this reason, media professionals must remain vigilant in order to identify fraud and prevent potential harm. We sought answers to the question of how much AI helps them in this task.
Borislav Vukojević, senior assistant at the Department of Communication at the Faculty of Political Sciences, University of Banja Luka, has been monitoring the development of AI-assisted fact-checking and its application in major global media outlets for the past two years.
“AI can speed up the processing of raw data and the detection of ‘suspicious’ claims, but it cannot independently make a final decision on truthfulness—the human expert is still necessary to verify and contextualize the findings”, says Vukojević.
They cannot afford good tools
This is exactly the case at the Croatian portal Faktograf. The newsroom uses AI tools as an initial indicator that some content may be fake, after which verification follows.
“When tools indicate a higher probability that the content is AI-generated, we try to detect who first published the video or photo. In many cases, the original source notes that it is AI-generated content, but it spreads further without that label. With videos, we try to identify the clip that was used to generate it. In this way, it can be determined, for example, that a politician in an interview did not actually say what is claimed in the AI video. We also seek confirmation from individuals or institutions as to whether what we saw and heard really happened”, explains Ivica Kristović, deputy editor-in-chief of Faktograf.
Vesna Kerkez, editor at the Mondo web portal in Banja Luka, believes it would be useful to organize training at the newsroom level and invest in tools that can help detect disinformation.
“It would make our work much easier, especially in this era of pseudo-truth”, says Kerkez, who uses AI tools daily, but for other purposes such as structuring texts and conducting searches. She trusts these tools, though with caution.
Vukojević points out that most media outlets in Bosnia and Herzegovina lack the resources for their own research and development, or for licences to access commercial AI platforms. “Fact-checking is still done manually, and local portals most often do not even use free APIs like ClaimBuster”, he notes.
This is confirmed by Enes Hodžić, a journalist at the Balkan Investigative Reporting Network (BIRN) in BiH. He notes that AI-based tools are already widely available, but the most effective ones still remain out of reach for Bosnian media—especially for those engaged in daily reporting, as they require both time and money to use.
Local context is unknown to AI
“The development of all AI tools is still in its infancy, including those that could be useful for fact-checking. In BiH, as in many other fields, we lag behind developed countries due to various limitations. Still, there are many professionals who, in different ways and through their own efforts, are trying to bring a touch of AI into the fact-checking community in BiH. Unfortunately, due to various limitations, these tools are not actively used in our country”, says Hodžić.
He points to a few examples, such as PimEyes, a facial recognition tool particularly useful in investigative media.
“But its use is geographically restricted and not available in BiH. Others are impractical because they are based on English or simply too expensive for daily use.”
In uncovering disinformation, BIRN relies mainly on the work of authors and researchers, much like Raskrinkavanje, a platform dedicated to fact-checking information in the public sphere in BiH.
“We have started using some tools to check content suspected of being AI-generated, but the human factor remains key in our work—especially when it comes to more subtle forms of manipulating facts and narratives built on half-truths, and when the local context matters. So, AI tools are currently only an ‘aid’ to journalists and editors”, explains Rašid Krupalija, editor-in-chief of Raskrinkavanje.
Unaware, not malicious
Assoc. Prof. Dr. Senka Krivić, from the Faculty of Electrical Engineering at the University of Sarajevo, also highlights the need for locally adapted AI solutions.
“BiH, and the Balkans in general, have their own specificities that require adapting existing tools and developing new ones to understand language, phrases, cultural references, and specific narratives in this region”, says Krivić.
She also emphasizes the importance of educating citizens in digital and media literacy, warning that the public in BiH remains largely unaware of how disinformation works or the dangers it carries.
“Fake news and manipulative content are often shared unconsciously, out of ignorance rather than malice. In such an environment, even the most advanced AI systems cannot be fully effective without education and better public awareness”, notes Krivić.
Al Jazeera Balkans once had its own AI program, “Labeeb”, which proved highly useful in daily tasks such as handling transcriptions, translations, and text coding. After attending the International Journalism Festival in Perugia in April 2025, Haris Buljubašić began to seriously consider using AI to detect AI content.
“My first thought was—we are not ready for this. I fear that in the Western Balkans we are still not sufficiently trained to recognize fake AI content, and the consequences of, for example, deepfake political videos in this region could be huge”, says Buljubašić, former digital content producer at AJB.
Harder to detect low-quality fake photos
Speaking about the limitations of these programs, Buljubašić gives the example of the photo of Pope Francis in a large white jacket, which AI programs easily recognize as fake.
“But once that photo is repeatedly downloaded and reposted online, its quality degrades, as does the ability of AI programs to flag it as fake. One data point shows that the ability to detect a lower-quality version of the photo drops from 99 per cent to 33 per cent. The same happens if internet users slightly alter the original fake photo”, says Buljubašić.
Borislav Vukojević highlights several AI tools for detecting disinformation.
ClaimBuster detects “checkable” claims in text or speech and ranks them by importance for further verification. Full Fact Monitoring uses machine learning to automatically flag and classify claims in news and social media, though the final fact-check decision remains in human hands. Reuters News Tracer monitors X in real time, clustering similar tweets and assessing source credibility, but leaves ultimate confirmation to journalists. Logically combines AI claim assessment with human moderation, particularly on health and politics. Facticity.AI, still in beta, claims 92 per cent accuracy but has yet to prove its stability across languages and domains.
“All these tools achieve, at best, 70–90 per cent accuracy under ideal conditions, while on real-world samples they often drop to 50–60 per cent, which is below the reliability threshold required for independent use”, concludes Vukojević.
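The “first filter” role these tools play can be sketched in a few lines of code. The sketch below is an illustration, not any newsroom’s actual pipeline: the endpoint URL follows ClaimBuster’s publicly documented API, but the scoring threshold and the demo sentences and scores are invented for the example. Sentences scoring above the threshold are ranked and queued for human fact-checkers, who make the final call, as Vukojević stresses.

```python
import json
import urllib.parse
import urllib.request

# Public ClaimBuster claim-scoring endpoint (per its API documentation);
# requires a free API key. Not called in the offline demo below.
CLAIMBUSTER_URL = "https://idir.uta.edu/claimbuster/api/v2/score/text/"


def score_claims_live(text: str, api_key: str) -> list[dict]:
    """Query the ClaimBuster API; each result carries a 'score' in [0, 1]."""
    req = urllib.request.Request(
        CLAIMBUSTER_URL + urllib.parse.quote(text),
        headers={"x-api-key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]


def triage(scored: list[dict], threshold: float = 0.5) -> list[str]:
    """Keep only sufficiently 'checkable' sentences, ranked for human review."""
    flagged = [r for r in scored if r["score"] >= threshold]
    flagged.sort(key=lambda r: r["score"], reverse=True)
    return [r["text"] for r in flagged]


# Offline demo with hand-made scores (same shape as the API's response):
sample = [
    {"text": "Unemployment fell by 40 per cent last year.", "score": 0.91},
    {"text": "It was a lovely sunny morning.", "score": 0.07},
]
print(triage(sample))  # only the statistical claim survives the filter
```

The design point is the one the article makes: the tool only prioritizes; nothing in `triage` decides whether a flagged claim is actually true.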
Elections lost in Slovakia due to a deepfake
Faktograf.hr has, over the past two years, reported on dozens of social media scams involving AI-generated content. Most often, these are related to investment scams or the sale of fake medicines.
“They use AI-generated videos of well-known figures, politicians, or doctors. Some striking examples detected by Faktograf were videos of former Croatian Health Minister Vili Beroš allegedly promoting a miracle drug, while in investment scams we detected AI videos of Croatian President Zoran Milanović and Prime Minister Andrej Plenković”, says Kristović, editor of the “Razotkriveno” (Exposed) section.
According to him, scams using fake photos or videos in the region have not yet entered the political sphere, but they may well do so. The Slovak politician Michal Šimečka, currently a strong candidate in national elections, has already felt the “power” of deepfakes firsthand.
“This really sounds like me”, Šimečka told CNN after a fake audio recording emerged in which he appeared to discuss rigging elections and raising beer prices.
The Crime and Corruption Reporting Network (KRIK) in Serbia also closely monitors the development of AI software relevant to journalism worldwide. They use AI tools in different ways, including creating social media content, subtitling videos, and illustrating articles. In detecting and analysing disinformation, they, like others, believe human investigation and verification remain crucial.
“We tested several software programs designed for voice recognition, hoping they would help us distinguish real human speech from AI-generated speech. In one test, I recorded my own voice—first speaking normally, then making grimaces. Ironically, in both cases the software assessed it as a robot voice”, said Stefan Kosanović, a journalist who otherwise uses AI for various tasks ranging from proofreading and translating texts to helping with cooking.
AFP signed deal with French start-up
Kosanović believes that fact-checkers in Serbia have not yet encountered large-scale production of AI-generated fake news, but such a scenario cannot be ruled out in the future. Despite the many advantages AI brings to newsrooms, he, like others, also points out its shortcomings—most importantly unreliability and error-proneness—which require the full involvement of journalists in every process.
“The greatest help for us would be the automation of processes for identifying potentially manipulative and false news that require verification. There are already initiatives to develop software for this purpose. If such tools were open-source and capable of objectively analysing thousands of texts daily, it would greatly facilitate the work of small newsrooms. We also expect further progress of AI in content analysis”, he says.
And how do some international newsrooms approach this? Vukojević tells us that Reuters has for years relied on News Tracer and Lynx Insight in the initial processing of information. AFP recently signed an agreement with the French start-up Mistral to integrate its articles into an AI-powered chatbot that helps with fact-checking.
In the US, newsrooms such as The Messenger are experimenting with the Seekr tool, which automatically assesses headlines for subjectivity, bias, and clickbait. Larger fact-checking organizations (PolitiFact, Snopes, Full Fact) are also developing or piloting AI tools, but use them only as a ‘first filter’ or for statistical analyses, rather than for making final decisions.
Success of tools depends on literacy of consumers
“As someone engaged in the development of AI methods aimed at benefiting humanity, I firmly believe that the relationship between humans and AI should be team-based, built on mutual understanding, cooperation, and complementarity, not mere oversight or control. AI excels at processing large volumes of data, recognizing patterns, and automating tasks, while humans are unparalleled in providing context, empathy, evaluation, and ethically sensitive decision-making. Together, they can achieve much more than individually”, concludes Assoc. Prof. Dr. Senka Krivić.
Raskrinkavanje editor-in-chief Rašid Krupalija stresses that the fight against disinformation must also focus on improving citizens’ media and information literacy in the digital age, which will greatly influence the effectiveness of AI tools. In other words, AI’s usefulness is limited if we do not have educated consumers of media and social network content.
This text was produced with the financial support of the European Union. The content is the sole responsibility of the Mediacentar Foundation and does not necessarily reflect the views of the European Union.