New Video of Fugitive Raises Questions About AI Manipulation

A recently surfaced video allegedly featuring fugitive Miloš Medenica has sparked significant debate on social media, with many questioning its authenticity. Medenica was sentenced to ten years and two months in prison on January 28, 2023, for his role in a cigarette smuggling operation. In the footage, he appears to address Lazar Šćepanović, the director of the Police Directorate of Montenegro, claiming he will continue to make statements until he is captured.

In the video, Medenica reportedly states, “I will be speaking every day until I am arrested or until they deny that I am a bot.” His assertion has led to discussions on whether the footage is a product of artificial intelligence (AI) manipulation or an authentic recording. The Police Directorate has yet to officially comment on this latest video, which follows an earlier release confirmed to be AI-generated.

Experts in digital forensics have highlighted concerns about the potential of AI to create misleading content. Doc. Dr. Nikola Cmiljanić, a professor at the Faculty of Information Technology and a forensic expert, explained to Pobjeda that the emergence of videos featuring individuals wanted by law enforcement raises significant skepticism among the public and media.

Cmiljanić emphasized the need for a comprehensive approach in serious cases, stating, “One should not rely on a single ‘quick check’ but rather a combination of methods and multiple independent indicators.” This caution is essential, particularly when determining the authenticity of video content.

Understanding AI-Generated Content

AI-generated or modified video content refers to recordings where elements such as image or sound are created or altered using generative models. Cmiljanić noted that this technology can produce highly convincing videos, where algorithms can generate new footage, swap faces, or even create voices that mimic real individuals.

The quality of AI-generated videos can be so high that an average viewer might struggle to assess their authenticity at a glance. Cmiljanić explained that deepfake technology is a specific subset of AI video content aimed at impersonation, making it appear as though a specific person is saying or doing something they did not actually do.

The broader category of AI-generated videos can include entirely synthetic scenes or manipulations that do not necessarily involve face-swapping, such as altering backgrounds or modifying details within the frame.

Challenges in Detection and Attribution

Cmiljanić indicated that the effectiveness of forensic analysis largely depends on the quality and originality of the material. The most reliable assessments occur when original files or recordings are available, allowing for the examination of technical traces in the footage, such as file structure, codecs, and consistency in lighting and shadows.
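The "file structure" checks described above can be illustrated with a small sketch. The Python snippet below is a generic illustration, not the tooling forensic examiners actually use: it walks the top-level boxes of an MP4/ISO-BMFF container, the kind of low-level layout that often differs between a camera original and a re-encoded social-media upload. The sample buffer is hand-crafted for demonstration.

```python
import struct

def list_mp4_boxes(data: bytes):
    """Walk top-level MP4/ISO-BMFF boxes: each box starts with a
    4-byte big-endian size followed by a 4-byte ASCII type code."""
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size < 8:  # malformed (or 64-bit size); stop this simple walk
            break
        boxes.append(box_type.decode("ascii", errors="replace"))
        offset += size
    return boxes

# A hand-crafted buffer standing in for a real file: an 'ftyp' box
# (brand "isom") followed by an empty 'free' box.
sample = (
    struct.pack(">I4s4sI", 16, b"ftyp", b"isom", 0)
    + struct.pack(">I4s", 8, b"free")
)
print(list_mp4_boxes(sample))  # → ['ftyp', 'free']
```

A transcoded upload typically shows a different box layout, brand, and codec metadata than the original recording, which is one reason examiners prefer original files over copies pulled from social media.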

In response to inquiries about the forensic tools available to Montenegrin law enforcement for identifying AI-generated content, Cmiljanić admitted he was not familiar with specific resources or their practical application. He noted that tools for automatic detection of AI-generated videos are still in the early stages of development and standardization, resulting in variable reliability, particularly when the footage is sourced from social media or has undergone multiple compressions.

Attribution of AI-generated content is more complex still. Identifying the creator and distributor of such material often requires a combination of digital forensics and traditional investigative methods. Traces are typically found on devices used for processing and publishing the content, such as computers or phones.

Cmiljanić pointed out that the dissemination of content through fake accounts and VPNs complicates the chain of responsibility, as such tactics are designed to obscure the source.

As the debate surrounding the authenticity of the latest video of Medenica continues, Cmiljanić urges caution, particularly in sensitive situations. “When sensational videos emerge, especially involving individuals who are wanted, it is crucial to wait for official verification and not jump to conclusions based solely on impressions,” he advised.

The potential for AI to create convincing but misleading videos poses real risks, including inciting panic or damaging reputations. Cmiljanić stressed the importance of verifying sources and original materials, as the speed of information dissemination often outpaces the verification process, leading to serious consequences.

Miloš Medenica remains at large after fleeing Montenegro via an illegal border crossing into Serbia, where he is reportedly seeking assistance from influential contacts. He has been identified as a key figure in organizing and financing a criminal group involved in smuggling large quantities of cigarettes from the Free Trade Zone in Bar, Montenegro.

Milica Kovačević, Program Director of the Center for Democratic Transition, stated that her organization attempted to verify the latest video but could not definitively conclude whether it was AI-generated or authentic. She highlighted that they used advanced tools and consulted prominent experts, who also could not confirm the video's origin.

Kovačević expressed concern over the Police Directorate's quick determination that the video was AI-generated, stating, "For the sake of trust, they should publish how they reached that conclusion and provide the forensic evidence." The ongoing scrutiny of this case underscores the growing importance of credible verification in an era increasingly influenced by digital manipulation.