Alla Mazur - attack 02.08.2025
City where the online attack was recorded: Kyiv
Date of the online attack: 02.08.2025
Recorded types of online attacks:
Source of Threat: Unknown
Social network, website, or other online space of the attack:
Facebook

On August 2, 2025, a video was circulated on Facebook using the likeness of well-known TV presenter Alla Mazur (1+1 channel). In the video, she allegedly advertises the “Institute of Veins” and a remedy for varicose veins.

To create this video, the perpetrators used:
- Emotional messages (“Attention, anyone with varicose veins!”, “Helping all patients”) to increase audience trust.
- AI-generated voice and image, imitating the appearance and speaking style of the presenter. The voice in the video resembles Mazur’s real voice but sounds unnatural: flat, mechanical, and lacking intonational nuance. Lip movements lag behind the audio, which is typical for synthetically generated videos.
- Unstable facial expressions: the face remains static, with lips moving in a limited way, lacking natural jaw and muscle dynamics. The video looks as though lip-sync deepfake technology was overlaid onto real footage.
- Fake content styled as a news segment (“Tyzhden” program, visual style of 1+1) to create an illusion of authenticity.
The manipulation involved the unauthorized use of Alla Mazur’s likeness and name without her knowledge, the creation of a fake news piece styled as a TV program to enhance credibility, the use of clickbait tactics promising viewers a “remedy” after completing a survey, and the spread of fraudulent medical content exploiting health concerns, posing particular risks to vulnerable people. The perpetrators’ goals were to collect personal data via the fake “survey,” to sell or advertise dubious “medicines,” and to exploit public trust in a well-known journalist and the 1+1 brand to legitimize the scam.
This case combines several key aspects of manipulation: creation of a fake news format for credibility, unauthorized use of a journalist’s likeness and name, clickbait tactics tied to health-related scams, and dissemination of fraudulent medical content. The aim was both to deceive and to profit, while undermining public trust in journalism and media institutions.
The consequences of such manipulations include reputational risks for Alla Mazur and the 1+1 channel, the spread of health-related disinformation that may harm people who believe the false advertising, and the encouragement of broader use of deepfake practices in Ukraine — particularly through the exploitation of well-known journalists’ images in fraudulent schemes.
This case is significant as a telling example of how AI can be used for fraud in the health sector and for attacks on media trust by undermining the authority of major TV channels. It also highlights the gender dimension of such attacks, since the likeness of a woman journalist was exploited for manipulation.