How Generative AI Is Becoming a Tool to Discredit Women Journalists and Fuel Information Attacks Against Ukraine

23.01.2026

Generative artificial intelligence has significantly lowered the barrier to producing fake content that mimics journalistic formats, recognizable public figures, and media brands. The theft of a well-known woman journalist’s image, voice, or style, the imitation of “news reports” using the logos of real media outlets, and the mass distribution of such content on social media are becoming tools for professional discreditation, undermining trust in the news, and destabilizing Ukraine’s information environment.

Technology-Facilitated Gender-Based Violence (TFGBV) targeting Ukrainian women journalists became one of the key areas of Women in Media's systematic work in 2025. The Online Attacks Map has recorded numerous incidents connected to women media professionals' work. A new, but already widespread, type of such attack involves the use of generative AI, specifically the creation of AI-generated photo, video, and audio materials that use women journalists' likenesses without their consent.

These attacks can be long-term and may manifest through sustained harassment, abusive comments, threats on social media and in emails, the dissemination of personal data, as well as the creation of manipulative content using a woman journalist’s images. The emergence of widely available generative AI tools has significantly expanded the scale and speed of such attacks, making them harder to debunk and reducing accountability for perpetrators.

Women journalists are a distinct target because professional discreditation is combined with gender-based forms of humiliation, including sexualization and objectification. Because many of them serve as the public “faces” of media outlets and enjoy audience trust, the theft of their image or voice becomes an effective tool of information manipulation.

As shown by Women in Media's study "Her Voice, Their Target," 81% of surveyed women journalists had experienced digital attacks, and 14% reported that online threats spilled into real life — from stalking to the exposure of personal data.

The emergence and wide availability of AI tools, particularly for generating photo, video, audio, and text content, substantially expand the opportunities to carry out such attacks and increase their scalability. According to the study “When Artificial Intelligence Turns Hostile,” among 119 Ukrainian women journalists surveyed by the Women in Media NGO, 7% had already encountered AI-enabled online attacks, and another 16% reported observing similar attacks against their colleagues.

Professional discreditation as the primary goal of AI-enabled attacks

Online attacks that use generative AI to target women journalists most often have a clear objective: professional discreditation. The aim is not only to harm an individual woman media professional, but also to undermine trust in the outlet she represents and in journalism overall.

These attacks follow several recurring mechanisms: stealing a journalist’s likeness or voice; falsely attributing content to real media outlets (logos, visual styles, news lower-thirds); and mass distribution of manipulative materials across social media and messaging apps. Generative AI can reproduce recognizable faces and voices with a high degree of plausibility, making it harder for audiences to quickly identify fakes.

As a result, a journalist may find herself in a situation where statements or actions she never made are attributed to her, while the fabricated content is presented as part of a real journalistic product. This creates risks not only to the woman journalist’s personal reputation, but also to the newsroom, TV channel, or publication whose brand is used without permission. In a number of cases, these attacks show signs of coordinated information operations aimed at fostering distrust of Ukrainian media and inflaming internal social tensions.

Below are several illustrative cases documented by Women in Media that show how generative AI tools are used for professional discreditation.

Stealing a journalist’s likeness and a fake “news” reality

On December 9, 2025, the Telegram channel @zeleboba_ai posted a message with the lead-in: “Today, businessman Ihor Kolomoisky suddenly died of a heart attack in a pre-trial detention center,” — Nataliia Moseichuk. The post included a video featuring a woman who visually resembled Nataliia Moseichuk, a well-known host on the 1+1 TV channel. The clip contained the channel’s watermark, the logo of the morning program “Breakfast with 1+1,” and the journalist’s first and last name, creating the impression of a real news product.

In the video, the woman emotionally reports the alleged death of Ihor Kolomoisky, a former co-owner of media assets within the 1+1 holding. The content shows signs of being AI-generated: an overly “idealized” image, unnatural tear movement, slight video-audio desynchronization, and incorrect stress patterns in pronunciation. The clip then shows an AI-generated image of a man on a metal table in a room stylized as a morgue, as well as a photo of Kolomoisky with a mourning ribbon. In another corner of the frame is the logo of TSN – Television News Service.

This case combines several forms of technology-facilitated gender-based violence: impersonation (the theft of a recognizable woman journalist’s likeness), gendered disinformation, and online defamation through the distribution of a fake video presented as a statement by a real media professional. The goal of such actions may be to professionally discredit both Nataliia Moseichuk and the 1+1 TV channel. According to Women in Media, online attacks against Nataliia Moseichuk, as one of the channel’s public faces, were recorded in at least seven instances, including attacks involving AI tools.

A different form of professional discreditation is illustrated by the case of Olena Mudra, a journalist from Uzhhorod. During the campaign against her, artificial intelligence tools were used not only to attack the journalist herself, but also to draw in her family. Unknown actors created and disseminated a video using an image of her son, overlaying it with a generated voice, and also spread manipulative disinformation messages on behalf of a generated character, Marik Fedirko.

“This video is edited from clips of video files… and an AI-generated video featuring Marik Fedirko in front of the Uzhhorod City Council. As of now, it has about 4,500 views,” Olena Mudra wrote on her Facebook page. In a comment to Women in Media, she noted that these attacks are part of a coordinated discreditation campaign related to her professional work.

Journalist Iryna Vedernikova of Dzerkalo Tyzhnia also faced attempts at professional discreditation. In July 2025, the Telegram channel “Joker” published a post containing claims that discredited the journalist and the outlet, as well as accusations involving her husband. The channel later posted a series of publications using derogatory language directed at Vedernikova and the editorial team.

These posts were accompanied by AI-generated illustrations with zoomorphic features, including a “pig snout,” a form of visual humiliation. In this way, the attack simultaneously targeted the journalist, the media outlet she represents, and her family. 

The cases reviewed demonstrate a systematic tool of professional discreditation that undermines trust in individual women media professionals, newsrooms, and Ukraine’s media environment as a whole.

Stealing a trusted persona

Another popular 1+1 host who is frequently targeted by online attacks is Solomiia Vitvitska. The study "Artificial intelligence and TikTok: how famous female journalists are being systematically used for undisclosed AI generated content," conducted by Texty.org.ua journalists in partnership with Women in Media, found 7 video generations and 83 audio manipulations featuring Vitvitska in an analysis of TikTok content.

For example, one fake news report depicts what appears to be Solomiia Vitvitska, live in the 1+1 studio, promoting "miracle cures": products that, as the fake claims, "doctors keep quiet about and you can't find in pharmacies."

"Scammers keep mastering AI. Now they're generating stories. Just a snippet of stolen TSN footage with Solomiia Vitvitska, and after that, it's pure AI. They could at least have kept the cube consistent," media lawyer Ihor Rozkladai wrote on Facebook, drawing attention to the video.

To create such segments, perpetrators may use a fragment of a real broadcast featuring Vitvitska, overlay an AI-generated audio track with the desired message, and edit the clip together with other footage. Such manipulations are often noticeable due to audio-video desynchronization. Alternatively, they generate an artificial “Solomiia Vitvitska” — a person who visually resembles her. 

“Sometimes hate and harassment online hits harder than any news,” Solomiia Vitvitska previously noted in a comment to Women in Media. In the fall, she became one of the participants in a program of individual retreats for women media professionals who face online violence connected to their work. 

Similar TikTok videos also feature other well-known hosts, including from 1+1. For example, in May 2024, such an online attack targeted Marichka Padalko. A video using her likeness spread disinformation about supposed payments of UAH 2,000 to pensioners, allegedly available by filling out a form via a link in the TikTok account's bio. Women in Media also documented a similar attack targeting host Oksana Guttsayt: a video on the TikTok account dobrodar_media allegedly shows her promoting "payments for internally displaced persons."

The goal of such actions may be to exploit familiar TV faces that carry authority and audience trust, along with the brand of a well-known channel, to spread false narratives or pursue fraudulent schemes. A call to fill out an unknown form may be an attempt to illegally collect personal data. And because the appeal is presented as coming from Marichka Padalko or Oksana Guttsayt, it can also harm their professional reputations. 

Stoking public anger 

In the Ukrainska Pravda article “The Last Days of Social Media: How Artificial Intelligence Is Breaking the Internet,” published on January 8, 2026, the author notes that the Oxford Dictionary selected rage bait as one of 2025’s “words of the year” — a term for content deliberately designed to provoke anger and outrage in order to drive traffic and engagement.

"The AI factor has sharply amplified this effect. As Angele Christin, a researcher at the Stanford Institute for Human-Centered AI, explains, neural networks have made the production of provocative content cheap, fast, and scalable. If outrage once had to be carefully engineered, now it's enough to generate an absurd or offensive visual," the piece states.

Women in Media also observes this trend when analyzing AI-enabled online attacks against women journalists. Provoking anger and outrage is indeed one of the aims of such attacks; the ultimate goal is likely to destabilize the country's socio-political climate. These videos often focus on issues that are sensitive for audiences and may be perceived as controversial, such as mobilization, exploited through the popular false narrative "why don't lawmakers' children fight?" One such video again features the host Solomiia Vitvitska. In a segment posted on the TikTok channel ukrnews_today, she supposedly urges viewers to sign a petition that would require members of parliament to go to the front. The video even carries a large caption, "MPs to the front?", to draw additional attention.

The creators of another ukrnews_today segment also exploit the topic of mobilization. It uses a fragment from a TSN broadcast on 1+1 featuring host Yuliia Borysko. She calls on viewers to follow a link in the TikTok page’s bio and sign a petition allegedly initiated by former Commander-in-Chief of the Armed Forces of Ukraine Valerii Zaluzhnyi against mobilizing young men aged 18–25. In this clip, the video and audio are out of sync, and the voice does not resemble Yuliia Borysko’s real voice, indicating the use of AI. 

In these attacks, several components appear at once: another reminder of the contentious mobilization issue, prompting viewers to follow an anonymous and potentially fraudulent link, and attempts to undermine the hosts’ professional reputations. 

Attempts to destabilize the situation may be carried out not only by stealing the likeness of real women journalists, but also by creating generalized anchor personas. For example, in a video posted on the channel 0001.kkkk, a host holding a blue microphone with Ukraine’s coat of arms asks an unknown man in a suit why his children are not going to fight. Her interlocutor is presented as a member of parliament, but his image is also generalized. They appear to be speaking in a setting that resembles a government or parliamentary building. In reality, however, it is unclear who the host is or who the man is. The microphone with a trident-style logo is also a generic invention — there is no Ukrainian media outlet that uses such an official emblem as a logo. 

Another unknown journalist speaks with an unknown man in a suit, addressing him as “Mr. MP”, in a TikTok video on the channel zsy.2026. She also asks where his sons are and why they are not on the front. This time, however, the caption under the video says it was created using artificial intelligence. Despite that, the post has six comments praising the “journalist.” This example shows how realistic such segments can seem to those who do not look closely at details or try to understand what they are watching — even when a caption explicitly states that AI was used. 

Another example is an AI-enabled attack against journalist Olha Butko carried out in May 2025. Videos edited to make it appear as if host Olha Butko, during a broadcast of the United News telethon, called Russian President Vladimir Putin the “president of Ukraine” appeared on YouTube, TikTok, and the Telegram channel “Rhymes and Punches.” This attack harms not only the journalist’s image and the reputation of the outlet where she works but also trust in state communications more broadly. Given the content of the fake, the choice of distribution platforms, and the systematic use of similar narratives, this case may bear the hallmarks of an information-psychological operation aimed at discrediting Ukraine’s government and public institutions and is likely connected to the interests of Russia as the aggressor state.

Jokes on the edge

Today, artificial intelligence is widely used to create entertainment and satirical content. In a number of cases, however, this practice blurs the line between humor and manipulation, creating additional ethical and information risks. 

One such video was posted on the TikTok channel aleksander.ai and captioned “Baby interview — full version.” It shows a boy in a vyshyvanka sitting in a stroller that is also heavily decorated with embroidery. In the background is a building resembling a traditional Ukrainian rural whitewashed hut. Several different “women journalists” take turns interviewing the boy — these are generalized anchor personas. They have different skin tones and hair, hold microphones with different, unreadable logos, and are all dressed in blue-and-yellow outfits. As the child responds, some of the “hosts” move their lips in sync with him and cover their mouths with a hand in the same way, as if holding back laughter. 

The boy is asked who is in charge in the family — dad or mom — what he knows about friendship, and so on. He gives witty answers and laughs. The creator labels the content as AI-generated. However, the post has 139 comments, including: “What are you teaching the child?”, “Well done, wonderful kid,” “The boy is awesome,” “Amazing! Kids are so wise,” and so forth. This suggests that despite the AI label and the video’s overall unreality, some users still perceive it as genuine. 

Unfortunately, the ethical line is sometimes crossed by journalists themselves when they use AI tools in an attempt to entertain their audience. A recent example is a post on the Facebook page of Dumka Media, where the editorial team used artificial intelligence to “match” Kharkiv officials and politicians with an “ideal partner.” The descriptions included phrases like “a brunette with expressive eyes,” “a young blonde with a pleasant, pretty appearance,” and “a delicate, elegant blonde.” 

Distributing this kind of content through media outlets is problematic for several reasons. First, there is a significant risk that these images could later be reused on third-party platforms without any disclosure that they were AI-generated, presented as if they were real photographs for manipulative purposes, thereby misleading readers. Second, creating and sharing such female “ideal partner” images with these captions is a classic example of sexism, where a woman is used as “decoration” for a successful male official. 

In addition, the availability of generative AI tools lowers the barrier to creating fakes, making it possible to rapidly produce and widely distribute content whose debunking requires substantial resources. AI-enabled attacks increase pressure on women journalists and raise the risk that online violence will spill into offline harm.

“Coordinated dissemination of AI-generated content on socially sensitive topics creates conditions for manipulating public sentiment and can be used as part of broader information and psychological operations against Ukraine,” says Liza Kuzmenko, Head of the Women in Media NGO and a member of the Commission on Journalistic Ethics.

We also note that Women in Media has produced a guide titled “Steps for Newsrooms to Take in the First 24 Hours Following an Online Attack against a Woman Journalist.” An online attack can have a devastating impact on a woman journalist’s psychological well-being, physical safety, professional reputation, and willingness to continue working. The editorial team’s actions in the first 24 hours are critically important for stabilizing the situation, preventing escalation, and restoring trust.

This document has been produced with the financial support of the European Union, within the framework of RSF’s action in Ukraine, as part of the project ‘Not Artificial Threats: Tackling AI-Facilitated Violence Against Women Journalists in Ukraine’, implemented by the NGO Women in Media. The contents of this document are the sole responsibility of the NGO Women in Media and can under no circumstances be regarded as reflecting the position of either RSF or the European Union.