
Artificial intelligence (AI) is changing the way we create and disseminate information. Alongside its benefits, the technology poses major new risks, particularly for women journalists. In Ukraine, these dangers are exacerbated by the ongoing war, high levels of digitization, and existing patterns of gender-based online violence.
Women journalists face an alarming combination of technology-facilitated violence: AI-generated deepfakes, cloned voices, fake profiles, doxxing (the leaking of personal information), and coordinated online harassment campaigns. Such tactics damage reputations, encourage self-censorship, or even force targets to abandon the profession, while also harming their mental wellbeing. Despite growing global awareness of these threats, there is almost no systematic data on how they manifest in Ukraine.
This study is one of the first attempts in Ukraine to assess the gendered dimension of media work during wartime, compounded by new technological challenges. We collected testimonials from 119 Ukrainian women journalists. A dedicated section describes five representative cases illustrating how AI is used to discredit and persecute women journalists; while not exhaustive, these examples demonstrate key patterns.
We are convinced that the results of this study will make new threats to Ukrainian women media workers more visible and serve as the basis for support and security programs aimed at overcoming both gender inequality and technology-facilitated violence.
Key Findings
- Ukrainian women media workers today operate amid rapid digitalization and, at the same time, growing risks of gender-based violence, which technology, in particular artificial intelligence, combined with the ongoing war and social vulnerabilities, has made more complex and multidimensional.
- Cases of AI being used to organize attacks against women journalists are not yet widespread, but they already occur, and as the technology advances, these threats could grow exponentially.
- Among the 119 women journalists surveyed, one in fifteen (7%) had already encountered AI-generated online attacks, and a further 16% had observed such attacks against their colleagues. The survey suggests that AI-based online attacks mainly target journalists and media executives, the most public faces of a media outlet's work.
- Interestingly, 43% of respondents indicated that they see AI-generated content every week, which points to its widespread presence in the media environment. At the same time, only 7% reported personal experience with AI-enhanced attacks. This may mean that the scale of such attacks remains limited so far, but also that some respondents do not recognize them as AI-facilitated. The 14% who answered "not sure" about personal experience with such attacks may reflect uncertainty or subjectivity in self-assessment, which should be considered when interpreting the data.
- The study found that over 70% of the attacks encountered by respondents or their colleagues took place on Facebook. The authors believe this is because Facebook remains the most common platform for professional communication among journalists in Ukraine, which is why smear campaigns and attempts at pressure also tend to occur there. Their intensity is likely due to Meta not yet sufficiently accounting for the Ukrainian context: local moderation and protection of users, in particular women journalists, require further work.
- Although 61% of respondents have undergone training on the use of AI, only 21% of newsrooms have official policies on its use. This points to an imbalance between individual initiative and institutional support at the newsroom level. At the same time, 35% noted that an AI policy is "being discussed," so rapid growth in the formalization of practices can be expected in the near future.
- The survey results show that editorial AI policies focus mostly on ethical content creation and informing the audience, but overlook the potential risks and threats to media workers themselves, especially women, who constitute a more vulnerable group.
- Notably, over half the participants (58%) indicated that training programs on AI did not include any component related to online violence or gendered risks posed by technology.
- More than half of the respondents (53%) who underwent training did so through donor or NGO initiatives rather than their newsrooms. This confirms the role of NGOs as the main source of professional development and security training for women in Ukrainian media.
- The most common types of AI attacks are gendered disinformation, impersonation, meme campaigns, and “swarm” attacks (dogpiling). These forms have a distinct gendered and psychological nature.
- All respondents who had experienced attacks reported stress and increased anxiety; some reported self-censorship and temporary breaks from work. Thus, online violence directly impacts freedom of speech and women’s participation in journalism.
- In cases of attacks, women journalists typically turned to colleagues, newsrooms, or friends rather than to the police or the platforms. Many respondents expressed a sense of defenselessness in the face of AI-facilitated attacks; many stated outright that nothing could currently help, or that "you cannot protect yourself against it." Such responses may stem from a sense of limited control over the situation and a certain distrust of existing response mechanisms. Given the experiences described by respondents, it can be assumed that law enforcement agencies still need additional support to better understand the gravity of such threats and to improve response procedures.
- Among the expected forms of assistance, the most frequently mentioned were legal support, rapid response from platforms, technical tools for detecting AI content, and protection by the police (cyber police). In answers to open questions, women journalists mostly emphasized the need to develop technical skills (recognizing deepfakes, understanding how AI works), to introduce accountability mechanisms for AI-facilitated attacks, and to strengthen responses by the police and digital platforms. This indicates a clear demand for systemic rather than individual solutions for women's security in the media sector.
Although Ukraine is actively working on a regulatory framework, no legislative regulation of AI is currently in place. The legal status of the technology, standards for its ethical use, and liability for violations remain unclear or advisory in nature. This poses risks for both users and the state, especially in the context of protection against technology-facilitated gender-based violence (TFGBV).