
International and Ukrainian Standards for Regulating AI

02.02.2026

Over the past five years, numerous documents have attempted to regulate AI. For Ukraine, the most relevant are the regulatory and advisory acts developed by the UN, the Council of Europe, and EU institutions. At the same time, it should be borne in mind that international organizations, which unite a broad community of states, tend to develop more general standards (mostly recommendatory in nature) so as to accommodate different national contexts.

What specific regulations are in place today, and how do they affect efforts to combat gender-based violence?

Recommendation on the Ethics of Artificial Intelligence (UNESCO). This document sets out principles and values, primarily addressed to the government sector, which countries should incorporate into national practice when developing regulatory approaches. Notably, gender is singled out as a separate area requiring special standards and attention to contexts and vulnerabilities. Among the key recommendations in this regard are the requirement to filter out gender stereotypes when creating AI systems [§89-90] and to increase the representation of women in AI-related areas: entrepreneurship [§91] and academia [§92].

Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Framework Convention on AI). The document covers the regulation of AI technology and the overall assessment of its impact on human rights; one of its mandatory elements is verifying an AI system's compliance with the principles of non-discrimination, including gender equality (Article 10). It is important to note that the Framework Convention has been repeatedly criticized for lowering standards in order to attract more signatories. As a result, its text and the mechanisms it proposes are phrased in highly generic terms, leaving national governments significant discretion in determining how to implement it.

EU Artificial Intelligence Act (AI Act). The document offers a number of fairly detailed norms aimed at creating a safe environment for the development and application of AI technologies. In particular, it categorizes systems by risk level, prohibiting AI aimed at manipulating human behavior, social scoring, and certain forms of biometric identification. In addition, the Act defines the notion of a deepfake (Article 3(60)) and obliges providers to ensure that AI-generated audio, images, video, and text are appropriately labelled in a machine-readable format and clearly identified as having been generated or manipulated (Article 50(2)). This obligation can serve as a powerful tool against gender-based violence by helping to prevent non-consensual intimate imagery ("revenge porn") and other forms of gendered disinformation targeting women. In addition, Article 95 indicates that voluntary codes of conduct in the field of AI should also cover the assessment and prevention of risks to vulnerable groups, respecting the principle of gender equality.
 
Since these tools were developed within frameworks to which Ukraine already has, or plans to gain, access, their implementation is an important step toward establishing our state as a strong player in the field of artificial intelligence. With that in mind, let us consider the steps Ukraine has already taken toward regulating the technology, particularly with regard to combating gender-based violence.
 
The Ukrainian Perspective on AI Regulation
 
Although Ukraine is actively working on developing a regulatory framework, no legislation regulating AI is currently in place. Questions of the legal status of these technologies, standards of ethical use, and liability for violations remain unresolved or merely advisory in nature. This poses risks for both users and the state, especially in the context of protection against technology-facilitated gender-based violence (TFGBV).
 
At the international level, Ukraine signed the Council of Europe Framework Convention in May 2025 and now plans to begin the process of ratification and the development of implementing by-laws.
 
Ukraine also became one of six countries to pilot-test the HUDERIA methodology, which helps prepare Ukrainian businesses for the implementation of AI regulation. One element of assessment within this methodology is the implementation of non-discrimination principles and communication with key stakeholders who have expertise in equality. When the Framework Convention becomes fully operational in Ukraine and human rights impact assessments become mandatory for all public-sector AI systems, it will be important for civil society to be involved in the consultation processes.
 
In addition, as part of its European integration obligations, Ukraine must implement the AI Act. Although implementation is only beginning, it is important to ensure that the process is holistic and takes into account all contextual characteristics, as well as risks and threats. To date, the Ministry of Digital Transformation has developed a White Paper, an action plan for regulating AI in Ukraine, and is also working on an AI Strategy, a document that will set the priorities for the development of this area. According to the White Paper, legislative regulation of the AI sector should be developed by 2027.
 
In the context of protecting women journalists from gender-based online violence, it is worth highlighting several advisory documents that formalize the rules of ethical and responsible use of AI in various sectors:
 
Recommendations for the responsible use of AI in the media sector. The Recommendations point out the need to adhere to non-discrimination principles; however, they contain no specialized provisions on combating TFGBV. At the same time, the document includes numerous provisions on clear labeling and the need for content verification, which is highly important in combating deepfakes, gendered disinformation, and similar threats. In the future, the document should be updated to give more attention to gender issues.

Recommendations for the responsible use of AI in the advertising and marketing communications sector. This document contains considerably more provisions on gender equality, referring to the prohibition of discrimination in general, and gender-based discrimination in particular, in the advertising sector. The recommendations thus emphasize not only formal equality but also the need to ensure equal access to technology and women's participation in decision-making processes in practice.

Recommendations for the responsible development of systems using AI technologies. Although the document does not focus on gender issues and is generally more technical, it offers a number of measures to ensure equality when creating AI tools: consultation with stakeholders, filtering out biases when compiling datasets, and feedback mechanisms to address discrimination identified during the use of such systems.
 
Despite the availability of this guidance and a dedicated Glossary, these documents do not include concepts related to technology-facilitated gender-based violence (TFGBV). This indicates both the insufficient involvement of women's and human rights organizations in developing AI policies and standards, and the lack of integration of a gender-focused approach into national documents on the ethics and regulation of artificial intelligence.
 
Finally, Ukraine has launched a regulatory sandbox for AI projects, a space where developers can test their products for compliance with standards. Projects of public importance will be the first to gain access to this tool. In addition, Ukraine plans to create its own large language model (LLM). It is important to ensure that both initiatives take into account technology-facilitated gender-based violence and uphold the principles of equality and non-discrimination.

This section is part of the study When AI Turns Hostile: Gendered Threats Against Ukrainian Women Journalists
