Vietnam Artificial Intelligence Law 2025 – Impacts and compliance roadmap for businesses

1. Context and key concepts of the Artificial Intelligence Law 2025

On 10 December 2025, the National Assembly of Vietnam officially passed the Artificial Intelligence Law (AI Law), the first comprehensive legislation in Vietnam concerning this field. With an effective date of 1 March 2026, this Law is not merely the establishment of a legal framework but a strategic declaration, affirming Vietnam’s commitment to mastering technology and building digital sovereignty. This is a turning point, positioning AI as one of three strategic pillars—alongside data and digital infrastructure—to promote innovation, enhance national competitiveness, and ensure technological self-reliance in the new era.  

To effectively interpret the Law’s provisions, a firm grasp of the core legal definitions in Article 3 is essential. 

  • Artificial Intelligence System: a machine-based system designed to operate with varying degrees of autonomy and capable of adaptation after deployment; based on explicitly defined or implicitly formed objectives, it infers from input data how to generate outputs such as predictions, content, recommendations, or decisions that may affect the physical or virtual environment. 
  • Developer: an organization or individual that designs, builds, trains, tests, or fine-tunes all or part of an AI model, algorithm, or system, and has direct control over the technical methodology, training data, or model parameters. 
  • Provider: an organization or individual that places an AI system on the market or puts it into service under its own name, trade name, or trademark, regardless of whether the system was developed by them or by a third party. 
  • Deployer: an organization, individual, or state agency that uses an AI system under its control for professional activities, commercial purposes, or service provision, excluding use for personal, non-commercial purposes. 
  • User: an organization or individual that directly interacts with the AI system or uses the output generated by that system. 

2. Core operating principles and prohibited acts

The AI Law establishes a set of legal “boundaries” to ensure that all AI activities in Vietnam adhere to core values. Understanding and integrating these principles into business strategy is not only a compliance requirement but also the foundation for building a responsible and sustainable AI culture within the enterprise. 

2.1. Fundamental principles 

Article 4 of the Law stipulates four fundamental principles that guide all AI-related activities: 

(i) Human-Centric Approach: This principle affirms that AI is a tool to serve humanity and must ensure human rights, privacy, national security, and legal compliance. It requires businesses to place the protection of people at the center of AI design and deployment, ensuring respect for customers, employees, and the community in every application. 

(ii) AI does not replace human authority and responsibility: Humans must always retain ultimate control and the ability to intervene in decisions made by AI. Business strategy must integrate a “human-in-the-loop” mechanism, especially for critical decisions, to ensure that accountability is not transferred to machines. 

(iii) Fairness, Transparency, and Non-Discrimination: Organizations are responsible for proactively detecting and preventing bias and discrimination in AI models. If not strictly managed, algorithms can reproduce or amplify existing prejudices in training data, leading to unfair decisions, reputational damage, and potential legal risks. 

(iv) Sustainable Development: The Law encourages the development of AI in a “green” direction, conserving energy and minimizing negative environmental impacts. This is a crucial factor that businesses should consider integrating into their Environmental, Social, and Governance (ESG) strategy. 

2.2. Prohibited acts 

Article 7 of the Law lists a catalogue of absolutely prohibited acts to safeguard social safety and the rights and interests of individuals and organizations: 

(i) Taking advantage of or appropriating an AI system to infringe upon the legitimate rights and interests of organizations or individuals. 

(ii) Developing or using AI for the purpose of deception, manipulation of perception, or causing serious harm to people and society (e.g., creating deepfakes for fraud or spreading public panic). 

(iii) Taking advantage of the vulnerabilities of at-risk groups such as children, the elderly, or people with disabilities. 

(iv) Collecting or using data to train or operate AI in contravention of legal regulations on personal data protection, intellectual property, and cybersecurity. 

(v) Obstructing human control mechanisms over AI systems or concealing information that is subject to mandatory transparency requirements. 

These macro principles and prohibited acts are formalized into a practical legal framework, classifying obligations based on the level of risk that an AI system may pose. 

3. Analysis of the risk-based regulatory framework

The central pillar of the AI Law is its risk-based classification model, an approach inspired by leading international legislation such as the European Union’s Artificial Intelligence Act. The model rests on the view that a business’s compliance obligations should be proportional to the potential risk its AI system poses to individuals and society. This allows regulatory authorities to concentrate supervisory resources on the areas of greatest impact while leaving flexible room for low-risk innovation. A first-pass screening sketch in code follows the classification summary below. 

  • High risk: AI systems that may cause significant damage to life, health, the legitimate rights and interests of organizations and individuals, the national interest, the public interest, or national security. Typical scope: systems in essential sectors such as healthcare, education, finance, justice, and critical infrastructure. 
  • Medium risk: AI systems that have the potential to mislead, influence, or manipulate users because the users do not recognize that they are interacting with an AI system or that the content is AI-generated. Typical scope: chatbots, product recommendation systems on e-commerce platforms, marketing content generation tools, and other applications that may affect user perception. 
  • Low risk: AI systems that do not fall into the above two categories. Typical scope: internal support tools such as spam filters, inventory optimization tools, or applications with minor impact that do not affect critical human decisions. 
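
To make the screening step concrete, the sketch below (in Python) shows one hypothetical first pass a compliance team might run over its systems. The two boolean criteria merely paraphrase the summary above; they are our simplification, not an official legal test, and any real classification must be confirmed by counsel.

```python
from enum import Enum

class RiskLevel(Enum):
    """The AI Law's three risk tiers, as summarized above."""
    HIGH = "high"      # significant potential damage to life, health, rights, or national/public interest
    MEDIUM = "medium"  # may mislead users who do not realize they face an AI system or AI-generated content
    LOW = "low"        # everything else

def preliminary_risk_level(affects_essential_sector: bool,
                           user_facing_interaction_or_generation: bool) -> RiskLevel:
    """Illustrative first-pass screening only; legal review must confirm the final class.

    affects_essential_sector: healthcare, education, finance, justice,
        critical infrastructure (the high-risk scope above).
    user_facing_interaction_or_generation: chatbots, recommenders,
        generated marketing content (the medium-risk scope above).
    """
    if affects_essential_sector:
        return RiskLevel.HIGH
    if user_facing_interaction_or_generation:
        return RiskLevel.MEDIUM
    return RiskLevel.LOW

# Example: an internal spam filter is neither essential-sector nor user-facing generative.
assert preliminary_risk_level(False, False) is RiskLevel.LOW
```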

3.1. Strict management obligations for high-risk AI systems 

For systems classified as high-risk, both the provider and the deployer must comply with a series of strict management obligations before and throughout the operation process: 

  • Conformity assessment: Before being placed on the market, the system must undergo a conformity assessment against the technical standards and regulations issued by the competent state authority. Certain systems on a list stipulated by the Prime Minister must additionally be certified by a conformity assessment body before operation. 
  • Risk management and data governance: A risk management mechanism must be established and maintained throughout the system’s lifecycle. Concurrently, there must be a rigorous governance process for the data used for training, testing, and validating the model to ensure quality and minimize bias. 
  • Record-keeping and activity logging: Detailed technical documentation for the system must be stored and automatic activity logs maintained. These records must be readily available to competent authorities upon request for supervision or subsequent audit. 
  • Human oversight and intervention: The system must be designed to ensure effective human supervision, intervention, and control; a human must be able to disable or temporarily halt the system when a risk is detected. A minimal sketch combining activity logging with such a kill switch follows this list. 
  • Requirements for foreign providers: Foreign providers placing high-risk AI systems on the Vietnamese market must have a lawful contact point in Vietnam. For systems subject to mandatory certification, the foreign provider must have a commercial presence or an authorized representative in Vietnam. 
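
The record-keeping and oversight duties above map naturally onto a simple technical pattern. The Python sketch below is one hypothetical way to combine an append-only activity log with a human-operated kill switch; the log format, field names, and system name are our assumptions, since the actual technical standards will come from implementing decrees.

```python
import json
import logging
import threading
from datetime import datetime, timezone

# Append-only JSON-lines activity log; this schema is an assumption,
# not the format the implementing decrees will eventually prescribe.
logging.basicConfig(filename="ai_activity.log", level=logging.INFO, format="%(message)s")

class HighRiskAISystem:
    """Illustrative wrapper: logs every inference and lets a human halt the system."""

    def __init__(self, model_id: str):
        self.model_id = model_id
        self._halted = threading.Event()  # set by a human operator to stop the system

    def halt(self, operator: str, reason: str) -> None:
        """Human intervention point: disable the system when a risk is detected."""
        self._halted.set()
        self._log("halt", {"operator": operator, "reason": reason})

    def predict(self, request_id: str, features: dict) -> dict:
        if self._halted.is_set():
            raise RuntimeError("System halted by human operator; inference refused.")
        output = {"decision": "refer to human review"}  # placeholder for the real model call
        self._log("inference", {"request_id": request_id, "output": output})
        return output

    def _log(self, event: str, payload: dict) -> None:
        logging.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "model_id": self.model_id,
            "event": event,
            **payload,
        }))

system = HighRiskAISystem("credit-scoring-v1")  # hypothetical system name
system.predict("req-001", {"income": 1200})
system.halt(operator="risk-officer", reason="bias alert from monitoring")
```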

3.2. Requirements for medium and low-risk systems 

Compared to high-risk AI systems, the obligations for these two groups are significantly simpler: 

  • Medium-Risk Systems: The principal obligation is transparency. Users must be informed that they are interacting with an AI system or that the content they encounter is AI-generated; an illustrative disclosure snippet follows this list. Furthermore, the provider and deployer must answer to competent state agencies upon request. 
  • Low-Risk Systems: Compliance obligations are minimal, primarily focusing on general accountability when requested. 
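
As a small illustration of the medium-risk transparency duty, the snippet below prepends an AI disclosure to a chatbot’s first reply. The disclosure wording and the `generate_answer` stub are ours for the sketch; real wording should be approved by counsel.

```python
AI_DISCLOSURE = "You are chatting with an AI system, not a human agent."  # illustrative wording

def generate_answer(user_message: str) -> str:
    """Stub standing in for the real model call."""
    return "Thank you for your question. ..."

def reply_with_disclosure(user_message: str, first_turn: bool) -> str:
    """Ensure the user is informed they are interacting with AI (Article 11 transparency)."""
    answer = generate_answer(user_message)
    return f"{AI_DISCLOSURE}\n\n{answer}" if first_turn else answer

print(reply_with_disclosure("What is my balance?", first_turn=True))
```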

These risk-based obligations are applied flexibly to each relevant party across the AI lifecycle, thereby forming a clear chain of responsibility from development to deployment. 

4. Responsibility along the AI value chain: From Developer to Deployer

The AI Law adopts a role-based liability model, a strategic approach to clearly and reasonably allocate legal obligations. Accordingly, a company’s responsibility depends not only on the risk level of the AI system but also on its specific function in the value chain: whether it is a Developer (design, training), a Provider (placing on the market), or a Deployer (using in professional activities). 

4.1. Transparency and labelling 

One of the most important and cross-cutting requirements of the Law is transparency, stipulated in Article 11. This requirement aims to ensure that users have sufficient information to make decisions and to protect them from manipulation or confusion. 

  • Recognition of interaction with AI: The provider and deployer must ensure that users recognize when they are interacting with an AI system, as opposed to a human. 
  • Labelling of AI-Generated content: This is a breakthrough regulation against fake news and deepfakes. All audio, image, and video content generated by an AI system must carry a machine-readable identifier, that is, a technical marker that allows other platforms and tools to automatically identify and process the content’s origin, supporting large-scale filtering and authentication. Where such content is provided to the public and is likely to cause confusion, the deployer must also clearly notify users and label the content to distinguish it from authentic content. One hedged technical approach is sketched after this list. 
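
Pending the technical regulations that will define the official identifier format, one plausible shape is a small provenance record attached alongside the generated file, in the spirit of industry efforts such as C2PA. The sketch below writes an illustrative JSON sidecar; every field name here is our assumption, not the official schema.

```python
import json
from pathlib import Path

def write_provenance_sidecar(media_path: str, model_id: str) -> Path:
    """Attach an illustrative machine-readable 'AI-generated' record to a media file.

    The schema is an assumption; the official identifier format will be set
    by the implementing decrees and technical regulations.
    """
    record = {
        "ai_generated": True,
        "generator_model": model_id,
        "label": "AI-generated content",  # human-readable counterpart of the flag
    }
    sidecar = Path(media_path + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2), encoding="utf-8")
    return sidecar

# Downstream platforms could read the sidecar to filter or flag AI content at scale.
print(write_provenance_sidecar("campaign_video.mp4", "gen-video-v2"))  # hypothetical file and model
```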

4.2. Incident reporting and liability for damages 

The Law establishes a clear mechanism for managing incidents and determining liability when damages occur. 

  • Incident reporting: When a serious incident occurs, the relevant parties (developer, provider, deployer) must promptly remedy the situation and report it to the competent authority through the Single E-Portal on Artificial Intelligence (Article 12). This mechanism helps the regulatory body monitor the situation, issue warnings, and request necessary measures such as system suspension or recall. 
  • Liability for damages: For high-risk AI systems, the deployer is the principal party liable for compensating the injured person, even if the system was operated in compliance with regulations. However, the Law also establishes a mechanism allowing the deployer to seek reimbursement of this compensation from the provider, developer, or other relevant parties if there is a prior agreement between them. This model creates a strategic risk shift, requiring deployers to conduct rigorous due diligence on their AI providers. The selection and procurement of an AI system is now not just an IT decision, but a critical risk management function requiring close supervision from both the legal and technical departments. 

Understanding these legal obligations is the first step. The next and more crucial step is to build a specific action roadmap to bring the organization into a state of full compliance. 

5. Strategic compliance roadmap for businesses 

Phase 1: Immediate action (before 1 March 2026) 

This is the foundational preparation phase. The following actions should be prioritized before the Law takes effect: 

  • Establish an AI compliance team: Build a cross-functional team (including legal, technology, risk management, and business) responsible for monitoring, planning, and implementing compliance with the Law throughout the organization. 
  • Inventory and map AI systems: Review, identify, and create a detailed inventory of all AI systems currently being used, developed, or procured across the organization, recording the purpose, data sources, provider, and scope of impact for each system. A minimal record structure is sketched after this list. 
  • Conduct preliminary risk classification: Based on the inventory, screen and classify AI systems into high, medium, or low-risk groups according to the Law’s definitions. This step helps determine the scope and priority level for compliance efforts. 
  • Review data governance processes: Evaluate current data governance policies and practices, especially for data used to train AI. Ensure these processes comply with both the AI Law and regulations on personal data protection. 
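
For the inventory step, a lightweight structured record per system is usually enough to start. The Python sketch below mirrors the fields named in the inventory bullet above; the example entry is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the internal AI inventory; fields mirror the bullet above."""
    name: str
    purpose: str
    data_sources: list[str]
    provider: str
    scope_of_impact: str
    risk_level: str = "unclassified"  # filled in during preliminary classification

inventory = [
    AISystemRecord(
        name="support-chatbot",                   # hypothetical example entry
        purpose="first-line customer support",
        data_sources=["FAQ corpus", "chat transcripts"],
        provider="third-party SaaS vendor",
        scope_of_impact="external customers",
    ),
]
print(f"{len(inventory)} AI system(s) inventoried")
```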

Phase 2: Transition period roadmap (from 1 March 2026) 

Article 35 of the Law stipulates a transition period that gives AI systems already in operation before the effective date time to adapt. Businesses should note the following critical deadlines: 

  • Before 1 March 2027 (12 months): Deadline for general AI systems to complete their compliance obligations. 
  • Before 1 September 2027 (18 months): Deadline for AI systems in sensitive sectors such as healthcare, education, and finance. 

The core actions in this phase include: 

  • Finalize classification and notification: For systems identified as medium or high-risk, businesses must complete the classification documentation and submit a formal notification to the Ministry of Science and Technology via the Single E-Portal on Artificial Intelligence. 
  • Implement management plan for high-risk systems: Execute necessary steps such as conducting conformity assessment (self-assessment or through a certifying body), establishing human oversight and intervention procedures, and building a system for technical documentation and activity log storage as required by the Law. 
  • Update transparency and labelling mechanisms: Deploy necessary technical solutions to apply machine-readable identifiers to AI-generated content. Update user interfaces and communication channels to clearly notify users when they are interacting with an AI system. 

Phase 3: Sustainable governance and opportunity exploitation 

Once fundamental compliance has been achieved, businesses should transition to sustainable governance and leverage the strategic opportunities presented by the Law: 

  • Develop an internal AI governance framework: Integrate the Law’s principles into the company’s policies, procedures, and culture. Establish an AI ethics council or committee to oversee new projects and ensure the responsible development of AI. 
  • Monitor guiding decrees and technical regulations: The AI Law is an umbrella law. Detailed regulations will be issued through subsequent decrees and technical standards. Businesses must proactively monitor and update their processes to ensure continuous compliance. 
  • Evaluate opportunities from incentive policies: Analyze the possibility of accessing State incentive policies, including tax, credit, and investment incentives. Proactively explore opportunities to participate in controlled testing mechanisms (sandboxes) for new technologies and access funding from the National Artificial Intelligence Development Fund. 

The AI Law 2025 is both a compliance challenge and a strategic opportunity for pioneering businesses to build trust and assert a leading position. Compliance is not the destination, but the foundation. The true opportunity lies in leveraging this legal framework to build Trustworthy AI as a core competitive advantage, not only in the domestic market but also on the international stage. By adopting a proactive approach, leaders will transform legal obligation into a driving force for innovation and sustainable growth. 

Date written: 20/12/2025


Disclaimers:

This article is for general information purposes only and is not intended to provide any legal advice for any particular case. The legal provisions referenced in the content are in effect at the time of publication but may have expired at the time you read the content. We therefore advise that you always consult a professional consultant before applying any content.

For issues related to the content or intellectual property rights of the article, please email cs@apolatlegal.vn.

Apolat Legal is a law firm in Vietnam with the experience and capacity to provide consulting services related to Technology. Contact our team of lawyers in Vietnam via email at info@apolatlegal.com.
