How the Vietnam AI Law Reporter implements Article-accurate compliance tracking, risk classification, and regulatory reporting.
Under Article 12 of Law No. 134/2025/QH15, providers must self-classify their AI systems before deployment. The Vietnam AI Law Reporter implements a guided classification engine that walks your team through this process step by step, ensuring every criterion is evaluated against the law's exact requirements.
The classification engine evaluates your system against three risk tiers (High, Medium, Low) using a structured scoring methodology. Each question in the guided flow maps directly to a specific article and clause, so your classification decision is traceable and defensible.
Think of the classification engine like a structured interview. Instead of reading the entire law and interpreting it yourself, the tool asks targeted questions, scores the answers, and produces a documented classification with full article citations for every factor considered.
Each AI system is evaluated against defined criteria for High, Medium, and Low risk. The scoring methodology produces a weighted result based on sector, purpose, autonomy level, and impact scope as defined in Articles 12-14.
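The weighted scoring described above can be sketched as a simple weighted sum over the four factors. The weights, factor scores, and tier thresholds below are illustrative assumptions for demonstration only, not values taken from the law or from the tool itself.

```python
# Illustrative sketch of the tiered risk-scoring methodology.
# Weights and thresholds are assumptions; the real values would be
# derived from Articles 12-14 and their implementing decrees.

FACTOR_WEIGHTS = {"sector": 0.35, "purpose": 0.25, "autonomy": 0.2, "impact": 0.2}

def classify_risk(factor_scores: dict[str, float]) -> str:
    """Combine per-factor scores (0.0-1.0) into a weighted total, map to a tier."""
    total = sum(FACTOR_WEIGHTS[f] * factor_scores[f] for f in FACTOR_WEIGHTS)
    if total >= 0.7:
        return "High"
    if total >= 0.4:
        return "Medium"
    return "Low"

# A recruitment system with high sector risk and broad impact scope:
tier = classify_risk({"sector": 1.0, "purpose": 0.8, "autonomy": 0.5, "impact": 0.9})
```

Because every question in the guided flow maps to a specific clause, each factor score in a real classification would carry its article citation alongside the numeric value.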
Healthcare, finance, education, transport, recruitment, justice, and energy are designated high-risk sectors. Any AI system operating in these domains triggers elevated classification and mandatory conformity assessment under Article 13.
Before risk scoring begins, the engine screens for prohibited practices under Article 8. Systems that match prohibited use patterns (social scoring, subliminal manipulation, vulnerable group exploitation) are flagged immediately and cannot proceed to deployment.
Critical: Self-classification is not optional. Article 12 requires providers to classify every AI system before making it available in Vietnam. The classification must be documented, and the provider bears legal responsibility for the accuracy of the classification. Incorrect self-classification can result in penalties under Article 30.
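The Article 8 pre-screen works as a hard gate that runs before any risk scoring. A minimal sketch, using the prohibited-use patterns named above (the pattern identifiers themselves are illustrative labels, not terms from the law):

```python
# Article 8 pre-screen: systems matching any prohibited pattern are
# flagged immediately and never reach risk scoring.

PROHIBITED_PATTERNS = {
    "social_scoring",
    "subliminal_manipulation",
    "vulnerable_group_exploitation",
}

def screen_prohibited(declared_uses: set[str]) -> list[str]:
    """Return the prohibited patterns a system matches (empty = may proceed)."""
    return sorted(declared_uses & PROHIBITED_PATTERNS)

hits = screen_prohibited({"recommendation", "social_scoring"})
# A non-empty result means the system is blocked from deployment.
```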
The law defines five entity roles, each with a distinct set of obligations. The Vietnam AI Law Reporter generates a tailored obligation checklist for each AI system based on the combination of your organization's role and the system's risk tier. Every obligation links back to its source article.
For organizations that hold multiple roles (for example, both developing and deploying an AI system), the tool merges obligation sets and highlights where responsibilities overlap or compound.
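Merging obligation sets for a multi-role organization reduces to a set union, with the overlap being any obligation required by more than one role. The role-to-obligation mapping below is abbreviated and partly assumed for illustration; the tool's real mapping carries full article citations.

```python
# Illustrative merge of obligation sets for an organization that both
# provides and deploys an AI system. Obligation names are abbreviated
# and the mapping is an assumption for demonstration.

ROLE_OBLIGATIONS = {
    "provider": {"self-classification", "conformity assessment",
                 "portal registration", "post-market monitoring"},
    "deployer": {"human oversight", "incident reporting",
                 "impact assessment", "conformity assessment"},
}

def merged_checklist(roles: list[str]) -> tuple[set[str], set[str]]:
    """Union of all obligations, plus the subset shared by multiple roles."""
    sets = [ROLE_OBLIGATIONS[r] for r in roles]
    union = set().union(*sets)
    overlap = {o for o in union if sum(o in s for s in sets) > 1}
    return union, overlap
```

Overlapping items are exactly where the tool highlights compounding responsibility: one activity, obligations from two roles.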
Obligations under Articles 15-16: technical documentation, safety and security testing, training data governance, bias evaluation, and providing sufficient information for downstream providers to perform classification.
Obligations under Articles 12, 17-19: self-classification, conformity assessment, National AI Portal registration, user-facing transparency disclosures, and post-market monitoring.
Obligations under Articles 20, 29: human oversight implementation, incident reporting, strict liability for high-risk systems, impact assessments, and maintaining operational logs for regulatory inspection.
Rights under Articles 22-23: right to know when interacting with AI, right to understand how decisions are made, and access to human review of automated decisions affecting their interests.
Rights under Articles 24-25: right to explanation of AI-driven decisions, right to contest outcomes, access to remedy mechanisms, and the right to lodge complaints with regulatory authorities.
Think of obligation tracking like a project management board. Each obligation is a task with an owner, a deadline, a status, and a regulatory citation. The tool tracks completion percentage per role and per tier, so compliance leadership can see exactly where gaps remain before the March 2027 or September 2027 deadlines.
Deadline awareness: The tool tracks two distinct grace periods. The general 12-month deadline (March 2027) applies to most AI systems. The extended 18-month deadline (September 2027) applies to healthcare, education, and finance sectors. Each obligation in the tracker shows which deadline governs it.
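The deadline logic above can be sketched as a sector lookup plus a completion metric. The exact calendar dates are placeholders for "March 2027" and "September 2027" as stated in the text; the precise dates will depend on implementing decrees.

```python
from datetime import date

# Illustrative deadline selection: the extended 18-month grace period
# applies to healthcare, education, and finance; all other sectors get
# the general 12-month deadline. Dates are placeholder assumptions.

EXTENDED_SECTORS = {"healthcare", "education", "finance"}
GENERAL_DEADLINE = date(2027, 3, 1)   # "March 2027" in the text
EXTENDED_DEADLINE = date(2027, 9, 1)  # "September 2027" in the text

def governing_deadline(sector: str) -> date:
    return EXTENDED_DEADLINE if sector in EXTENDED_SECTORS else GENERAL_DEADLINE

def completion_pct(obligations: list[dict]) -> float:
    """Share of obligations marked done, as the tracking board reports it."""
    done = sum(1 for o in obligations if o["status"] == "done")
    return round(100 * done / len(obligations), 1)
```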
Article 19 requires conformity assessment for AI systems before deployment. The law offers two paths: self-assessment (for most systems) or third-party assessment (for systems on the Prime Minister's designated list). The Vietnam AI Law Reporter supports both paths with structured, checklist-driven workflows.
The conformity assessment module walks your team through each requirement, allows evidence attachment for every checklist item, and generates the assessment report in the format expected by the National AI Portal.
Available for AI systems not on the PM's designated list. The tool provides a structured checklist covering technical documentation, risk management, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity, all mapped to Article 19 sub-requirements.
Required for AI systems on the Prime Minister's designated list (expected via implementing decree). The tool helps you prepare the documentation package that a certified assessment body will need, tracking the same checklist items with evidence requirements.
Conformity assessment is similar to a building inspection. Just as a building must demonstrate it meets fire safety, structural, and electrical codes before occupancy, an AI system must demonstrate it meets risk management, transparency, and safety requirements before deployment. The tool is your inspection checklist and evidence binder.
Every checklist item supports document attachment (test reports, design documents, audit logs). Each piece of evidence is linked to the specific Article 19 sub-requirement it satisfies. When you generate the conformity assessment report, the tool produces a complete evidence matrix showing coverage of all requirements, identifying any gaps that need to be addressed before submission.
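The evidence matrix amounts to mapping each sub-requirement to its attached evidence and reporting any requirement left uncovered. The requirement identifiers below are illustrative labels, not official Article 19 clause numbers.

```python
# Building the evidence matrix: each sub-requirement maps to the evidence
# attached to it; requirements with no evidence are reported as gaps.
# Requirement IDs and file names are illustrative.

def evidence_matrix(requirements: list[str],
                    attachments: dict[str, list[str]]) -> dict:
    matrix = {req: attachments.get(req, []) for req in requirements}
    gaps = [req for req, ev in matrix.items() if not ev]
    return {"matrix": matrix, "gaps": gaps}

report = evidence_matrix(
    ["risk-management", "transparency", "human-oversight"],
    {"risk-management": ["test-report.pdf"], "transparency": ["ui-spec.pdf"]},
)
# report["gaps"] lists requirements still missing evidence before submission.
```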
PM's List: The Prime Minister's designated list of AI systems requiring third-party assessment is expected to be published through an implementing decree. The tool references the latest available version and will be updated as decrees are issued. Until the list is published, the self-assessment path applies to all systems under the general conformity provisions of Article 19.
Article 26 mandates incident reporting for AI systems that cause harm or near-misses. The Vietnam AI Law Reporter provides a complete incident management lifecycle, from initial detection through root cause analysis to regulatory submission. Incidents are classified by severity, tracked against response timelines, and documented in formats ready for National AI Portal submission.
The incident module integrates with the analytics dashboard, giving compliance teams visibility into incident trends, response times, and recurring failure patterns across all registered AI systems.
Incidents are classified as Critical, High, Medium, or Low severity based on impact scope, harm type, and affected population. Each severity level maps to specific response timeline requirements and reporting obligations under Article 26.
The tool tracks time-to-detection, time-to-response, and time-to-resolution for each incident. Automatic alerts notify your team when regulatory reporting deadlines are approaching, preventing late submissions.
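Deadline alerting can be sketched as a severity-to-window lookup plus a warning threshold. The reporting windows below are illustrative assumptions; the actual timelines would be fixed by Article 26 and its implementing decrees.

```python
from datetime import datetime, timedelta

# Illustrative mapping from incident severity to its reporting window.
# These durations are assumptions, not values from Article 26.
REPORTING_WINDOW = {
    "Critical": timedelta(hours=24),
    "High": timedelta(hours=72),
    "Medium": timedelta(days=7),
    "Low": timedelta(days=30),
}

def reporting_due(detected_at: datetime, severity: str) -> datetime:
    return detected_at + REPORTING_WINDOW[severity]

def needs_alert(now: datetime, due: datetime,
                warn: timedelta = timedelta(hours=12)) -> bool:
    """Alert once the deadline falls within the warning window (or is past)."""
    return now >= due - warn
```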
Each incident supports documented root cause analysis with categorized findings. The tool captures contributing factors, corrective actions taken, and preventive measures implemented, building an audit trail for regulatory review.
Incident reports can be exported as PDF or Excel files formatted for National AI Portal submission. The generated documents include incident details, timeline, severity assessment, root cause findings, corrective actions, and all required regulatory fields specified by Article 26.
The incident analytics dashboard provides charts and summaries showing incident frequency by severity, average response times, common root causes, and system-level incident histories. This helps compliance teams identify systemic issues and demonstrate continuous improvement to regulators.
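A dashboard summary of this kind reduces to simple aggregation over the incident log. The field names below are illustrative; the tool's actual schema may differ.

```python
from collections import Counter

# Dashboard-style aggregation over an incident log: frequency by severity
# and mean time-to-resolution in hours. Field names are illustrative.

def incident_summary(incidents: list[dict]) -> dict:
    by_severity = Counter(i["severity"] for i in incidents)
    resolved = [i for i in incidents if i.get("resolution_hours") is not None]
    mean_ttr = (sum(i["resolution_hours"] for i in resolved) / len(resolved)
                if resolved else None)
    return {"by_severity": dict(by_severity), "mean_resolution_hours": mean_ttr}
```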
Deployer liability: Under Article 29, deployers of high-risk AI systems bear strict liability for incidents caused by those systems. The incident management module helps deployers document their due diligence, response actions, and corrective measures, which are critical for demonstrating compliance during regulatory investigations.
Vietnam's AI Law does not operate in isolation: it intersects with six other major Vietnamese laws, so compliance with the AI Law alone is not sufficient. The Vietnam AI Law Reporter maps obligations across all related legislation, identifies overlapping requirements, and highlights gaps where additional compliance measures are needed.
The cross-law mapping engine provides a unified obligation view, so your compliance team can manage all AI-related regulatory requirements from a single dashboard rather than tracking each law separately.
AI systems processing data on Vietnamese networks must comply with cybersecurity requirements including data localization, security assessments, and incident reporting, which overlap with the AI Law's own security and incident provisions.
AI systems that process personal data must comply with the Personal Data Protection Decree, including consent requirements, data processing records, impact assessments, and cross-border transfer restrictions that affect training data governance.
AI-generated content and AI training data raise IP questions. The cross-law mapping identifies where AI Law transparency requirements intersect with IP protections for training data sources and generated outputs.
AI systems deployed on digital platforms face additional obligations under Vietnam's digital technology legislation, including platform transparency, algorithmic accountability, and user protection measures.
AI systems involved in automated contract formation or electronic transactions must comply with e-transaction validity requirements, digital signature standards, and record-keeping obligations.
Consumer-facing AI systems (chatbots, recommendation engines, pricing algorithms) must also satisfy consumer protection requirements around fair dealing, disclosure, and complaint handling.
Think of cross-law mapping like a Venn diagram of regulations. Each law is a circle with its own requirements, and where circles overlap, you have obligations from multiple laws covering the same activity. The tool identifies these overlaps so you fulfill all obligations simultaneously rather than discovering gaps during an audit.
Gap analysis: The cross-law compliance engine runs automated gap analysis to identify obligations from related laws that are not covered by your existing AI Law compliance measures. For example, PDPD consent requirements for training data may not be addressed by AI Law documentation obligations alone. The tool flags these gaps and suggests specific compliance actions.
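Conceptually, the gap analysis is a per-law set difference: obligations required by a related law minus those already covered by existing compliance measures. The obligation names below are illustrative shorthand.

```python
# Cross-law gap analysis: for each related law, report the obligations
# with no covering compliance measure. Obligation names are illustrative.

def gap_analysis(required_by_law: dict[str, set[str]],
                 covered: set[str]) -> dict[str, set[str]]:
    """Per-law set of uncovered obligations; fully covered laws are omitted."""
    return {law: reqs - covered
            for law, reqs in required_by_law.items()
            if reqs - covered}

gaps = gap_analysis(
    {"PDPD": {"training-data consent", "processing records"},
     "Cybersecurity Law": {"data localization"}},
    covered={"processing records", "data localization"},
)
# gaps == {"PDPD": {"training-data consent"}}
```

This mirrors the PDPD example in the text: consent requirements for training data surface as a gap because no AI Law documentation measure covers them.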
The Vietnam AI Law Reporter generates compliance reports in PDF and Excel formats designed to match the document structure expected by the National AI Portal. All reports include full article citations, evidence references, and audit trail data, so your submission package is complete and traceable.
Because this is a local-first application, no data leaves your machine. You generate reports locally, review them, and submit them to the National AI Portal manually. This keeps your compliance data under your control at all times.
Classification reports, conformity assessment reports, incident reports, and compliance status summaries are generated as professionally formatted PDF documents with article citations, evidence matrices, and organizational details required for portal submission.
Obligation tracking sheets, cross-law mapping tables, incident logs, and system inventories can be exported as Excel workbooks for integration with your organization's existing compliance management processes and internal reporting.
Every classification decision, obligation status change, assessment update, and report generation is logged with timestamps and user attribution. This audit trail demonstrates to regulators that your compliance process is systematic, not ad hoc.
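An audit trail of this kind is typically an append-only log of structured entries, each with a UTC timestamp and user attribution. A minimal sketch, with illustrative field names:

```python
import json
from datetime import datetime, timezone

# Append-only audit trail: every compliance event is serialized as one
# JSON line with a UTC timestamp and user attribution. Field names are
# illustrative, not the tool's actual schema.

def audit_record(user: str, action: str, detail: str) -> str:
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "detail": detail,
    }
    return json.dumps(entry, sort_keys=True)

def append_audit(path: str, record: str) -> None:
    with open(path, "a", encoding="utf-8") as f:
        f.write(record + "\n")
```

Appending rather than updating in place is what makes the trail defensible: earlier entries are never rewritten, so the sequence of decisions is preserved for inspection.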
Report formats are aligned with the expected submission structure of the National AI Portal (Articles 27-28). As implementing decrees define the exact portal requirements, the tool's templates will be updated to match, ensuring your exports are always portal-ready.
Think of report generation like preparing a tax filing. The tool is your compliance accounting software: it collects all the data throughout the year, organizes it into the right forms, and produces the submission-ready documents. You review, sign, and file them yourself.
Local-first principle: Your compliance data never leaves your machine. The Vietnam AI Law Reporter stores everything in a local encrypted database (AES-256-GCM). Reports are generated locally and exported as files. There is no cloud upload, no external API calls with your data, and no third-party access. You maintain full control over your compliance information at every stage.
Explore the Vietnam AI Law Reporter demo and see how the classification engine, obligation tracker, and report generator work in practice.
Launch the Demo