Generative AI
Malaysia AI Governance: PDPA, MyDIGITAL, and What's Next
Oct 01, 2025


Malaysia's AI governance spans PDPA, MyDIGITAL, and BNM guidance. Here is the full regulatory picture every organisation needs to plan around in 2026.


Malaysia does not yet have a dedicated AI Act. But that does not make AI deployment in Malaysia a regulatory vacuum. AI systems built and deployed in Malaysia operate within a layered framework of existing laws, sector-specific regulatory guidance, and the government’s strategic ambition for the country’s digital future. Understanding that framework is the prerequisite for compliant, responsible AI deployment — and for planning the governance investment your organisation will need to make as the framework matures.

The picture for 2026 is clear enough to act on. What follows is each layer of the current framework, what it requires, and what is expected to change in the near term.

Layer 1 — The Personal Data Protection Act

The Personal Data Protection Act 2010 (PDPA), as amended by the Personal Data Protection (Amendment) Act 2024, is the foundational data governance law affecting any AI system that processes personal data of individuals in Malaysia.

The PDPA establishes seven data protection principles. For AI deployments, four are directly material.

The General Principle requires that personal data is processed with the consent of the data subject, unless one of the specified exemptions applies. For an AI system processing customer data — whether for credit decisioning, fraud detection, personalisation, or customer service — there must be a documented lawful basis for that processing. Consent is the most explicit basis; where consent is not practical, the legitimate interest basis is available but requires a documented legitimate interest assessment weighing the organisation’s interest against the data subject’s rights and expectations.

The Data Minimisation Principle requires that personal data processed is adequate, relevant, and not excessive relative to the purpose. For AI systems, this is an active design constraint: the system should receive only the personal data fields that are actually necessary for its function. Building AI pipelines that pass complete customer records to LLM APIs when only a subset of fields is relevant to the task is a PDPA compliance failure, not merely an architectural inefficiency.
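
Putting that design constraint into practice usually means an explicit allow-list of fields per task, enforced before any record leaves the application. The sketch below is illustrative only — the task names, field names, and `minimise` helper are hypothetical, not drawn from the PDPA or any specific API:

```python
# Hypothetical sketch: enforce a per-task allow-list of personal data
# fields before a record is sent to an LLM API or other downstream system.
# Task names and field names here are illustrative.

ALLOWED_FIELDS = {
    "fraud_detection": {"transaction_amount", "merchant_category", "transaction_time"},
    "personalisation": {"preferred_language", "product_interests"},
}

def minimise(record: dict, task: str) -> dict:
    """Return only the fields the task actually needs (Data Minimisation)."""
    allowed = ALLOWED_FIELDS.get(task)
    if allowed is None:
        # No allow-list defined means no data leaves: fail closed, not open.
        raise ValueError(f"No field allow-list defined for task: {task}")
    return {k: v for k, v in record.items() if k in allowed}

customer = {
    "name": "...", "ic_number": "...", "transaction_amount": 1520.00,
    "merchant_category": "electronics", "transaction_time": "2026-01-15T10:32:00",
}
payload = minimise(customer, "fraud_detection")
# payload carries only the three transaction fields; name and IC number never leave.
```

The important design choice is failing closed: a task with no defined allow-list sends nothing, which forces the minimisation decision to be made deliberately for every new use case.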

The Retention Principle requires that personal data is not kept longer than necessary. AI systems that log inputs and outputs — which they should, for audit purposes — must have defined retention policies for those logs, and those policies must be enforced. Indefinite retention of logs containing personal data is not compliant.
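
Enforcement is the operative word: a retention policy that exists only on paper does not satisfy the principle. A minimal sketch of an enforced purge, assuming a 180-day retention period (the period itself is a policy decision, not a PDPA figure):

```python
# Hypothetical sketch: enforce a defined retention period on AI audit logs.
# The 180-day period is illustrative; the actual period is a policy decision.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)

def purge_expired(log_entries: list[dict], now: datetime) -> list[dict]:
    """Drop entries older than the retention period (Retention Principle)."""
    cutoff = now - RETENTION
    return [e for e in log_entries if e["timestamp"] >= cutoff]

now = datetime(2026, 1, 1, tzinfo=timezone.utc)
logs = [
    {"id": 1, "timestamp": now - timedelta(days=30)},   # within retention: kept
    {"id": 2, "timestamp": now - timedelta(days=200)},  # past retention: purged
]
kept = purge_expired(logs, now)
```

In production this would run on a schedule against the actual log store; the point is that the retention period is a single, auditable constant rather than an unstated default of "forever".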

The Data Subject Rights provisions give individuals the right to access personal data processed about them and to correct inaccurate data. For AI systems, this creates a practical requirement: the organisation must be able to identify what personal data was processed by an AI system in relation to a specific individual, in order to respond to a subject access request. Systems with no audit trail — which cannot reconstruct what data was processed — cannot meet this requirement.
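
The practical prerequisite is an audit trail keyed by a stable subject identifier, so processing records can be retrieved per individual. A minimal sketch, with the record structure and helper names being illustrative assumptions:

```python
# Hypothetical sketch: record which personal data fields each AI call
# processed, keyed by a stable subject identifier, so a subject access
# request can be answered from the audit trail. Structure is illustrative.
from collections import defaultdict

audit_log: dict[str, list[dict]] = defaultdict(list)

def record_processing(subject_id: str, system: str, fields: list[str], purpose: str) -> None:
    """Log one AI processing event against the individual it concerns."""
    audit_log[subject_id].append({"system": system, "fields": fields, "purpose": purpose})

def subject_access_report(subject_id: str) -> list[dict]:
    """Everything reconstructable about AI processing for this individual."""
    return audit_log.get(subject_id, [])

record_processing("subj-001", "credit-model-v3",
                  ["income", "repayment_history"], "credit decisioning")
report = subject_access_report("subj-001")
```

A real implementation would write to durable storage with its own retention controls; the structural point is that processing events are indexed by data subject from the start, because retrofitting that index onto unstructured logs is far harder.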

The 2024 Amendment made two significant changes. It introduced mandatory data breach notification: organisations must notify the Personal Data Protection Commissioner within 72 hours of becoming aware of a data breach that likely poses a risk to data subjects’ rights and freedoms. For AI systems handling personal data, this requires a breach detection and response capability that most organisations building AI systems have not explicitly designed. It also strengthened enforcement, increasing the maximum fine for personal data breaches to RM 1 million per contravention and introducing personal liability for senior officers of organisations that commit serious breaches.
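
The 72-hour window runs from the moment the organisation becomes aware of the breach, which makes timestamping that moment part of the response process itself. A trivial sketch of the deadline arithmetic:

```python
# Hypothetical sketch: compute the notification deadline from the moment
# of awareness. The 72-hour figure follows the breach notification
# requirement described above; everything else is illustrative.
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest time to notify the Commissioner, measured from awareness."""
    return aware_at + NOTIFICATION_WINDOW

aware = datetime(2026, 3, 2, 9, 0, tzinfo=timezone.utc)
deadline = notification_deadline(aware)
```

The arithmetic is simple; the operational difficulty is detecting the breach and fixing the awareness timestamp in the first place, which is why the detection capability has to be designed in.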

Layer 2 — The MyDIGITAL AI Roadmap

MyDIGITAL is Malaysia’s national digital economy blueprint, launched in 2021, targeting Malaysia’s status as a leading digital economy in Southeast Asia. Within MyDIGITAL, the AI Roadmap sets out the government’s strategic framework for AI development and adoption across three pillars: talent, infrastructure, and governance.

The National AI Office (NAIO), established under the Ministry of Science, Technology and Innovation (MOSTI), is the coordination body for AI governance in Malaysia. NAIO’s mandate includes developing national AI policy frameworks, facilitating AI adoption across priority sectors — financial services, healthcare, manufacturing, government — and representing Malaysia in international AI governance forums.

The posture of the MyDIGITAL framework toward AI is promotional, not restrictive. The government’s goal is to accelerate AI adoption in Malaysia and to position the country as a destination for AI investment. This creates a regulatory environment that is actively supportive of AI deployment, with governance frameworks designed to enable responsible adoption rather than to constrain it.

For businesses, this means the regulatory environment for AI in Malaysia in 2026 is fundamentally permissive. The government wants organisations to deploy AI. The obligation is to do so responsibly — with PDPA compliance as the floor and sector-specific guidance from the relevant regulator where applicable.

Layer 3 — BNM’s AI Guidance for Financial Institutions

Bank Negara Malaysia’s Risk Management in Technology (RMiT) policy document, issued in June 2019 and effective January 2020, is the primary regulatory framework for technology risk management at BNM-regulated financial institutions. RMiT addresses AI as part of its broader coverage of technology risk.

RMiT’s requirements for AI include: governance frameworks for the use of AI in regulated use cases, model risk management practices covering development, validation, and ongoing monitoring, third-party risk management for AI systems and components procured from external vendors, and documentation requirements for AI models in production.

BNM has been consulting on dedicated AI governance guidance for financial institutions. The consultation process has been underway through 2024 and 2025, and the resulting circular is expected to be issued in 2026. Based on publicly available consultation materials and BNM’s consistent policy direction, the anticipated requirements of the forthcoming guidance include: formal model risk management frameworks aligned with international standards (the US Federal Reserve’s SR 11-7 guidance on model risk management is frequently referenced as a baseline); explainability requirements for AI systems used in credit decisions; third-party AI vendor assessment protocols covering model transparency, data handling, and ongoing performance monitoring; and human oversight requirements for consequential AI-assisted decisions.

Financial institutions that are not yet investing in AI model governance — model inventories, validation processes, performance monitoring, documentation standards — should treat the forthcoming BNM guidance as the prompt to begin. The framework is coming; the organisations that have laid the groundwork will find compliance straightforward.

Layer 4 — Securities Commission Malaysia

The Securities Commission Malaysia (SC) exercises jurisdiction over capital markets activities, including AI applications in investment services. The Capital Markets and Services Act 2007 (CMSA) and its subsidiary legislation apply to AI-generated investment recommendations, automated portfolio management (robo-advisors), and AI-assisted market surveillance.

SC’s Digital Markets framework, under which digital investment management (DIM) licences are granted, sets out requirements for robo-advisory services including algorithm governance, disclosure requirements to clients, and human oversight of automated investment decisions. Firms operating under DIM licences have been working to these requirements since their introduction.

For firms using LLMs to generate investment research, commentary, or recommendation content, the CMSA’s provisions on investment advice apply regardless of whether the advice is generated by a human analyst or an AI system. The firm providing AI-generated investment content to clients holds the regulatory obligation for that content’s accuracy and appropriateness. AI does not transfer or reduce that obligation.

Layer 5 — EU AI Act Implications

Malaysian companies with European operations, European customers, or data subjects who are European Union residents are affected by the EU AI Act, which entered into force in August 2024 and is being phased in: prohibitions apply from February 2025, general-purpose AI obligations from August 2025, and most high-risk obligations from August 2026.

The EU AI Act classifies AI systems by risk level. Prohibited AI practices — those that the Act bans outright — include real-time remote biometric identification in public spaces for law enforcement purposes, social scoring systems operated by public authorities, and AI systems that manipulate persons through subliminal techniques. Any AI system your organisation operates that touches EU persons must be reviewed against the prohibited practices list.

High-risk AI systems face the heaviest obligations: conformity assessments before deployment, registration in the EU database of high-risk AI systems, technical documentation, post-market monitoring, and incident reporting. High-risk categories include AI systems used in credit scoring and creditworthiness assessment, AI systems used for employment and worker management decisions, AI systems used in education and vocational training, and AI systems in critical infrastructure. Financial institutions that use AI for credit decisioning and have any exposure to EU persons are deploying high-risk AI systems under EU AI Act definitions.

The practical implication for Malaysian firms is that the EU AI Act is not a matter of extraterritorial curiosity — it is a compliance obligation for any organisation with EU market exposure. The governance framework required by the EU AI Act — documented risk assessments, human oversight mechanisms, transparency obligations, post-market monitoring — is also broadly consistent with good practice for responsible AI deployment in any jurisdiction.

What Is Coming in 2026 and 2027

Three developments are anticipated in the near term.

BNM’s AI governance circular is expected in 2026, formalising the model risk management and explainability requirements for financial institutions that have been signalled through consultation. Organisations that wait for the circular before beginning to build governance frameworks will find themselves significantly behind on implementation timelines.

PDPA enforcement ramp-up is expected to follow the 2024 Amendment’s entry into force. The Commissioner’s office has been building enforcement capability, and the higher maximum penalties introduced by the Amendment create both the incentive and the public mandate for active enforcement. Data breach notification compliance, data minimisation, and retention controls are the most likely initial focus areas.

A potential Malaysian AI Act consultation is anticipated in 2026, based on government statements and the trajectory of the global AI regulatory environment. The form and timing of any Malaysian AI Act are uncertain — the government has consistently signalled a preference for enabling frameworks over restrictive regulation — but the consultation process will begin to define the longer-term AI governance landscape.

The Right Response to an Incomplete Framework

The right response to a regulatory landscape that is clear in some areas and still forming in others is not to wait. The obligations that are clear — PDPA compliance, RMiT requirements for financial institutions, EU AI Act obligations for firms with European exposure — are not contingent on the completion of the framework. They apply now.

The governance investment that the anticipated forthcoming guidance will require — model documentation, human oversight mechanisms, explainability capabilities, third-party vendor assessment — is also not contingent on the guidance being issued. These are good practice for responsible AI deployment regardless of the regulatory mandate. Organisations that build this governance now will not need to retrofit it when the mandate arrives.

The prudent approach is to build AI governance that would satisfy the strictest likely requirement across the relevant regulatory layers. In practice, this means: PDPA compliance as a baseline for all AI involving personal data, BNM-aligned model risk management for financial institution use cases, and EU AI Act-consistent documentation and oversight mechanisms for any deployment with European exposure. That combination covers the current requirements and positions the organisation well for the forthcoming ones.


Find out how Nematix’s Strategy & Transformation practice can align your technology investments to business outcomes.