APRA's AI letter: a shift from framework to targeted expectations
APRA's letter to industry on artificial intelligence, issued on 30 April 2026, sets out the regulator's first published, AI-specific expectations of boards and accountable executives, drawn from a targeted supervisory engagement with selected large banks, insurers and superannuation trustees.
The letter is framed in technology-agnostic language. The substance is anything but. For the first time, APRA has published a structured set of AI-specific expectations, observation area by observation area, that go materially beyond what CPS 220, CPS 230 or CPS 234 say in their own terms. Entities should read this as a meaningful shift in supervisory posture: from "AI risk is covered by the existing framework" to "here, specifically, is what APRA expects on AI."
Here, we set out what the letter says, why the shift in posture matters, and what regulated entities should do now.
Key takeaways from APRA's AI letter
APRA has moved from generalised guidance to targeted expectations. The letter sets out specific, AI-focused expectations across four observation areas: cyber and information security, governance, supplier risk, and change management and assurance. Entities can no longer rely on the principles-based framework alone: APRA has laid out clear expectations for executive management.
Board understanding and literacy is now a published expectation. APRA expects boards to maintain "sufficient understanding and literacy" to provide effective challenge on AI risk and strategic direction. The corollary, given APRA's observed reliance on vendor presentations, is that board education programs must be structured, assessable and independent of vendors.
Cyber expectations are AI-specific. APRA names prompt injection, data leakage, insecure integrations, exploit injection via AI-generated code and the manipulation of autonomous AI agents as attack pathways requiring controls. Identity and access management for non-human actors is called out as an area where current capability has not kept pace. APRA also notes that threats from AI frontier models require a step change in cybersecurity practices.
Supplier concentration and opacity must be managed. As AI moves into full production in core business processes, APRA expects entities to maintain visibility over the full AI supply chain, including material third- and fourth-party dependencies, and to actively manage concentration risk, including through substitution, portability or exit arrangements.
Traditional change management and assurance are no longer fit for purpose. Probabilistic, adaptive systems require continuous validation and monitoring, not periodic sample-based review. Organisations should apply globally recognised control frameworks, including control libraries and change control, to AI implementations. Internal audit and second-line risk functions must build the technical capability to assess AI systems, including agentic workflows.
Prudential supervision will increase. APRA has flagged proportionate prudential reviews, thematic activities and engagement with AI suppliers, and says it will continue to assess potential prudential risks and consider whether further policy action is needed.
The shift: from technology-agnostic framework to AI-specific expectations
For more than a decade, APRA's response to emerging technology risk has followed a familiar pattern. The principles-based prudential framework has been positioned as technology-agnostic, with APRA layering in guidance and supervisory activity over time.
The April 2026 letter signals a different posture. While APRA reiterates that the prudential framework remains "technology and vendor agnostic", the letter itself is the most prescriptive AI-specific intervention APRA has made. It does not just reaffirm existing standards. It tells regulated entities what those standards mean for AI, with worked expectations for boards and accountable executives across each of the four observation areas.
This matters for three reasons.
First, the published expectations are testable. Each of the four observation areas in the letter sets out, in plain language, what APRA expects. A regulated entity now has a direct supervisory yardstick: a published expectation against which it must be able to evidence its position.
Second, governance requirements have been made clear. The letter explicitly observes that "many boards are still developing the technical literacy required to provide effective challenge on AI-related risks and oversight" and notes "an overreliance on vendor presentations and summaries without sufficient examination of key AI risks". Individual accountabilities for AI risk now have an APRA-issued reference point against which competence can be tested, which APRA says is designed to help CROs, CTOs and CISOs.
Third, the letter rejects the "AI is just another technology" framing. APRA explicitly calls out the tendency to treat AI as a routine IT change, with few entities having operationalised governance in practice. Treating AI this way risks missing its key differences: the predictive nature of these systems, adaptive behaviour in models, ethical considerations (such as inherent bias), and privacy and data risks. Given these differences, entities cannot rely on point-in-time assurance, traditional change-and-release management and conventional vendor due diligence. Entities that have responded to AI by extending existing frameworks without rethinking underlying assumptions are explicitly in scope for criticism.
What APRA expects: a closer look
The letter sets out expectations across four observation areas. Each is paired with specific actions APRA expects regulated entities to take.
Cyber and information security. APRA expects entities to assess the implications of AI reliance for operational resilience and business continuity, with credible fallback processes where AI supports critical operations. Security controls must address AI-specific threats and include strong privileged access management, timely patching, hardened configurations, automated vulnerability discovery, penetration testing, and controls over agentic and autonomous workflows. Robust security testing should cover AI-generated code, software components and libraries. Third-party and concentration implications must be considered for common platforms, services and providers.
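What might identity and access management for a non-human actor look like in practice? As a minimal sketch only (assuming the PyJWT library; the claim names, scopes and lifetime are our illustrations, not controls prescribed by APRA), an entity might issue short-lived, narrowly scoped credentials to each AI agent:

```python
import datetime

import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-vault-managed-secret"  # illustrative only

def mint_agent_token(agent_id: str, scopes: list[str],
                     ttl_minutes: int = 15) -> str:
    """Issue a short-lived, narrowly scoped token for an AI agent."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": agent_id,            # the non-human actor
        "scope": " ".join(scopes),  # least-privilege permissions only
        "iat": now,
        "exp": now + datetime.timedelta(minutes=ttl_minutes),
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

# An agent that summarises documents gets read-only document access,
# nothing more, and its credential expires within minutes.
token = mint_agent_token("doc-summariser-01", ["documents:read"])
```

The design choice is the point: an agent manipulated via prompt injection while holding a fifteen-minute, read-only token can do far less damage than one holding a long-lived, broadly scoped service account credential.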
Governance. APRA expects consistent governance arrangements that include, at a minimum: formal frameworks (policy, standard, guidance) and reporting lines for safe and responsible AI adoption; ownership and accountability across the AI lifecycle (design, development, deployment, monitoring, decommissioning); an inventory of AI tooling and use cases; human involvement and accountability for high-risk decisions; and structured training and education of staff on AI use, misuse, limitations and secure practices.
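To illustrate what an inventory of AI tooling and use cases might capture, here is a minimal sketch in Python. The fields mirror the lifecycle stages and accountability points named in the letter; the schema itself is our illustration, not an APRA template.

```python
from dataclasses import dataclass, field
from enum import Enum

class LifecycleStage(Enum):
    # Lifecycle stages as named in the letter.
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    DECOMMISSIONING = "decommissioning"

@dataclass
class AIUseCase:
    name: str                    # e.g. "claims triage assistant"
    accountable_owner: str       # a named executive, not a team alias
    stage: LifecycleStage
    high_risk_decisions: bool    # triggers human-in-the-loop controls
    vendors: list[str] = field(default_factory=list)  # third parties
    human_oversight: str = ""    # how a human reviews or overrides
```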
Supply chain risk. APRA expects entities to map and maintain visibility over the full AI supply chain, including material third-party and fourth-party dependencies. Contractual and governance arrangements must provide sufficient transparency, auditability and assurance, including the ability to understand model behaviour, material changes, performance issues and risk management practices across the service lifecycle. Concentration risk must be actively managed, with plausible failure scenarios tested and the credibility of substitution, portability or exit arrangements assessed for critical AI providers.
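Mapping fourth-party dependencies is, at bottom, a graph problem. The sketch below (with hypothetical vendor names and an illustrative threshold) shows how even a simple traversal can surface concentration that is invisible at the third-party layer: two unrelated vendors resting on the same foundation model.

```python
from collections import Counter

# Use case -> direct (third-party) providers. Names are hypothetical.
third_party = {
    "fraud-scoring": ["VendorA"],
    "chat-assistant": ["VendorB"],
    "document-search": ["VendorB"],
}
# Provider -> upstream (fourth-party) dependencies, e.g. foundation models.
fourth_party = {
    "VendorA": ["FoundationModelX"],
    "VendorB": ["FoundationModelX"],
}

def concentration_report(threshold: int = 2) -> list[str]:
    """Return upstream providers that multiple use cases depend on."""
    counts: Counter[str] = Counter()
    for providers in third_party.values():
        for provider in providers:
            for upstream in fourth_party.get(provider, []):
                counts[upstream] += 1
    return [name for name, n in counts.items() if n >= threshold]

print(concentration_report())  # ['FoundationModelX']
```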
Assurance. APRA expects globally recognised control frameworks to be applied to AI implementations, with integrated assurance across cyber, data governance, model performance risk, operational resilience, privacy and conduct. Specific frameworks are not named, but could include ISO/IEC 42001 or the NIST AI Risk Management Framework. Second-line risk and internal audit functions must possess the technical capability and tooling to independently assess AI systems, including probabilistic models and agentic workflows. Comprehensive risk and information security assessments are expected prior to deployment and throughout the lifecycle, with continuous monitoring proportionate to the criticality of the use case.
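By way of example, continuous monitoring of a probabilistic model can be as simple as comparing live output distributions against a validation baseline on every rolling window. The sketch below assumes SciPy; the baseline data and alert threshold are illustrative, and a production implementation would calibrate both to the criticality of the use case.

```python
from scipy.stats import ks_2samp

# Score distribution captured at validation time (illustrative values).
BASELINE_SCORES = [0.12, 0.31, 0.44, 0.57, 0.63, 0.72, 0.81, 0.90]

def drift_alert(live_scores: list[float], alpha: float = 0.01) -> bool:
    """Flag distributional drift between live outputs and the baseline."""
    result = ks_2samp(BASELINE_SCORES, live_scores)
    return result.pvalue < alpha  # True -> escalate per defined triggers

# Run on every rolling window of production outputs, not quarterly.
recent = [0.91, 0.93, 0.95, 0.97, 0.98, 0.99, 0.99, 1.00]
if drift_alert(recent):
    print("Drift detected: escalate to the model owner and second line")
```

This is the shift from point-in-time, sample-based assurance to continuous validation: the check runs whenever the model does, and an alert maps to a defined escalation trigger rather than waiting for the next audit cycle.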
The expectations of boards are equally direct. Boards must maintain sufficient understanding and literacy to set strategic direction and provide effective challenge and oversight, and must oversee an AI strategy consistent with the entity's risk appetite and tolerance settings, supported by effective monitoring and reporting (including for third-party dependencies) with clearly defined triggers aligned to resilience objectives.
Five priorities for organisations
Take the letter to the board. Table it at the next board risk committee and traffic-light each of the four observation areas. The gaps will surface, and a documented record of engagement will begin.
Treat board AI literacy as a deliverable. Commission a structured education program with assessable content, drawing on independent frameworks, alongside entity-specific material.
Ensure CPS 230 efforts have mapped the AI supply chain. Include third parties, embedded AI in non-AI products, and fourth-party foundation model dependencies. Where exit or substitution is not credible, name it and document the compensating controls.
Rebuild change management and assurance for probabilistic systems. Existing change-and-release controls and point-in-time, sample-based assurance methods assume a stable artefact; AI does not have one. Apply continuous validation, integrated coverage across cyber, data governance, model risk, resilience, privacy and conduct, and build the technical capability in the second line and internal audit to assess agentic workflows and AI-assisted code.
Engage early with APRA. The letter expressly invites engagement with the Non-Financial Risk Team. Structured early engagement typically produces a materially better supervisory dynamic than waiting to be examined.
The April 2026 letter is the start of an active supervisory program: proportionate prudential reviews, thematic activities and engagement with AI suppliers, with further policy action signalled if needed. The letter sets the expectations; supervisory engagement over the next 12 months will give them substance.
Stronger supervisory action and, where appropriate, enforcement sits in reserve for entities that fail to manage AI risks proportionate to their size, scale and complexity.
Get in touch