"The clock is at a minute to midnight": ASIC’s open letter on cyber resilience and AI
ASIC Commissioner Simone Constant's open letter last Friday to all AFS licensees and market participants, calling for urgent action on cyber resilience in the face of AI-accelerated threats, has one central message: frontier AI models have materially changed the threat environment, and organisations that have not already strengthened their fundamentals are running out of time.
The letter arrives within a fortnight of ASIC’s $2.5 million enforcement outcome against FIIG Securities Limited for cyber security failures under its general AFS licence obligations, and APRA’s letter to industry on artificial intelligence issued on 30 April 2026. Read together, these interventions signal a wide-reaching regulatory expectation that cyber resilience and AI governance are board-level priorities, not mere IT workstreams.
AI as a threat multiplier
ASIC opens with the proposition that frontier AI models are:
lowering the barrier to sophisticated cyber activity and threat capability;
increasing the speed and scale of attacks; and
enabling forms of exploitation that were previously beyond the reach of most threat actors.
This does not mean that entirely new categories of risk have emerged. Rather, organisations' existing cybersecurity controls are now more likely to be tested, more often, and under greater pressure. As the sophistication of the threat environment evolves with technology, particularly the rapid advancement of AI, cybersecurity controls must keep pace.
Critically, ASIC is not calling for organisations to adopt novel AI-based security tools. Instead, the Commissioner emphasised a return to first principles – the consistent execution of well-established controls, supported by clear governance and adequate resourcing. To that end, the letter sets out 12 specific steps that ASIC expects licensees to take now, spanning threat reassessment, critical asset protection, access control, patch management, defence-in-depth architecture, incident response readiness and third-party risk management. These are framed as baseline expectations for cyber resilience, not aspirational goals.
Evidence, not assurance, when it comes to cyber risk
In the letter, ASIC also outlines its governance expectations of boards (which are largely reflective of their existing directors' duties under the Corporations Act). Boards and senior executives must take steps to understand their organisation’s cyber risk posture and be able to evidence the effectiveness of their governance frameworks, including through test results, audit findings, lessons from incidents and independent validation. Reporting to the board must address end-to-end control effectiveness, not just activity metrics.
Importantly, the Commissioner directed that the letter be tabled and discussed at each entity’s board and risk governance committees. This does not appear to be a suggestion, but rather a clear regulatory expectation.
ASIC's AI letter against the background of its enforcement activity
The letter was not issued in a vacuum, and licensees and market participants should read it in the broader context of ASIC's recent enforcement activity. Earlier this year, in ASIC v FIIG Securities Limited [2026] FCA 92, the Federal Court imposed a $2.5 million penalty on FIIG Securities, an AFS licensee whose cyber security failures over four years resulted in the theft of 385GB of confidential data belonging to some 18,000 clients.
This was the first award of civil penalties for cyber security failures under general AFS licence obligations, but not the first time ASIC has taken enforcement action over inadequate cybersecurity practices. In 2022, the Federal Court held that an AFS licensee's failure to manage its cybersecurity risks breached section 912A of the Corporations Act 2001 (Cth) (ASIC v RI Advice Group Pty Limited [2022] FCA 496). A third proceeding on the same basis is currently being heard against Fortnum Private Wealth.
The upshot is that ASIC's enforcement appetite is increasing in step with the growth of the cyber threat landscape, a growth accelerated by the proliferation of AI.
Governance: the gap between adoption and controls
While the ASIC letter is primarily a cyber resilience intervention, AI and cybersecurity are inextricably linked, and the letter must be viewed through the prism of AI as an enabler and accelerator of cyber risk. In particular, it should be read in conjunction with APRA's 30 April 2026 letter, which identified material gaps in AI governance, risk management, assurance and operational resilience across APRA's regulated community. APRA's consistent finding was that AI adoption is outpacing the controls designed to govern it; the letter is therefore a useful barometer for the AI and cyber governance challenges facing the broader Australian business community.
Taken together, the ASIC and APRA letters create a clear regulatory expectation (for financial services entities in particular): organisations deploying AI, whether through internally developed models, vendor-supplied tools, or embedded AI features within existing platforms, should assess the full spectrum of AI risk, including cyber risk, through robust AI and data governance frameworks.
The practical challenge for most Australian organisations is not a lack of awareness but a lack of structured governance. AI deployments are often decentralised, with individual business units adopting tools without coordinated pre-deployment risk assessment, or adequate governance controls and oversight across the product lifecycle. From a cyber risk perspective, organisations often struggle to map how data flows through the organisation, how it is used, by whom and for what purpose, and they frequently lack the frameworks to detect vulnerabilities and threats. When these two governance deficiencies converge, the risks compound, exposing organisations across their day-to-day operations.
Accountability for these deficiencies, and the resulting risks, rests ultimately with boards. Whatever organisational accountability frameworks or roles-and-responsibilities structures are in place, directors and senior executives cannot delegate or otherwise shirk that accountability. It is therefore imperative that boards become, and remain, educated on the AI and cyber risks to which their organisations are exposed.
This starts with asking the right questions about AI adoption: the use cases, the capabilities, the limitations and the risks. From a cyber perspective, boards must ask: what information assets do we hold; how does data flow through the organisation; where are our vulnerabilities; how are we monitoring them; and, importantly, what are we doing about them?
Regulatory convergence
One of the key governance challenges for Australian organisations in the current regulatory environment is the potential convergence of obligations across multiple regimes. A single cyber incident involving AI-processed personal information can simultaneously engage the AFS licensing regime (overseen by ASIC), the prudential standards (enforced by APRA, in particular CPS 234 (Information Security)), the Notifiable Data Breaches scheme under the Privacy Act 1988 (Cth) (enforced by the OAIC) and, in some cases, the Security of Critical Infrastructure Act 2018 (Cth) (for entities operating relevant critical infrastructure assets).
Each regime carries its own assessment criteria, notification timelines and escalation requirements, but the common practical effect for boards is that AI and data governance, and incident response planning in particular, must account for the cumulative burden of parallel regulatory obligations. This complexity reinforces the Commissioner's central thesis on the importance of governance and underscores the need for governance robust enough to withstand a multifaceted regulatory environment: incident response playbooks designed for a single-regulator notification scenario may be inadequate for the multi-vector, multi-regulator response that serious cyber incidents now demand.