AI governance, risk management, and director duties: discharging due care in the age of artificial intelligence

Samy Mansour, Simon Newcomb, Nigel Williams, Doug Nixon and Michelle Dawson
17 Oct 2025
5 minutes

Artificial intelligence has rapidly shifted from a niche technology to a central driver of business transformation and efficiency. In just a few years, organisations have adopted a vast array of technologies enabled by large language models, from automated decision engines to agentic AI-powered customer interfaces, each capable of fundamentally reshaping, and sometimes destabilising, risk profiles overnight. The opportunities are significant, but so are the hazards: AI amplifies conduct risk and has attracted new regulatory obligations, while also remaining subject to many existing ones. For boards and executive management, the imperative is to ensure that AI is governed with the same rigour and oversight as other critical risk domains such as cyber security, privacy and financial controls.

The Governance Institute of Australia’s 2024 white paper, AI Governance: Leadership Insights and the Voluntary AI Safety Standard in Practice, highlights that Australian business leaders face a unique challenge and opportunity in adopting AI. The white paper emphasises that effective AI governance is not just about compliance, but about building trust, driving value, and ensuring responsible innovation. It also underscores the importance of aligning AI strategy with organisational purpose and values, and of embedding AI risk management into existing governance frameworks.

Directors’ duties in the age of AI

The rise of AI brings directors' duties into sharper focus. Under the Corporations Act 2001 (Cth), directors must, amongst other duties, exercise their powers and discharge their duties with due care and diligence, and act in good faith in the best interests of the company and for a proper purpose. In today's environment, this must include oversight of AI systems and their associated risks. Boards must have a clear understanding of how AI is deployed within their organisations and ensure that appropriate governance, risk management and compliance frameworks are in place. Failure to meet these standards may leave companies vulnerable to lawsuits, regulator censure, and potential claims that directors have breached their duties if AI-related risks eventuate.

This is underscored by ASIC’s 2025–26 Corporate Plan, which identifies both the use of AI and directors’ conduct as focus areas. It is clear that responsible AI adoption, strengthened governance, and enhanced controls over technology and data are not optional – they are regulatory expectations.

The Governance Institute’s white paper further reinforces that directors must be proactive in their oversight of AI, ensuring that AI is used ethically, lawfully, and in a way that aligns with stakeholder expectations. The paper also notes that boards should set the “tone from the top” on responsible AI, and that leadership commitment is essential for building a positive AI risk culture. Boards that fail to engage meaningfully with AI governance risk reputational damage and broader public and stakeholder criticism, especially as AI becomes more deeply embedded in business operations and public life.

AI governance risk is also a serious practical issue. For example, a 2024 paper published by ASIC examining how Australian financial services and credit licensees are implementing AI found, amongst other things, that AI governance arrangements varied widely, that the maturity of governance and risk management did not always align with the nature and scale of licensees' AI use, and that not all licensees had appropriate governance arrangements to manage the associated risks.

Risk management and directors’ duties: three key focus areas

From both a risk and legal perspective, directors and executives should focus on three key areas:

  1. Understanding the technology and its application:
    Boards must ensure they have sufficient knowledge to ask the right questions about how AI is being used in their organisation. Ignorance is not a defence; directors are expected to be AI-literate to the extent necessary for effective oversight. The Governance Institute’s white paper recommends ongoing director education and engagement with AI experts to build this capability.

  2. Building and testing governance frameworks:
    While delegation is necessary, oversight cannot be passive. Directors must ensure that AI governance frameworks are robust and regularly reviewed, and that controls are stress-tested. Applying existing model risk frameworks unchanged may not be enough; AI model governance also needs to cover both prompt and output moderation and controls (a minimal sketch of what such controls might look like follows this list). The white paper highlights the need for clear roles and responsibilities, documented policies, and regular board-level reporting on AI risks and performance. It also advocates for integrating AI risk into existing risk management and assurance processes (e.g. Model Risk Management frameworks) rather than treating it as a siloed issue.

  3. Staying ahead of regulatory and compliance risks:
    AI can trigger a range of legal and reputational risks, from privacy and discrimination to intellectual property and consumer protection. Regulators are watching closely, and global standards are emerging that will require boards to demonstrate robust AI governance. The Governance Institute’s white paper notes that boards should monitor regulatory developments, participate in industry initiatives (such as the Voluntary AI Safety Standard), and ensure compliance with both local and international requirements.
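
For boards asking what "prompt and output moderation" means in practice, the following is a minimal, illustrative sketch in Python. It is not a production control: the function names, the keyword list and the call_model placeholder are all assumptions for illustration, and real deployments would typically rely on dedicated moderation services or classifier models, with every decision logged for audit.

```python
# Illustrative sketch only: wrapping a model call so that both the prompt
# (input) and the response (output) pass through moderation controls.
# BLOCKED_TERMS, moderate_prompt, moderate_output and call_model are
# hypothetical names, not any vendor's API.

BLOCKED_TERMS = {"tax file number", "credit card number"}  # illustrative keyword list

def moderate_prompt(prompt: str) -> str:
    """Input control: reject prompts containing obviously sensitive terms."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            raise ValueError(f"Prompt blocked by input control: contains '{term}'")
    return prompt

def moderate_output(output: str) -> str:
    """Output control: screen a response before it reaches a user or downstream system."""
    lowered = output.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return "[Response withheld: output control triggered]"
    return output

def governed_model_call(prompt: str, call_model) -> str:
    """Apply both controls around a model call; call_model stands in for any LLM API."""
    checked_prompt = moderate_prompt(prompt)
    raw_output = call_model(checked_prompt)
    return moderate_output(raw_output)
```

The point for directors is structural rather than technical: controls must sit on both sides of the model, and evidence of their operation should feed into the board-level reporting described above.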

What does good AI risk governance look like?

As AI adoption accelerates and its use proliferates, directors need to ensure they can demonstrate how they are discharging their responsibilities with due care and diligence. Drawing on the Governance Institute’s white paper, the Australian Institute of Company Directors’ "A Director’s Guide to AI Governance", and leading practice, boards should ensure their organisation can meet evolving AI risk governance expectations:

  • Clear roles and accountability: Boards retain ultimate responsibility for AI decisions; oversight cannot be delegated away. Appoint a senior executive responsible for AI risk, with clear escalation pathways and ownership of the AI inventory. Regular board or committee reporting on AI risk and performance is essential.

  • Skills: Invest in building workforce capability to design, procure, and challenge AI systems, including training on bias and the risks of AI “hallucinations”. Directors should ensure the board itself has sufficient understanding to discharge its oversight role.

  • Legal and ethical compliance: Ensure AI use aligns with laws, ethics, and organisational values, and act swiftly to correct issues. Directors should be alert to emerging legal requirements and ethical considerations unique to AI.

  • Transparency: Maintain clear, auditable records and ensure that AI decisions are explainable and understandable to stakeholders. This includes documenting decision-making processes, being open about how AI is used, and ensuring that all public statements about AI are accurate and evidence-based (an illustrative sketch of an auditable AI decision record follows this list).

  • Risk management: Integrate AI risks into existing risk management frameworks. This involves regular reviews, stress-testing, robust validation before deployment, and ongoing monitoring and reporting. Boards should require evidence that controls are effective and ensure independent audits and reviews are scheduled.

  • Supporting infrastructure: Maintain strong data governance, cyber security, and third-party risk management for AI vendors. Directors should ensure these areas are adequately resourced and monitored.

  • Continuous improvement: Treat AI governance as dynamic; learn from incidents and adapt as technology and regulations evolve. Ongoing education and regular policy updates are essential to maintain effective oversight.
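
To make the “auditable records” point above concrete, here is a minimal sketch of what a single AI decision record might capture. It is a hypothetical structure, not a standard: the field names are assumptions, and hashing the prompt and output (rather than storing raw text) is one possible design choice where records may contain personal information.

```python
# Illustrative sketch of an auditable record for one AI-assisted decision.
# Field names are assumptions; a real scheme would follow the organisation's
# record-keeping and privacy requirements and write to an append-only store.

import hashlib
import json
from datetime import datetime, timezone

def log_ai_decision(model_id: str, prompt: str, output: str,
                    human_reviewer: str | None = None) -> dict:
    """Build an audit record; hashes are stored instead of raw text."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "human_reviewer": human_reviewer,  # None where no human review occurred
    }
    print(json.dumps(record, indent=2))  # stand-in for writing to an audit log
    return record
```

Records like this give boards something concrete to request in AI reporting: evidence that each material AI decision can be traced to a model version, a time and, where applicable, a human reviewer.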

Practical questions for boards and risk committees

  1. Where is AI being used in our organisation?
    Maintain an up-to-date inventory of all AI models, datasets, and third-party vendors (an illustrative inventory record appears after these questions).

  2. Which of our key risks are accentuated by AI?
    Identify and prioritise potential failure scenarios, such as bias, intellectual property breaches, supplier outages, privacy breaches, consumer harm, or cyber incidents.

  3. Who is responsible for AI risk?
    Ensure there is clear executive ownership of, and accountability for, model outcomes, and that AI risk is integrated into existing risk and audit committee structures.

  4. How do we test and monitor AI?
    Require evidence of robust validation before deployment, ongoing performance monitoring, and clear human oversight mechanisms.

  5. Are our public statements about AI accurate?
    Avoid overstating AI capabilities in disclosures and marketing; ensure transparency about limitations and risks.

  6. Are we keeping pace with regulatory and industry change?
    Regularly review developments in AI regulation, privacy, discrimination law, and relevant technical standards.
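
As a starting point for question 1, the following is a minimal, hypothetical sketch of what one entry in an AI inventory might record. The field names are assumptions for illustration, the example values are invented, and an organisation would adapt the structure to its own risk taxonomy and reporting lines.

```python
# Illustrative sketch of a single AI inventory entry (question 1 above).
# Field names and example values are assumptions, not a standard.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryEntry:
    system_name: str                      # what the system does in the business
    model: str                            # underlying model or vendor product
    owner: str                            # accountable executive (question 3)
    vendor: str | None = None             # third-party supplier, if any
    datasets: list[str] = field(default_factory=list)   # data the system relies on
    key_risks: list[str] = field(default_factory=list)  # e.g. bias, privacy, IP
    last_validated: date | None = None    # evidence of testing (question 4)
    human_oversight: str = ""             # the human-in-the-loop control

# Invented example entry:
chatbot = AIInventoryEntry(
    system_name="customer-service chatbot",
    model="third-party large language model",
    owner="Chief Risk Officer",
    vendor="Example Vendor Pty Ltd",
    datasets=["support ticket history"],
    key_risks=["privacy breach", "hallucination", "consumer harm"],
    last_validated=date(2025, 9, 1),
    human_oversight="low-confidence responses escalate to a human agent",
)
```

Even a simple register like this lets a board answer questions 1, 3 and 4 above from a single source, and makes gaps (no owner, no validation date) immediately visible.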

Looking ahead

AI is now a core boardroom risk, and directors should ensure that it is managed with the same rigour as financial or operational risks. The influence of AI systems on customer outcomes, organisational reputation, and long-term value means that boards cannot afford to treat AI as a peripheral issue. Directors must ensure that robust governance frameworks are in place, with clear accountability for AI risk ownership and a culture that prioritises responsible innovation. Organisations that do so will be best placed to capture AI’s benefits while minimising downside risk. Those that delay may face regulatory scrutiny, reputational damage, or even shareholder actions if AI failures impact value.

Ultimately, effective AI risk management requires informed oversight, structured governance, and a willingness to challenge assumptions. Boards that treat AI as both an opportunity and a risk, and that govern it accordingly in line with their legal obligations, will be best positioned to succeed in the age of artificial intelligence.

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.