The role of AI in corporate governance: Lessons from ASIC v Bekier

Scott Sharry, Carly Hanson, Belinda Hennessy and Abbey Gulley
10 Apr 2026
3 minutes

ASIC v Bekier emphasises that AI can be a valuable tool for directors in controlling, processing and analysing board materials and managing information overload, but it cannot replace a director's personal duty to exercise independent judgment.

The Federal Court's decision in ASIC v Bekier [2026] FCA 196 provides important guidance on the use of artificial intelligence (AI) by directors and officers. It is the first Australian case to engage with the use of AI in a board setting and how it may be used to control, process and analyse the large volume of board material that directors must read, understand and engage with in discharging their duties.

If appropriately utilised and controlled, AI can be used to assist directors in the discharge of their duties of care and diligence under section 180(1) of the Corporations Act 2001 (Cth). However, AI cannot replace a director's personal responsibility to exercise their informed independent human judgment in making decisions in the company's best interests. The duty under section 180(1) is personal and non-delegable.

Background

In ASIC v Bekier, ASIC brought claims against the former CEO, Group General Counsel (also the Company Secretary and Chief Legal and Risk Officer), and non-executive directors of Star Entertainment Group Limited for breaching their duty of care and diligence under section 180(1).

The directors argued it would be unreasonable to expect them to review hundreds of pages of board papers before each meeting, particularly where critical papers were provided only minutes before a meeting commenced.

Justice Lee was critical of this argument, relying on the comments of Justice Middleton in ASIC v Healey [2011] FCA 717 that a board can (and must) control the information it receives, and that directors cannot rely upon an inability to cope with the volume of information they receive. A director, whether executive or non-executive, is required to take reasonable steps to place themselves in a position to guide and monitor the management of the company, and is expected to take a diligent and intelligent interest in the information available to them, understand that information, and apply an inquiring mind to their responsibilities.

Critically, his Honour observed that "a way of addressing information overload, at least in part, could be through the principled and transparent use of emergent technology", while emphasising that "analysing and understanding information provided by management is a core function of a board" and "the primary way by which directors access the information necessary to make informed, bona fide decisions."

His Honour added that "any use of AI should be controlled and transparent" and that it is "prudent that boards should discuss and deliberately govern any AI use by formal adoption of policies."

Practical applications of AI in the boardroom

If used responsibly, AI can assist directors with meeting preparation. However, directors cannot rely solely on AI to analyse and understand information provided by management, and substituting AI for careful reading and interrogation of board materials may increase risk and legal exposure. The following are examples of how AI might be used for a board meeting.

Preparing for board meetings

  • Analysing board packs: AI-powered tools can analyse financial reports and other documents within board packs, identifying anomalies or irregularities that may require further investigation.

  • Leveraging historical board insights: Directors can use AI to analyse historical organisational data. After reviewing the board pack, they can pose questions to an AI model tailored specifically for board use – for example, identifying that a previous board or sub-committee had considered a similar acquisition – giving valuable context for current decisions.

  • Converting text to audio: AI can convert board papers into audio summaries or podcasts, making materials more accessible for directors who need to review them on the go.

  • Developing questions: Directors can use AI to formulate targeted questions for board meetings or to test ideas related to agenda items, improving the quality and focus of discussions.

During board meetings

  • Transcription and minutes: AI can record and transcribe board discussions, capturing key points and producing a first draft of minutes to streamline the documentation process.

  • Real-time research: Generative AI can provide on-the-spot information in response to issues raised during meetings, such as generating competitor insights. This allows the board to address matters in real time rather than deferring them, saving valuable time.

After board meetings

  • Post-meeting analysis: AI can analyse board discussions to provide feedback on participation and speaking time, helping to ensure balanced and effective engagement.

Risks and mitigation

There are risks in using AI in the boardroom. Responsible and acceptable use policies and AI registers should be regularly reviewed and updated to mitigate these risks.

It is the responsibility of directors to ensure that AI is used to receive and analyse board material in a responsible and transparent manner.

The following risks are particularly relevant:

  • Data handling and confidentiality: Where a company relies on an external AI system, there is a risk that third parties could access records of sensitive boardroom discussions, compromising confidentiality and potentially waiving privilege. Where possible, companies should use proprietary AI systems or avoid inputting confidential or privileged information. Cyber criminals are active in exploiting security weaknesses in AI systems and even creating fraudulent AI products to mine sensitive corporate data.

  • Accuracy: Real-time AI outputs cannot always be immediately verified. Companies should ensure AI tools undergo rigorous accuracy testing, including cross-checking outputs against reliable alternative sources.

  • Discovery risk: AI chats, recordings and transcriptions should be treated just like emails and other documents. They may be discoverable in future court proceedings or regulatory investigations. Consider your retention policies to ensure that documents are not inadvertently deleted after they have become discoverable.

  • Formal board-endorsed policies: Boards should discuss and deliberately govern AI use by formal adoption of policies, rather than just by informal shadow use. These policies should be reviewed and updated regularly.

  • Transparency about how information is reduced and relied upon: Management should be transparent about whether and how AI has been used in preparing board papers, and directors should be transparent about whether and how they have used AI in digesting them.

Key takeaways

ASIC v Bekier confirms that AI can be a valuable tool for directors in controlling, processing and analysing board materials and managing information overload, but that it cannot replace a director's personal duty to exercise independent judgment. Directors must still critically analyse the information before them. To integrate AI effectively into boardroom practices, companies should adopt robust governance frameworks addressing data confidentiality, accuracy and acceptable use, ensuring AI remains a responsible supportive tool rather than a substitute for human judgment.

Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.