AI and legal professional privilege: why common workflows now carry uncommon risk
AI offers boundless opportunities, but it also carries new risks. It is already embedded in how many businesses analyse documents, brief executives and prepare for disputes. But when those tools are public, consumer‑grade systems, they have the potential to fatally undermine the confidentiality on which legal professional privilege (LPP) (also known as client legal privilege) critically depends. In Australia, both legal advice privilege and litigation privilege require confidentiality as a key ingredient, so conduct that is inconsistent with maintaining that confidentiality can waive privilege altogether.
Recent overseas cases and the Federal Court’s new generative AI practice guidance underscore the risk landscape and what prudent organisations should do next.
The core risk: confidentiality is the keystone – and AI can dislodge it
In Australia, LPP protects confidential lawyer-client communications and (for litigation privilege) confidential communications with third parties for the dominant purpose of anticipated or actual litigation. If confidentiality is not preserved, privilege fails. The High Court has recognised privilege as a substantive immunity against powers to compel disclosure that protects confidential legal communications made for a dominant purpose. However, that protection is lost where a client acts inconsistently with maintaining that confidentiality.
Waiver of LPP is assessed objectively. The question is whether your conduct is inconsistent with keeping the communication confidential, not what you intended. Even inadvertent disclosure can have consequences, though Australian courts will usually correct genuine discovery mistakes. Outside the supervised discovery context, the focus remains on inconsistency with confidentiality.
Public, consumer‑grade AI platforms often reserve rights to retain inputs/outputs, use them for model training, and disclose them to third parties. Uploading privileged material, or even the gist of it, into such systems is hard to reconcile with maintaining confidentiality. Under Australian law, that creates fertile ground for waiver.
The Federal Court’s Practice Note contains warnings for litigants
The Federal Court of Australia’s recent Practice Note on the Use of Generative Artificial Intelligence (GPN‑AI) cautions both practitioners and parties against inputting confidential and privileged information into public AI tools. In particular, paragraphs 4.13 and 4.14 warn that confidentiality and privilege can be compromised by such use. The court also expects disclosure to the Court of any AI use that may bear on the accuracy or integrity of materials filed. (Many State court practice directions contain similar warnings.)
For organisations involved in litigation, such guidance sets a clear expectation: protect privilege by choosing the right tools and workflows, and be ready to explain and disclose relevant AI use where required by the Practice Note. You may also be best served by using an internal, closed‑system AI tool, where no third‑party issue arises, and having your lawyers do the AI work for you.
What overseas courts are already saying (and why it matters here)
While no Australian court has yet ruled on privilege over AI‑assisted materials, recent decisions in the USA and UK point in a consistent direction on confidentiality and waiver:
USA
United States v Heppner (S.D.N.Y., February 2026): The court refused privilege over 31 AI‑generated documents created by a criminal defendant using a public AI platform. The three key reasons were: (1) no lawyer-client relationship with the AI; (2) lack of confidentiality given the provider’s data‑collection and disclosure terms; and (3) the work was not done at the lawyer’s direction. (The judge noted that AI use under a lawyer’s direction could, in some circumstances, fit within the US Kovel “agent of the lawyer” doctrine – but that was not this case).
Warner v Gilbarco, Inc. (E.D. Mich., February 2026): By contrast, the court treated a self-represented litigant’s AI‑assisted materials as “work product”, characterising AI as “a tool, not a person” and rejecting a rule that mere use of AI automatically waives protection.
UK
Munir v Secretary of State for the Home Department (Upper Tribunal, Immigration and Asylum Chamber, November 2025): The Upper Tribunal was called upon to consider the professional conduct of legal representatives after concerns emerged over the use of generative AI in case preparation. While the immediate focus of the decision was on the submission of inaccurate, AI-generated legal authorities, the Tribunal made broader observations about confidentiality, privilege and the use of AI tools. It stated in unqualified terms that uploading confidential documents to a public AI tool (such as ChatGPT) places them in the public domain, breaching confidentiality and waiving privilege. It distinguished enterprise tools with contractual and technical safeguards.
These decisions converge on two points that align with Australian principles: (i) although LPP is a rule of substantive law that operates as an immunity against compulsory disclosure, confidentiality is its lifeblood; and (ii) direction by lawyers/counsel matters.
Leading Australian commentary has reached the same conclusion: public AI tools pose a significant privilege risk on confidentiality grounds, and even enterprise tools will not attract privilege unless the dominant‑purpose and legal supervision requirements are met.
Public vs enterprise AI: the distinction that will likely be determinative
Both Heppner and Munir draw a line between public/consumer AI and closed, enterprise‑grade systems. From an Australian perspective, that distinction will often be decisive:
Public/consumer AI: Terms often permit retention, training and third‑party disclosure. Using these tools for privileged content is difficult to square with maintaining confidentiality, exposing you to waiver arguments. (That said, major cloud providers such as Google and Microsoft have similar data-collection terms, so hinging a determination on a company’s privacy policy could create tension with user expectations).
Enterprise/closed AI: Properly configured instances (for example, contractual prohibitions on retention/training, strict access controls, encryption, and auditability) can support confidentiality. But even then, LPP will only attach if the use is for the dominant purpose of obtaining legal advice or conducting litigation and is integrated into a legal workflow, not simply undertaken by business teams for general purposes.
Why lawyer/counsel direction and supervision matter
LPP protects legal work done for legal purposes. If business personnel independently use AI to draft “legal‑ish” content, or to summarise legal advice, there is a real risk that:
the dominant purpose is not legal advice/litigation; and
confidentiality is not safeguarded (especially on public platforms).
By contrast, when external or in‑house lawyers direct and supervise AI use as part of their advisory or litigation work, with confidentiality controls in place, Australian courts are more likely to regard resulting communications as privileged. That approach accords with long‑standing principles on agents and third‑party assistance in litigation, as well as the High Court’s emphasis on dominant purpose and confidentiality.
Suggested ways to maintain privilege when using AI
1. Choose the right AI for the job
Do not use public/consumer AI platforms for any material that reveals, summarises or paraphrases privileged communications, litigation strategy, draft pleadings, witness outlines, or internal legal instructions.
Use only enterprise‑grade, contractually “closed” AI for legally sensitive tasks – and only within lawyer‑designed workflows. Ensure contracts expressly prohibit retention, training on your data, and disclosure to third parties; require encryption, logging, access controls and audit rights.
2. Keep AI within a supervised legal workflow
Require all AI use on legal matters to be directed or supervised by a lawyer (external or in‑house). Treat AI as a tool used by or for the legal team — the closer it is to the lawyer’s control, the stronger any privilege claim.
Maintain matter‑based governance: remember that, in any contest, “focused and specific evidence” will be needed to establish the dominant purpose for which a document was created, so record the use of AI per legal matter – by whom, why and at whose direction. Store prompts and outputs confidentially and securely in a matter‑segregated drive.
3. Understand the data terms – before you upload anything
Conduct legal and security due diligence on AI providers: data flows (where data goes and who processes it), training/retention defaults, subcontractors, regulator/law‑enforcement access, and breach notification.
Configure privacy settings to “no logging/no training” where possible and document the configuration. Avoid features that share prompts/outputs for product improvement.
4. Train your teams on LPP hygiene in an AI world
Consider whether your LPP policies need to be updated to cover AI explicitly: what may be uploaded, to which systems, and under whose authority. Include clear examples of what amounts to “do not upload” content.
Reinforce that summaries or paraphrases of legal advice can themselves disclose the “substance, gist or effect” of privileged communications and trigger waiver.
5. Manage disclosure expectations in the Federal and State Courts
For matters in the courts, be familiar with the relevant Practice Directions/Notes and make any required disclosures to the Court consistent with those directions. Avoid inputting confidential or privileged material into public AI tools and be prepared to explain your AI use.
If privileged material is uploaded to a public AI platform by mistake, take immediate steps to contain it (account suspension, deletion requests), create a contemporaneous record, and seek legal advice on remediation.
Australian courts will often assist in remedying genuine mistakes in supervised discovery; outside that setting, your best protection is rapid, well‑documented corrective action.
The bottom line
LPP in Australia turns on confidentiality and dominant purpose. Public AI tools threaten both.
Courts here are likely to be influenced by the emerging international consensus: public uploads can destroy confidentiality; lawyer‑directed, closed‑system use is likely to fare better.
The Federal Court’s GPN‑AI makes expectations explicit for litigants, and State court practice notes do likewise; be familiar with them.
Executives and General Counsel should act now: adopt enterprise‑grade tools with contractual safeguards, keep AI within supervised legal workflows, ensure that personnel needing legal advice are familiar with LPP risks, and implement robust governance and incident response.