
AI implementation 01: navigating different perspectives when deploying artificial intelligence

Whether you're legal, risk, operations or IT, there are ways to manage the different perspectives in your organisation and keep your AI deployment on track.
The recent surge in artificial intelligence development presents both significant opportunities and complex challenges. Legal, risk, operations and IT professionals face increasing pressure to consider AI across all aspects of their organisations' operations, including legal practice.
Amid these discussions, there are many perspectives that must be considered and expertly navigated before AI can be implemented effectively. This article distils some of these perspectives, presents talking points and offers insights into reaching a consensus on how to move forward.
Artificial intelligence: its use cases in businesses and other organisations
The term "artificial intelligence" was first coined in the 1950s. It is a broad umbrella term covering any computer system capable of performing tasks that ordinarily require human intelligence.
Recent advancements and popularisation of AI have been driven by generative AI. As a subset of machine learning, generative AI builds on earlier approaches to enable the creation of new content and facilitate innovative ways of analysing documents and data. Useful functions in legal services include:
natural language querying of documents and data sets to gain insights;
interpretation and classification of documents or data by analysing certain properties or responding to natural language queries;
intelligent extraction of information from documents and other data sources to support decision-making; and
augmenting legal research and improving the efficiency of evidence preparation and drafting.
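For technically minded readers, the sketch below illustrates in very general terms what a classification-and-extraction step of this kind might look like inside a broader, auditable workflow. It is a minimal Python sketch only: the call_model placeholder, the prompt wording and the field names are illustrative assumptions, not a reference to any particular product or API.

import json

# Placeholder for whichever generative AI service the organisation has approved.
# A real deployment would call that model or API here, with error handling,
# logging and human review built around the call.
def call_model(prompt: str) -> str:
    return json.dumps({
        "document_type": "contract",
        "parties": ["Example Pty Ltd", "Sample Co"],
        "relevance_to_query": "high",
    })

def classify_and_extract(document_text: str, question: str) -> dict:
    # Frame the request so the model returns structured output that can be
    # tracked, quality-assured and fed into downstream workflow steps.
    prompt = (
        "You are assisting with a legal document review.\n"
        f"Question: {question}\n"
        "Return JSON with keys: document_type, parties, relevance_to_query.\n\n"
        f"Document:\n{document_text}"
    )
    return json.loads(call_model(prompt))

result = classify_and_extract(
    "This agreement is made between Example Pty Ltd and Sample Co ...",
    "Does this document relate to the supply arrangements in dispute?",
)
print(result)  # structured output, rather than free text, supports review and auditability

The point of the sketch is not the code itself but the design choice it reflects: the model is one step in a repeatable, reviewable pipeline, not the whole solution.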
1. The executive perspective: the solution is AI
A common question posed by senior executives is, "Can this be done by AI?"
While it would be convenient if the answer were straightforward, effective solutions typically use AI as one component of a broader toolkit.
It is important to keep front of mind that AI is not in and of itself a solution. Successful implementations share a common approach that:
integrates AI into specific use cases as part of a broader workflow, complemented by other technologies and human expertise;
balances efficiency with appropriate risk management frameworks, aligned with best practices, relevant guidelines and expert input; and
is driven by workflows prioritising repeatability, efficiencies of scale, and tracking of actions and outputs.
2. The risk-averse perspective: AI is a new frontier (and might not be accepted)
When considering specific use cases in the practice of law, the first consideration is often whether the technology has been, or is likely to be, accepted by the Courts, regulators or relevant governing bodies. It's therefore useful to ground this discussion in an understanding of the profession's response to previous, similar technological leaps.
Despite the recent surge in popularity of generative AI and large language models, AI is not new to the legal profession.
For example, AI in the form of supervised learning, predictive models and classification has been used for many years in document review during the discovery phase of litigation, as well as in other document review activities such as regulatory inquiries and investigations.
Since about 2010, predictive models have been deployed as part of a broader umbrella commonly referred to as "Technology Assisted Review", which had become a common and well-understood practice by 2012. Fast forward another four years, and the Victorian Supreme Court became the first Australian court to consider and approve its use in a matter. A decade later, practice directions and industry standards now require that these methods be at least considered, if not adopted, in line with the principles of achieving a just and efficient resolution of the real issues in dispute.
This pattern is likely to repeat itself with increasing intensity as the pace of technological development accelerates year on year. While Courts can be slow to recognise and approve new technologies, recent experience suggests that an approach that appropriately manages risk, builds in quality assurance and is well documented will be accepted in a wide range of situations involving generative AI. Almost all Australian Courts have now published, or are close to publishing, Guidelines or Practice Directions relating to the use of AI by lawyers. Most Law Societies and Bar Associations have also published guidelines, stood up task forces and prepared educational material.
Already, workflows involving generative AI are being deployed in the document review process, and this adoption is accelerating. For example, throughout this year we've been successfully deploying Relativity's aiR product across multiple use cases, such as regulatory response and discovery review workflows.
3. The traditionalist perspective: this is a tomorrow problem
While it is sensible to consider the "why", current evidence overwhelmingly indicates that engaging with the responsible use of AI is a pressing issue for today, not tomorrow.
Legal, risk, operations and IT professionals are now routinely addressing questions such as:
Can this workflow, task or activity be augmented by AI?
What guidelines or directions govern this work if AI were to be used?
Whose expertise do I need to progress AI integration?
The answers to these important questions will often differ depending on the particular use case and the context in which it sits.
What we've learnt from dealing with the different perspectives during AI implementation
It's clear that each of the three perspectives has something valuable to offer. It's equally clear that, taken too far, each can lead to poor choices, or even a total derailment of your AI deployment.
A pragmatic, commercial and ethical implementation that effectively balances all competing views is vital.
This reality underpins the need for trusted advisors in your corner at every step of the way, weighing up the multidisciplinary perspectives to move forward with clarity.
Parts 2 and 3 of this series will provide deeper guidance on what to look for when considering an AI implementation, and explore a case study in the document review space.
Data Intelligence practice
Clayton Utz's Data Intelligence practice delivers clarity to documentary, financial and complex system data. Through the implementation of cutting-edge technology and rigorous workflows, our multidisciplinary team is equipped to tackle data-related challenges and distil powerful insights in complex legal matters.
Get in touch
