
AI implementation 02: your execution checklist

Executing an AI implementation involves many considerations, but among the most critical are security, data quality, robust assurance processes, traceability and cost management. Getting these right delivers a scalable, value-driven and contextually appropriate solution.
Navigating the crowded AI vendor landscape is challenging, and implementing AI in your business adds another layer of complexity. This article, part of a series, highlights key considerations to keep in mind; the next instalment will explore a case study on the tactical implementation of an AI solution.
We've already explored how to approach different perspectives when adopting AI technology. But what comes next? Beyond the initial excitement, successful implementation may require integration with your existing systems, appropriate security measures tailored to your needs, and a risk-managed approach.
Which elements are most important to get right?
Security – a cornerstone of any AI implementation, security must be considered upfront before progressing too far into system functionality. Three particular items of focus are:
Model Type: Generally, you will want a solution built on a "Closed Model". Unlike an "Open Model" such as the consumer version of ChatGPT, "Closed Models" generally do not use prompts and results to train the underlying model, and do not retain your data. This manages the risk of unauthorised disclosure of confidential or proprietary information.
Data Sovereignty: Ensure the AI system complies with local and international data sovereignty laws. For example, particular data may need to remain within Australian borders.
Updates: Understand at what frequency, and through what methods, your underlying Large Language Model (LLM) is trained and updated. Ensuring these underlying sources are secure and trustworthy, or otherwise subject to sufficient controls, is an important component of the implementation's security posture.
Quality In, Quality Out – the adage quality in, quality out rings true for AI. However, the additional layer now to consider is quality training. Take the time to understand the strengths and limitations of the model your solution is built on. For example, if your industry uses frequent technical jargon or terms of art, ensure the solution design is equipped to handle this. You may need to look at models that have invested in industry-specific pre-training to achieve optimal results. Without this, the AI may struggle to deliver accurate or meaningful outputs.
Quality Assurance – where decisions are made based on an AI-augmented process, robust quality assurance is critical. Statistical measures, such as precision and recall, play a central role in validation. For example, in AI-supported document review, comparing a sample of AI-generated results against a human's blind review using objective measures can provide the necessary assurance. This process not only helps validate the AI's performance but also builds confidence among business leaders that the outcomes are reliable and suitable for decision-making. Regular testing and validation should be embedded into your AI implementation strategy to maintain ongoing quality assurance.
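As an illustrative sketch of the validation step described above, precision and recall can be computed by comparing AI classifications against a blind human review of the same sample (the document labels below are hypothetical, and the human review is treated as ground truth):

```python
# Hypothetical sample: 1 = relevant, 0 = not relevant, for the same ten documents.
human = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]  # blind human review (ground truth)
ai    = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]  # AI-generated classifications

tp = sum(1 for h, a in zip(human, ai) if h == 1 and a == 1)  # true positives
fp = sum(1 for h, a in zip(human, ai) if h == 0 and a == 1)  # false positives
fn = sum(1 for h, a in zip(human, ai) if h == 1 and a == 0)  # false negatives

# Precision: of the documents the AI flagged as relevant, how many truly were?
precision = tp / (tp + fp)
# Recall: of the truly relevant documents, how many did the AI find?
recall = tp / (tp + fn)

print(f"precision={precision:.2f} recall={recall:.2f}")  # → precision=0.83 recall=0.83
```

In practice the sample would be drawn at random and sized to give statistically meaningful results, and acceptable thresholds for each measure would be agreed with stakeholders before the review begins.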
Tracking is key – the ability to trace work products and decisions is fundamental to a defensible and robust implementation. This is especially important as some jurisdictions are beginning to require disclosure of whether and how AI has been used, especially in heavily regulated industries like legal services. Consider implementing methods to monitor and document where AI has been involved in the development of work products. This could include maintaining logs of AI interactions or tagging outputs generated by AI systems. Coupling these measures with clear and visible internal policies on AI use will help ensure transparency and accountability.
Cost – AI implementation can be unnecessarily costly if mismanaged. Consider the scalability and flexibility of the solution. Will it grow with your business, or will its rigidity cause additional costs as your needs evolve? Additionally, assess whether an initial investment or short-term loss is acceptable, given the potential for a future return on investment. Similarly, does the current environment require a tactical implementation, where the technology is managed or run externally, limiting long-term cost exposure?
In the third and final instalment in this series, we will explore a case study on the tactical implementation of an AI solution to support a review workflow, developed by Clayton Utz on an active matter. This practical walkthrough will demonstrate how the above considerations were prioritised to ensure a successful outcome.
Get in touch
