
AI in restructuring and insolvency: unlocking value and balancing risks
Katie Higgins, Simon Newcomb
Time to read: 7 minutes
AI is increasingly being explored in insolvency and restructuring settings for its potential to improve efficiency, reduce costs, and support creditor outcomes. Here we consider emerging use cases, evolving regulatory frameworks, and key governance considerations for practitioners looking to adopt AI tools while meeting their professional obligations.
Insolvency processes are by definition resource-constrained environments, so artificial intelligence (AI) tools that enhance efficiency and reduce delay and cost (and ultimately improve creditor returns) are naturally a welcome development.
The potential for efficiency gains from AI use in insolvency processes, and in restructurings more generally, is significant. It brings with it the opportunity to address long-standing concerns over the complexity, inefficiency and cost of Australia's insolvency regime – without the need for structural reform.
Given low levels of public trust and confidence in AI in Australia generally, and uncertainties as to how AI technology may ultimately develop, the way in which Australian businesses manage AI risks, and how its use should be regulated, remain critical issues. Highly regulated sectors, such as the insolvency profession – and the legal profession for that matter – will have to meet existing statutory and general law duties, professional obligations and standards of conduct in their use of AI. And those professions will also need to comply with emerging regulations and factor in official guidance in relation to the use of AI in restructuring and insolvency matters.
There are a range of well-known ethical and reliability risks inherent in AI systems. For example, AI systems are known to produce outputs that are sometimes inaccurate, incomplete or misleading. There are also concerns about the potential for bias and for failures to respect privacy. AI systems also have limitations that some may not expect – for example, generative AI systems can be poor at mathematical calculation, an essential capability in the insolvency context. But rather than avoiding AI systems entirely because of their limitations, in many cases the benefits of using them can outweigh the risks – so long as those risks are properly managed.
AI: the regulatory framework so far
There was a flurry of activity on the regulatory front in late 2024. In September 2024, the Australian Government released a proposals paper for mandatory guardrails to be adopted by Australian organisations developing or deploying high-risk AI systems. Alongside that, it released the Voluntary AI Safety Standard, with similar guardrails that can be applied voluntarily in any setting. Around the same time, ASIC released a report on its key findings from a review of how AFS licensees and credit licensees are using AI – and the governance gaps it had identified.
Both the Government's Proposals Paper and ASIC focus on three key areas in the safe and responsible use of AI:
testing – to ensure AI systems perform as intended;
transparency – about product development with end users, and others in the AI supply chain; and
accountability – for governing and managing the risks.
Given the myriad potential AI use cases in an insolvency or restructuring process, these are important principles to bear in mind when determining how AI tools should be deployed in these settings. This is particularly so given the crucial role that an external administrator plays – and the reliance placed on their personal judgment, skill, business intuition and integrity – in a formal insolvency process.
Added to that, there are of course existing (technology neutral) statutory and general law obligations that will apply in relation to the use of AI in an insolvency administration. Voluntary administrators, deed administrators, liquidators and receivers are all officers within the meaning of the Corporations Act and subject to general law and statutory duties of good faith, care and diligence as well as a large number of obligations under specific provisions of the Corporations Act, Regulations and other legislation. These include investigatory and reporting duties. A liquidator is an officer of the Court, through whom the Court itself notionally conducts compulsory liquidations. Voluntary administrators are charged with the responsibility to achieve the ends of Part 5.3A of the Corporations Act – their independence, impartiality, skill and diligence regarded as the "very marrow" of Part 5.3A. Courts, in determining the extent of these responsibilities, have endeavoured to balance the legislative intent that a voluntary administration process is to be swift and practical, against the need for the administrator to present reliable information to creditors on key issues. Nuanced recognition by the courts of the "delicate balance" between speed and accuracy in a voluntary administration will now need to take into account how AI is used by insolvency practitioners in the discharge of their duties.
Potential AI use cases in restructuring and insolvency processes
AI is already being put to work in insolvency and restructuring processes and uptake will only increase. Existing and potential AI use cases include:
automation of manual work (eg. input, collation and analysis of data) and production of reports to creditors and regulators – freeing up insolvency practitioners to focus on interaction and negotiation with stakeholders and strategic analysis and away from time-consuming admin;
deep and speedy analysis of large and complex data sets – including prior director transactions, potential fraud patterns or other malfeasance, or tracing assets which may otherwise be hidden from a human reviewer (or too expensive to bottom out on a cost/benefit analysis);
assessing and modelling in real time multiple alternative restructuring options / DOCA proposals;
assessing (at least at first instance) creditor claims; and
preparing for creditor meetings and assisting with creditor interactions more generally.
There is of course also significant scope for AI tools to be used more broadly in an R&I context – going a long way to simplifying and harmonising our insolvency regime in a way that could sidestep the cost and delay of a complete "root and branch" review. Although we're only scratching the surface here, given how quickly the technology is advancing, potential use cases include:
assisting in data collection and systemic analysis of insolvency processes across the Australian economy – providing practitioners and regulators a better knowledge base to inform decisions and policy and streamline processes more generally (as called for by the Parliamentary Joint Committee in its 2023 Report on Corporate Insolvency);
codifying Australia's insolvency laws (currently spread across a number of Acts, rules, regulations, regulatory guides and court rules – as highlighted by Michael Murray);
modelling outcomes and "BATNA" positions of various creditors and other stakeholders in restructuring negotiations, and simulating negotiations with multiple stakeholders (some commentators pointing to the potential application of general existing AI negotiation tools in a consensual restructuring context);
automating debt collection processes (see for example the recent establishment of the world's "first AI law firm" Garfield.law, which uses an AI-powered litigation assistant to help creditors recover unpaid debts, guiding them through the small claims court up to trial);
providing more in-depth analysis and modelling for directors as to the viability of restructuring outcomes and satisfaction of the "better outcome" test in a safe harbour context;
providing alternative restructuring options for companies which are based on deep data analysis and pattern recognition that humans may miss; and
analysing financial or operational red flags earlier on in the piece in a cost-effective way, enabling directors to take proactive steps and model potential options for their company in real time, when there is still runway available to effect a turnaround.
The need for governance mechanisms and guardrails in insolvency administrations
A critical user of AI in our insolvency regime is the external administrator. The many efficiency gains that can be realised by an administrator using AI must be balanced against robust governance mechanisms and guardrails that are tailored to the particular demands and risks inherent in an insolvency process – most importantly, ensuring appropriate levels of human oversight.
As the Report on Corporate Insolvency notes, upon appointment, the external administrator becomes a custodian of what may be the livelihood of not just directors or employees of the immediate business, but potentially many other people. The tasks of external administrators in an insolvency process are complex and idiosyncratic. Much emphasis (and reliance) is placed on their judgment, business intuition, integrity and ability to personally manage often highly complex situations and disparate stakeholders with competing interests, and corral them where possible to an outcome. External administrators assume control of a company and become personally responsible to the company's creditors and personally liable for the company's actions. Becoming a registered liquidator requires meeting stringent statutory requirements as to professional qualifications, experience and knowledge.
Adding to the mix – external administrators will need to keep records on why they took certain decisions. This runs into the "black box" problem – where decisions based on AI may not be traceable or explainable. That same problem can also create challenges for external administrators' investigations into potential malfeasance by directors or officers.
Both the Proposals Paper and guidance from ASIC and the Governance Institute identify governance issues which are particularly pertinent in an R&I context – especially where AI tools involving automated decision-making processes are deployed. Finding the right balance between reducing cost and delay and maximising creditor returns on the one hand, and on the other hand determining appropriate risk governance (including levels of human oversight) so insolvency practitioners comply with their statutory and general law duties, is critical.
Key considerations for insolvency practitioners in using AI
Key considerations for insolvency practitioners (drawing on key themes identified in the Proposals Paper and by ASIC) include:
Issue 1: High risk use cases
As individuals and groups are foreseeably exposed to significant adverse impacts from the use of AI in the R&I context, it seems likely that some of the potential AI use cases will be categorised as "high-risk" under the new regulations proposed by the Australian Government. That would mean anyone deploying AI for those use cases would have to implement the mandatory guardrails.
In preparation for compliance with that legislation (assuming it is introduced), there is also the opportunity to implement the Voluntary AI Safety Standard when deploying AI systems in R&I. Compliance with the Voluntary Standard will provide a pathway to compliance with the proposed mandatory regulations, given that the guardrails are very similar.
Issue 2: Contestability
Creditors, employees and other stakeholders need to be able to contest the outcome of a decision or recommendation. To the extent that AI tools are used in the decision-making process, there must be a comprehensive audit trail which links the decision-making process with applicable rules and facts and allows for external scrutiny (the courts have also given useful guidance on automated decision-making processes).
Issue 3: Explainability
Decisions made using AI tools are often not as traceable as those made in traditional rule-based systems. AI models can be so complex that the pathways to their decisions cannot be understood. This challenge is heightened where AI models developed by third-party vendors are deployed by practitioners in an insolvency process.
Even though it may not be possible to explain the inner workings of an AI system, it will typically be possible to provide transparency about the AI system and the data it was trained on, and that will indirectly help to explain how the AI system arrives at its outcomes.
Issue 4: Transparency
Creditors, employees and other stakeholders should know when AI has been used in ways that affect them, and should receive sufficient information about the AI system and how the risks associated with it have been managed.
Issue 5: Education, testing and verification
Insolvency practitioners must understand the capabilities and limitations of AI systems and have appropriate skills to manage the risks and intervene where necessary.
Appropriate processes also need to be established for testing, monitoring and human oversight of AI tools and processes, before and during their deployment – including for detecting and responding to unintended results.
As the Proposals Paper makes clear, the purpose of AI is partly to automate certain activities and augment the ability of humans to process information. As a result, real-time human involvement in an AI system may not always be practical, and may even make a system less reliable. Developers will need to design systems so that humans can review operations and outputs and reverse decisions if necessary. This will add further complexity – and further demands on insolvency practitioners in the discharge of their duties – to the "delicate balance" of speed and accuracy that is inherent in many insolvency processes.
Key takeaways
Other insolvency regimes, including in the US, leave control mostly with the company which has filed for insolvency, albeit with higher levels of Court oversight. In contrast, our insolvency processes for the most part take control away from a company upon an insolvency filing, and place it in the hands of an external administrator. Maintaining trust in the efficacy and fairness of our insolvency regime in the face of AI deployment by its practitioners is therefore critical.
The quid pro quo for the removal of control in an Australian insolvency process is the fact that external administrators are subject to stringent professional standards and assume personal responsibility for the management of the company and personal duties to creditors.
The reliance placed on the personal judgment, independence, skill and integrity of external administrators accordingly means that governance arrangements and appropriate guardrails for AI use will need to be carefully navigated by the industry in the near term.
But those caveats should of course not deter practitioners, or the industry more generally, from exploring the obvious benefits AI can bring to bear in achieving the key outcomes our insolvency regime is ultimately directed towards: returning companies where possible to viability or, where that isn't possible, maximising returns to creditors via a fair and transparent process. Each of the use cases identified here (and no doubt many more on the horizon) can contribute in meaningful ways to those outcomes.
Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.