Accountability in the AI workforce
Hilary Searing, Cynthia Elachi
Time to read: 2 minutes
The rapid adoption of generative AI by Australian employers across the employment lifecycle is transforming not only how organisations hire, manage and plan their workforce, but also the legal risk landscape in which they operate.
The Australian Government’s National AI Plan sets an ambitious vision for productivity and inclusion. However, rather than introducing standalone AI legislation, it relies on existing legal frameworks, supplemented by voluntary standards and guidance.
By contrast, the European Union has enacted the world’s first comprehensive AI legislation, categorising systems by risk level and prohibiting those that pose unacceptable risk. Australia has taken a more flexible and less prescriptive path. Even so, employers remain responsible for ensuring their use of AI is fair, transparent and accountable, and for managing the associated regulatory, safety, employee and reputational risks.
Bias, accountability and "black boxes"
AI-driven recruitment tools screen thousands of applications in minutes. However, when automated systems make or materially influence hiring and promotion decisions, employers retain responsibility under anti-discrimination and workplace laws.
Under anti-discrimination laws and the Fair Work Act, employers must ensure candidates and employees are not discriminated against due to a protected attribute. An employer remains legally responsible for discriminatory AI outcomes, even when unintended.
If training data includes historical recruitment practices reflecting structural bias, the algorithm will learn and replicate those patterns at scale. If an AI-driven or AI-assisted decision is challenged, employers may struggle to explain its basis, as many AI tools operate as "black boxes" – systems whose internal decision-making processes are opaque or not readily explainable. This lack of transparency creates legal risk: businesses may be unable to explain or justify decisions, hindering their ability to discharge the burden of proof in response to claims.
Performance management: surveillance, privacy and trust
Employers commonly use AI monitoring tools to track workplace activity. While these tools may enhance productivity and safety in some respects, they also create privacy and surveillance risks; a recent review in Victoria identified workplace surveillance as itself a psychosocial hazard. To manage these risks, employers must comply with the Privacy Act and applicable state and territory workplace surveillance and health and safety laws, ensuring any AI-based monitoring is transparent, based on meaningful consent, undertaken for a legitimate purpose and goes no further than reasonably necessary. They must also manage the significant privacy risks of AI monitoring – which can include the collection of sensitive biometric data – by ensuring lawful collection, secure handling and limited use, as a data breach affecting either the employer or an AI provider can trigger regulatory scrutiny and serious reputational damage.
Workforce planning
From December 2026, employers using AI to make decisions that "significantly affect" individuals will have new disclosure obligations under the Privacy Act. Businesses must disclose in their privacy policies their use of AI in decision-making, including what data is used and whether decisions are fully automated or materially assisted by AI. New South Wales has also recently passed legislation regulating the use of digital work systems (including AI) in a work health and safety context, to ensure that the use of those systems by a business or undertaking does not put workers' health and safety at risk.

AI is now embedded in modern workforce management, and its role will only expand. Australia's regulatory model places the onus squarely on employers to ensure that AI use complies with existing discrimination, privacy, workplace health and safety and employment laws.
Businesses that approach AI deployment with clear governance structures, documented decision-making processes, ongoing system auditing and genuine human oversight will be better positioned to harness its benefits while managing its risks. Transparent communication with employees about how AI is used – and how decisions are made – will also be critical to maintaining trust.
Responsible AI in the workplace is not about resisting innovation; it is about aligning technological capability with legal compliance, ethical standards and organisational values. Employers who strike this balance will not only reduce risk, but strengthen their workforce and reputation in the process.
Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.