The Secret Agent: AI behind the scenes in commerce
Simon Newcomb, Kirsten Webb
Time to read: 2 minutes
AI agents may be the "secret agents" of the current day, but the consequences of their actions are certainly not invisible
ChatGPT's agent can autonomously search for and book restaurants for consumers, while Moltbook, the new Reddit-like social media site, allows bots to interact while humans can only observe. They're not the only AI agents in use. AI agents can research, compare, and recommend products to consumers, eliminating the need for consumers to visit a business's website. While most consumers have not yet delegated entire purchasing decisions to AI, the shift towards A2A commerce is definitely underway.
This type of agent-to-agent (A2A) commerce, in which autonomous AI agents perform tasks without human intervention, is not just changing the consumer buying journey and reshaping the retail, hospitality and financial services sectors. It is creating commercial and legal challenges for businesses that extend beyond consumer interactions and into the B2B space.
Commercially, businesses must adapt to A2A commerce to stay relevant, but they face a dilemma. AI agents can increase a business's reach and streamline transactions. At the same time, however, their use can reduce profit margins and a business's influence over consumer decisions. Some businesses are already resisting the shift towards A2A commerce by shutting out consumer AI agents.
Legally, A2A commerce poses significant questions and risks, because AI agents can behave in unpredictable, unintended, and possibly unlawful ways as a result of programming errors, third-party interference or manipulation by malicious actors. For example, an AI agent could mislead customers, or collude and engage in anticompetitive conduct, which is often more difficult to detect and attracts substantial penalties. And despite their autonomy, AI agents have no distinct legal personality, meaning their actions must ultimately be attributable to a person.
These are not hypothetical risks. Air Canada was ordered to pay damages after its AI chatbot misled a customer. The Tribunal held that chatbots are not separate legal entities and that Air Canada was responsible for the chatbot's misconduct. Automated systems, operating as intended and within their programmed parameters, can and have formed contracts contrary to the relevant business's intentions and caused financial loss – and courts globally have upheld those contracts.
What remains unknown is whether these principles apply to autonomous AI agents and A2A commerce. Certainly, greater regulatory focus is required in this area.
In this A2A era, businesses should carefully consider:
consumer preferences and attitudes towards AI across various cultures, demographics, generations and product types in deciding whether to adopt agentic AI;
rethinking how they present, structure and share product information, and fine-tuning and optimising their branding and value proposition for an audience that may not be human;
understanding how consumers prompt AI agents;
establishing clear and responsible AI practices to address consumer concerns on ethical AI use, transparency and trust;
implementing safeguards to prevent unintended actions, alongside broader monitoring and legal compliance measures; and
shaping and leveraging early alliances with AI agents.
AI agents may be the "secret agents" of the current day, but the consequences of their actions are certainly not invisible. Businesses must learn to manage their brands in the agentic AI era, or risk falling behind.
Disclaimer
Clayton Utz communications are intended to provide commentary and general information. They should not be relied upon as legal advice. Formal legal advice should be sought in particular transactions or on matters of interest arising from this communication. Persons listed may not be admitted in all States and Territories.