The use of surveillance technologies such as closed-circuit television (CCTV) is now commonplace, and newer surveillance technologies, such as facial recognition technology, are increasingly being used. These technologies inherently rely on the collection and storage of a large pool of personal information. Although the innovative capabilities they offer are convenient and can improve the security and efficiency of properties, as well as the occupant experience, their use is not without risk. In particular, building owners and users must ensure that all use of surveillance technologies is transparent, privacy-conscious and legally compliant.
Why are surveillance technologies used?
In the context of an office or retail space, for example, surveillance technologies might be used to:
- enhance access controls throughout the property, replacing more traditional security methods (for example, removing the need for concierge or security personnel to monitor access gates on the ground floor of a building);
- monitor and surveil people, including to identify and prevent security risks and potential criminal activity;
- monitor foot traffic to understand and analyse how spaces are being utilised. For example, the technology might be used to assist in hot desk planning or room reservation management for visitors;
- detect and oversee hazardous areas within buildings; and
- provide occupants with real-time data on the use of spaces within a building (such as meeting rooms and end-of-trip facilities).
Privacy and other legal considerations
The collection of visual images and data through surveillance technologies will constitute personal information as defined under the Privacy Act 1988 (Cth) (Privacy Act) to the extent that it includes information about an "identified individual", or an individual who is reasonably identifiable (for example, an individual’s facial features and other identifying information, such as tattoos).
Relevantly, the use of surveillance technologies may involve the collection of certain “sensitive information” as defined under section 6 of the Privacy Act, including health information, biometric templates and biometric information that is to be used for the purpose of automated biometric verification or biometric identification. Sensitive information is afforded a higher level of protection under the Privacy Act and the associated Australian Privacy Principles (APPs).
Organisations that are classed as “APP entities” under the Privacy Act and that handle personal information, including sensitive information collected through facial recognition and other surveillance technologies, must:
- only collect this information through lawful and fair means;
- notify the individual about the collection of the personal information in accordance with APP 5; and
- ensure that all collection, uses and disclosures of personal information are made in accordance with all other APPs.
Notably, under the Privacy Act, APP entities must not collect sensitive information about an individual unless the individual consents to the collection and the collection meets the other conditions under APP 3. Certain limited exceptions apply, for example, where the collection of the sensitive information is required or authorised by or under an Australian law.
It is important to note that the use of CCTV and facial recognition technology may also be subject to other laws, including individual State and Territory privacy and surveillance device legislation and, depending on the jurisdiction, specific workplace surveillance laws, such as the Workplace Surveillance Act 2005 (NSW). This Act regulates the surveillance of employees within a workplace and mandates that surveillance of an employee must not commence without prior notice in writing to the employee. Special and additional requirements apply to particular types of workplace surveillance, including camera surveillance.
Being aware of bias and other errors
No technology is completely fail-proof. In particular, concerns have been raised that facial recognition technology can fail because of algorithmic or user errors, and that these errors may disproportionately affect people on the basis of race or gender. It is therefore important that all use and deployment of surveillance technologies is subject to ongoing testing and monitoring and is compliant with anti-discrimination laws. All use of the technology should be properly tested and verified by a human; the technology should be an aid, not the single source of truth.
Keeping data safe
Surveillance technologies collect vast amounts of data, which may be very valuable to a hacker or other malicious actor. It is therefore vital that users protect their data and carefully assess and manage the cybersecurity and other security risks associated with collecting it. In addition, the data from surveillance technologies must not be used for an unlawful or illegitimate purpose, such as to track a current or former partner.
The road ahead
As newer technologies such as facial recognition become more prevalent and the subject of extensive media reporting, we are likely to see more targeted oversight and regulation of the area.
Major retailers, including Kmart Australia and Bunnings, have already been the subject of extensive media coverage and concerns from consumer group CHOICE about their use of facial recognition technology. Facial recognition technology was allegedly being used by Kmart Australia for the prevention of fraud and criminal activities, and by Bunnings to help identify persons who had previously been involved in incidents of concern within their stores. Reportedly, both Kmart Australia and Bunnings have now ceased their use of facial recognition technology.
The Office of the Australian Information Commissioner (OAIC) is currently undertaking an investigation in respect of the use of facial recognition technology by these entities. This follows its 2021 determination that 7-Eleven was collecting sensitive biometric information (through facial imaging while surveying customers about their in-store experience), and that this interfered with consumers’ privacy as it was not reasonably necessary for 7-Eleven’s functions and was conducted without sufficient notice as required by the APPs. The Commissioner found that facial images and faceprints were sensitive information under the Privacy Act as they were biometric information that was used for the purposes of automated biometric identification and the faceprints were biometric templates.
The University of Technology Sydney Human Technology Institute’s recently published report, Facial Recognition Technology: Towards a Model Law, proposes a risk-based approach to the use and deployment of facial recognition technology. If enacted, the model law would require developers and deployers of facial recognition technology to evaluate human rights vulnerabilities, both individually and in combination, to identify the overall risk level for the particular facial recognition technology application. This reaches beyond the general privacy considerations under existing law and compels users to think more broadly about the use and application of facial recognition technology, including factors such as where the technology is deployed, whether each application performs reliably, and whether affected individuals have had the opportunity and capacity to give free and informed consent to facial data collection. The outcome of this assessment will determine the risk rating for the relevant facial recognition application and will dictate whether the technology can be deployed and, if so, the level of restrictions applied to its use. High-risk applications would be prohibited under the model law unless special circumstances existed, such as limited law enforcement uses.