Privacy in the Precision Economy: The Rise of AI-Enabled Workplace Surveillance during the Pandemic

June 8, 2021

The COVID-19 pandemic has caused a significant shift in the number of people working from home. Statistics Canada data reveals that by early 2021, “32% of Canadian employees aged 15 to 69 worked most of their hours from home, compared with only 4% in 2016.” The United States Census Bureau likewise indicates that more than one-third of US households reported working from home “more frequently than before the pandemic.” Although driven by the health crisis, this trend is more than a temporary shift. Canadian statistics indicate that about one-quarter of Canadian businesses may have 10 percent or more of their employees continuing to telework after the pandemic is over.

The dramatic increase in online work during the pandemic has fuelled demand for remote surveillance technologies. Employers’ concerns in this context have centred on monitoring the productivity of employees not physically present on company premises. In these cases, technology may be seen to fill a gap previously occupied by more conventional forms of monitoring.

Indeed, employee monitoring is neither new nor particular to the online context. Employees are regularly monitored and evaluated. Depending on the workplace, it is done with greater or lesser intensity — and, until recently, it was done largely by human supervisors. Nevertheless, even before the pandemic, technology companies were developing suites of technological tools to facilitate the monitoring — and performance assessment — of employees. These technologies are becoming part of what is referred to as the “precision economy” — a term that describes “a future workplace of hyper-surveillance and algorithmic optimisation, in which organisations create value by measuring and incentivising virtually every aspect of our working lives.” In fact, new technologies of this kind are one of the ways in which the artificial intelligence (AI) revolution may impact workers. The pandemic and the perceived need to use technology to monitor a remote workforce may simply serve to accelerate the overall uptake.

Workplace surveillance technologies are varied and diverse. Some monitor websites visited by employees, or time spent on websites or in documents. One remote work-monitoring service offers extensive cellphone monitoring, including Global Positioning System (GPS) tracking. A feature common to many services allows an employer to remotely view employees’ computer screens. Other features include viewing incoming and outgoing email, and recording audio. Employers can monitor keystrokes, social media activity and web searches. One service provides always-on video through employees’ computers, allowing their employer and other workers to connect with them with a single click, and keeping their faces visible onscreen all day. All of this data can be recorded and stored. Monitoring software can also come with behaviour analytics and other algorithmic tools to rate or assess employee activity and productivity. Some argue that these tools will be invaluable for increasing productivity in the so-called precision economy.

One key difference between conventional monitoring and remote surveillance is that technological surveillance of employees necessarily intrudes into their private spaces. For remote workers, audio and video surveillance may capture private spaces and conversations, raising the level of intrusiveness. Whether at home or in the workplace, however, remote surveillance technologies are far from analogous to onsite work observation. They are typically continuous and multifactored — going far beyond observation by a human supervisor. Because these technologies are increasingly linked with AI or other analytics tools, they do not simply surveil but also assess performance.

The link between surveillance and algorithmic assessments highlights the fact that although workplace surveillance has significant privacy dimensions, it may also have other human rights impacts. Surveillance technologies may be used to harass or intimidate workers. In addition, their design or deployment may exacerbate existing inequalities. For example, remote surveillance technologies that incorporate AI performance metrics may lead to biased treatment of employees on grounds that may include age, gender or racialization. One analyst noted that inflexible performance metrics generated from surveillance and analytics can also adversely impact employees by emphasizing quantitative rather than qualitative measures of work. AI evaluations may also undermine employee morale by focusing too much on metrics and not enough on individual circumstances.

Privacy law gives considerable leeway to employers who adopt workplace surveillance technologies, so long as the surveillance can be appropriately linked to employment demands. For example, technological surveillance has long been used for safety reasons. Airline pilots have their cockpit conversations recorded. Workplaces with safety issues can use a variety of surveillance techniques — including cameras and random drug or alcohol testing — to ensure the safety of workers or the public. GPS tracking of work vehicles is also possible for safety and fleet-management issues. However, in general, data collected for one purpose, such as safety, cannot be used by the employer for other purposes.

Beyond safety concerns, employers are permitted to supervise and monitor employee performance. This is likely where the biggest challenges will lie in balancing privacy against the use of remote surveillance technologies, particularly as surveillance technologies become more and more tied to performance analytics. The problem is that what technology offers to employers in terms of enhancements to systems, productivity and employee assessment may come at a direct human cost to employees in terms of their dignity, autonomy and well-being. As these technologies begin to proliferate, we need to move quickly to provide adequate guidance and legal protection to ensure that their use is appropriate.

In Canada, a patchwork of laws governs workplace privacy. The Personal Information Protection and Electronic Documents Act (PIPEDA) offers some protection for employees in the federally regulated private sector. Alberta, British Columbia and Quebec have their own data protection laws for their provincial private sectors; private sector employees outside of those provinces or the federally regulated industries are less well protected. Federal and provincial government employees must rely on the public sector privacy laws of their respective jurisdictions. Employees in unionized workforces may find protection under their collective agreements; indeed, the Supreme Court of Canada has acknowledged a well-established body of arbitral jurisprudence that balances management rights with employee privacy rights.

Private sector data protection laws in Alberta, British Columbia and at the federal level are relatively similar in their approach to workplace privacy. As an example, section 7.3 of PIPEDA provides that a federally regulated employer may collect, use or disclose personal information without an employee’s consent where it is necessary to “establish, manage or terminate an employment relationship” and when the individual has been informed that the information “will be or may be collected, used or disclosed for those purposes.” Nevertheless, according to section 5(3), any data collection must be “only for purposes that a reasonable person would consider are appropriate in the circumstances.” This has been an important limiting factor.

In considering whether surveillance technologies can be deployed in the workplace, the Privacy Commissioner and the courts have used section 5(3) and its provincial equivalents to balance the employer’s stated needs for the surveillance with employee privacy. In Eastmond v. Canadian Pacific Railway, the Federal Court found that video surveillance cameras installed within the workplace for safety reasons were acceptable, but that their footage could not be used for other purposes such as monitoring productivity. This ruling does not mean that monitoring productivity is not a possible goal with remote surveillance; it simply means that if a technology is deployed for this purpose, this specific use must be justifiable. More recently, an investigation report from the Office of the Information and Privacy Commissioner for British Columbia (OIPC) in relation to a public sector workplace found that technological monitoring that included screenshots, keystroke logging and tracking of online activity, although introduced to ensure the security of the employer’s information technology (IT) systems, amounted to an excessive and unauthorized collection of personal information. The OIPC has since provided guidance to private sector organizations on implementing workplace surveillance.

Interestingly, guidance on the issue of remote work during the pandemic from privacy commissioners in Canada has focused on the protection of the personal information of the employer’s clients. Although this is a legitimate concern, guidance is also needed to address the privacy challenges that may confront a growing number of remote workers. Even within worksites, digital surveillance technologies are likely to be used in new ways. Existing guidance focuses on issues such as the use of technology to ensure employee safety or the security of IT systems. There is little guidance on what is acceptable for the monitoring of employees working from home. Similarly, there is no guidance on what is or is not acceptable when digital surveillance technologies are also used for performance metrics at home or onsite. This is yet another area where privacy law intersects with AI governance concerns — and our legal frameworks may not be as ready as they should be to address the new challenges.

In particular, we must find ways to integrate an analysis of the privacy dimensions of these technologies with their human rights implications, especially where observation and assessment are increasingly integrated. We need to do all of these things, and very quickly, too, as the era of remote surveillance and AI-driven performance metrics has clearly arrived.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Teresa Scassa is a CIGI senior fellow. She is also the Canada Research Chair in Information Law and Policy and a full professor at the University of Ottawa’s Law Faculty, where her groundbreaking research explores issues of data ownership and control.