Volume: 49
Issue: 4

Technology and Inequality, Surveillance, and Privacy during COVID-19

Denise Anthony, Professor, Sociology and Health Management and Policy, University of Michigan

Since the start of the pandemic, we have spent more time in our homes than we ever expected to—working from home (if fortunate to have that possibility); learning from home; being entertained with streaming services; and doing virtual happy hours, birthdays, and holidays. Now we even have doctor’s visits and consultations with our therapists from home.

The increasing availability of so-called Internet of Things (IoT) technology (i.e., internet-enabled devices such as smart TVs, smart speakers, video doorbells, and voice-activated virtual assistants like Amazon Alexa or Google Assistant), along with our smartphones, computers, and Wi-Fi internet access, has comforted us, entertained us, facilitated work and learning, and safeguarded us at home during the pandemic. Estimates suggest that roughly three-quarters of American adults have broadband internet service at home and about 69 percent of people in the U.S. have an IoT device/system in their home.

But while these computing and “smart” technologies were facilitating our interaction, work, school, and health care, they were also becoming embedded into our social worlds in ways that have important sociological implications. The distinct power dynamics, social conditions, and intimate situations of the home create implications for inequality, privacy, and surveillance in the smart home.

Like so much else during COVID-19 (Henderson et al. 2020; Perry et al. 2021), these virtual activities in the home have revealed much about inequality in our society, including the gulf between technology haves and have-nots (Campos-Castillo 2014; Hargittai 2002; Puckett 2020).

 

Working from Home

It is important to remember that homes were already sites of work even before the pandemic. The impact of the pandemic on domestic workers, for example—who historically have had few protections as employees (Maich 2020)—as well as on many of the elderly and people with disabilities who depend on them, has been devastating.

Those lucky enough to be able to work from home after the start of the pandemic often needed significant resources to do so—computers, high-bandwidth internet access, and cameras (to attend virtual meetings), not to mention a quiet room in which to work. But who is paying for all of that? Some companies made headlines by helping workers equip home offices, but many workers simply had to absorb those costs.

What workers probably didn’t know they were also getting in the bargain was the potential for their employer to monitor them in their home. Technological surveillance of workers is as old as the industrial revolution, and modern tracking of workers (see also Attewell 1987; Lyon 1994)—via embedded cameras (now sometimes with facial recognition software), location tracking, electronic monitoring, and even keystroke logging—is increasing.

The relative abilities of workers to manage privacy and resist surveillance are unevenly distributed, with particularly negative consequences for individuals and communities with low status and few resources. The potential that work-from-home may be extended for some workers post-pandemic, coupled with the increasing presence of home IoT, fuels the capacity for surveillance in the home. But surveillance is not merely about technology. It not only amplifies social inequalities (Benjamin 2019; Brayne 2020; Browne 2015; Eubanks 2018) but also has long-term implications for the organization and operation of power in society (Zuboff 2019).

 

School from Home

Space constraints and required computing resources have been especially relevant for families grappling with virtual education during the pandemic (Calarco 2020; Puckett and Rafalow 2020). The long-term implications of virtual schooling will need to be studied for years to come, but some of the devastating harms, including from the invasive surveillance that technology and government enabled, are already clear, particularly for the most vulnerable students. It is important to recognize that examples of surveillance—like the student incarcerated for not completing her online schoolwork—illustrate that it is not technology alone that produces surveillance. It is technology used in specific ways by specific actors (in this case, teachers, school systems, governments) that produces surveillance (Lyon 2007, 2011, 2018).

 

Health Care from Home

In the initial period of the pandemic during spring 2020 when much of the world shut down, health care—other than ERs and ICUs treating severely ill and dying COVID patients—nearly ground to a halt as both providers and patients sought to avoid in-person contact. But chronic conditions still needed monitoring and other illness and injuries still happened.

Telehealth visits filled the void for some (Cantor et al. 2021). People who already have broadband, home Wi-Fi, necessary devices (smartphones, tablets, or laptops), and experience using technologies like online patient portals can more easily engage in telehealth than those without them (Campos-Castillo and Anthony 2021; Reed et al. 2020). However, populations with chronic health needs are generally lower resourced (Phelan et al. 2010) and disproportionately people of color (Williams et al. 2019), yet lower-resourced patients and some minority racial and ethnic groups are less likely to be offered technologies like patient portals (Anthony et al. 2018).

IoT further increases the potential for health tracking in the home. We track our own health using smart watches and other wearables, like internet-enabled glucometers for diabetics. And now, smart sensors can be installed in the home to detect falls, and virtual assistants keep elders company while also enabling distant family members to check in. The socio-technical landscape of these smart homes creates potential benefits for health but also raises privacy risks. Privacy concerns can influence whether people seek care at all, or disclose information to doctors if they do, with potential consequences for relationships with providers and for family dynamics as well.

But privacy management has become increasingly complex for individuals. In part, this is because privacy management is mediated by technology and the companies that control the technology (and data) in ways that are often invisible, confusing, or uncontrollable. While I can decide whether to wear a fitness tracker or use a virtual assistant, I have very little ability to decide what data about me flows to the company or how the company uses it. But importantly, these data used to evaluate, engage, or exclude me are not exclusively about me. These kinds of data also implicate others—those who live with or near me, are connected to me, or possibly just “like” me in some categorical way decided by the company (and their algorithms). And those others can then also be evaluated, engaged, or excluded based on the data. Thus, privacy has important social implications for surveillance and social control, and also for group boundaries, inequality, cohesion, and collective action.

 

Societal Implications

The sociological impact and implications of increased technology use in the home extend far beyond these important aspects of inequality, surveillance, and privacy. Such first-order effects are important to understand in order to develop interventions and policies to ameliorate them. But sociologists studying technology also consider what Claude Fischer described as the second-order effects—the ways that social relations, institutions, and systems intersect with technology in ways that change the culture, organization, and structure of society. For example, work-from-home is likely to have long-term consequences for employment relations, workplace cultures, and the structures of occupations and professions. Distance-based learning, which was well underway prior to the pandemic, may expand learning and access beyond the constraints of physical schools, but it may also alter the training and practice of teachers, as well as the political dynamics of public school districts. Technologies in health care can enhance or limit access, improve or harm health, and reduce or exacerbate health disparities, but they also alter the doctor-patient relationship, the practice of medicine, and the delivery of health care.

This does not mean that technology drives social change. That kind of simplistic technological determinism has long been critiqued by sociologists. Rather, technology offers an entry point to observe the sociological forces that shape, for example, the production and distribution of new technologies. Think of the political economy of surveillance capitalism, so carefully detailed by Shoshana Zuboff; the institutional and professional practices of those developing technology; and the economic, regulatory, and organizational dynamics driving adoption of new devices and systems. Sociologists also study how social conditions—dynamics of social interaction, existing social norms and status structures, and systemic inequalities—shape how technologies are used, in ways both expected and unexpected. It is these dynamics and sociological forces that drive the social changes we associate with, and sometimes mistakenly attribute to, the technologies themselves.

The ongoing threat of COVID-19 may keep us in our homes a while longer, relying on existing and new technologies for work, school, health care, and more. Sociological research can help to make sense of the drivers and current impact of new technologies that have become widespread. Sociology is also necessary for understanding the deeper and long-term consequences and social shifts that have only just begun.


Any opinions expressed in the articles in this publication are those of the author and not the American Sociological Association.
