Volume 49, Issue 4

Work and Transformational Technologies from Not-So-Artificial Intelligence

Sharla Alegria, Assistant Professor of Sociology, University of Toronto

For many of us, the pandemic deepened our reliance on technology, integrating it ever more thoroughly into our lives. As physical distancing became imperative, the pandemic accelerated acceptance of computer-based technologies that mimic human intelligence, from the algorithms connecting restaurants, diners, and delivery drivers (and, in some markets, delivery robots) to the virtual and blurred backgrounds that keep untidy rooms off the Zoom screen. While these technologies transform everyday life in sometimes exciting ways, they also mask structural inequalities, the human labor that supports the computer systems, and the biases coded into software. They obscure a “race to the bottom” in overall labor costs behind whiz-bang algorithmic black boxes, while doing little to address the inequalities in design and decision-making that shape equity and usability.

During the pandemic, the streets of Toronto, Canada, where I live, became host to pink delivery robots, each named Geoffrey, that would look right at home alongside the futuristic cartoon Jetson family. These robots deliver take-out orders within Toronto’s downtown core. They are adorable, prefer they/them pronouns, and helpfully remind us all to “#SupportLocal” with a printed message on their side. They also happen to be piloted by remote operators who may be located overseas, and they connect with restaurants and diners through Uber Eats, a multinational company known for systematically evading regulation and for using the gig-work model to mainstream large-scale denial of basic worker protections. In addition to the operators who drive the robots, the operation plan includes an on-the-ground crew to provide maintenance and rescue deliveries that go awry, as well as phone operators to tell restaurants and diners when, where, and how to interact with the robots. While Geoffrey rolls down the sidewalk, at least three different human support workers make the trip appear to be an autonomous artificial intelligence marvel.

Hidden Workers

As promising as new technologies like Geoffrey may be, they are often not as polished or sophisticated as they seem. These technologies need human help to work as expected, and they are often systematically less effective for already marginalized populations. Consequently, the technologies and the labor conditions that support them deepen existing inequalities. Geoffrey and their invisible human helpers represent what Mary L. Gray and Siddharth Suri call “ghost work”: the hidden, and often isolated and poorly paid, human labor necessary to support AI systems.

In a less cute example of ghost work, social media platforms, under pressure to better address mis/disinformation and violent, hateful, and abusive content, rely on human content moderators. Algorithms detect some banned content, but human moderators still need to review it. Entry-level content moderators in the U.S. typically earn $15 per hour working for third-party firms that contract with social media companies. The stress and trauma of the job are so severe that Facebook recently agreed to a $52 million settlement to compensate moderators for the mental health consequences of the work. While the settlement may count as a win for U.S.-based moderators, the same contractors have been moving operations to the Philippines, where the content is just as traumatic but both the compensation and the likelihood that authorities will recognize the mental health costs are lower.

AI systems like the Geoffrey delivery robots rely on intelligence that is not at all artificial, hiding both the human workers and the global economic inequalities that make it a viable business model for three workers and a robot to make a delivery typically handled by a single worker. Rather than eliminating jobs, these systems rely on a combination of technical and managerial innovations to transform and relocate jobs. The story, however, is as much about bifurcation as about deteriorating working conditions. Technology accelerates managerial innovations that externalize costs onto non-employee independent contractors and exploit global inequalities to relocate jobs to countries with educated populations but low wages. Meanwhile, companies have improved pay, benefits, autonomy, and working conditions for core technical and managerial staff.

News and media sources highlight stories about automation replacing workers and fan these fears with eye-catching headlines like “Are Robots Coming for Our Jobs?” and “The Robots Are Coming for Phil in Accounting.” While some workers may be displaced, as sociologist Benjamin Shestakofsky shows, automated and AI systems in workplaces are good at some tasks but bad at others, often needing help from human workers. At other times, the robots are simply tools that humans use. So far, these systems appear most transformative when they are used as tools to manage other humans.

Diversity Deficits

It is difficult to argue that delivery driving was a stable, well-paid, protected job before delivery platform companies transformed drivers into independent-contract gig workers. Driving for DoorDash, which now holds more than 50 percent of the U.S. delivery market, may not be much worse than delivering pizzas for restaurants that have long expected drivers to use their own cars, work part-time, and rely on tips to subsidize low wages. The difference is that, while more jobs come with precarious conditions, the stable salaried jobs in the middle are hollowing out. Researchers Cynthia J. Cranford, Leah F. Vosko, and Nancy Zukewich call this process the “feminization of employment standards”: the conditions of more and more jobs come to match those of jobs previously reserved for women who, employers assumed, did not need to earn a living wage, never mind a wage that would support a family.

Of course, these stable salaried jobs in the middle only existed for a privileged group in the post-WWII era, but they were an important, and now disappearing, engine of mobility into the middle class for families who were not excluded because of race. As Tressie McMillan Cottom suggests in her book Lower Ed: The Troubling Rise of For-Profit Colleges in the New Economy (The New Press, 2017), racial/ethnic minority workers, recent immigrants, and especially women of color are particularly likely to find that access to jobs with good pay, worker protections, predictable schedules, opportunities for advancement, and employment-based social safety nets is severely limited.

That tech companies are now notorious for their lack of gender and racial/ethnic diversity exacerbates inequalities, both emergent and long-standing. Beyond the unequal consequences of technology-driven managerial innovations, the lack of diversity in tech represents two distinct and pressing social justice problems. First, as detailed above, core employees in tech companies typically enjoy stable, in-demand, well-paid work, and the underrepresentation of historically marginalized groups suggests unequal access to these good jobs. Second, the lack of diversity on technical teams has implications for the usability of the products those teams produce.

Algorithmic Bias

While the social justice imperative of equal access is worthy in its own right, the lack of diversity in tech becomes even more serious when we consider the failures of usability and the bias that emerge in algorithmic and AI black boxes developed by teams lacking gender and racial diversity. Some of these examples are now famous, such as the flaw in autonomous vehicle technology, highlighted by a team of researchers at Georgia Tech, that leaves self-driving cars more likely to strike darker-skinned pedestrians. Criminal risk assessment algorithms, used by courts to help set bail and determine sentencing, rate Black offenders as higher risk, and thus deserving of harsher punishment, than white offenders. We also know that facial recognition software, often used by police departments, is significantly less reliable at identifying feminine and darker-skinned faces. In a systematic test of commercially available facial recognition systems, computer scientists Joy Buolamwini and Timnit Gebru found that the maximum error rate for the most misclassified group, darker-skinned women, was 34.7 percent, compared to just 0.8 percent for the least misclassified group, lighter-skinned men. These error rates mean that even simple tasks unrelated to the criminal justice system, such as unlocking a phone, may take longer, require more effort, or provide less security for darker-skinned users, especially darker-skinned women.
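
The audit logic behind findings like Buolamwini and Gebru’s rests on a simple idea: report error rates separately for each demographic group rather than in aggregate. The Python sketch below illustrates that idea only; it is not code from their study, and the group labels, records, and numbers are invented for illustration.

```python
# A minimal sketch of a disaggregated error-rate audit. The data, group
# labels, and function are hypothetical; this is not the study's code.
from collections import defaultdict

def error_rates_by_group(records):
    """Return the misclassification rate for each demographic group.

    `records` is an iterable of (group, predicted_label, true_label) tuples.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy audit records: a healthy-looking aggregate accuracy can hide a
# large gap between the best- and worst-served groups.
audit = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),   # misclassified
    ("darker-skinned women", "female", "female"),
]

overall = sum(p != a for _, p, a in audit) / len(audit)
print(f"aggregate error rate: {overall:.1%}")      # 25.0%
for group, rate in error_rates_by_group(audit).items():
    print(f"{group}: {rate:.1%}")                  # 0.0% vs. 50.0%
```

On the toy data, the aggregate error rate looks modest while the disaggregated rates expose the gap, which is precisely why reporting performance by group matters.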

In a more pandemic-specific example, anthropologist Amy Moran-Thomas demonstrates a tendency toward racially biased errors in pulse oximeters, the relatively inexpensive medical devices that clip onto a person’s finger and use light passing through the skin to measure blood oxygen saturation. These widely available devices provide a key metric in determining whether those sickened by COVID-19 need hospital care and whether patients need oxygen therapy. Decades of testing on mostly light-toned fingers have produced devices that are systematically more likely to report higher-than-actual oxygen saturation for darker-skinned patients, delaying oxygen therapy, if it is provided at all.

AI systems can exhibit algorithmic bias for a variety of reasons. In the case of criminal risk assessment algorithms, the machine-learning model is trained on a corpus of historical data that captures, and then reproduces, a history of inequality within a proprietary technical black box. Systems that produce racial harm in this way are what scholar Safiya Umoja Noble terms “algorithms of oppression.” Because they did not require active bias to produce systematic racial harm, these systems can appear race-neutral, but a combination of uncritical assumptions about the neutrality of data and profit-driven secrecy around proprietary models prevents transparency about those assumptions. In the case of sensor-based technologies, including autonomous vehicles and pulse oximeters, more testers with darker skin tones would likely result in more effective products. Scholar Ruha Benjamin calls the tendency of seemingly color-blind technology to exclude and oppress people of color “the New Jim Code,” referencing the history of Jim Crow segregation laws in the U.S. South.
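
To see how a model can be race-neutral on its face and still reproduce historical inequality, consider a deliberately tiny sketch. Everything here is invented: the “model” simply memorizes the most common historical risk label for each neighborhood, a stand-in for how real systems pick up proxies for race, such as neighborhood or arrest history, without any explicitly race-aware rule.

```python
# A toy illustration, with entirely invented data, of how training on
# historical records reproduces the bias baked into those records.
from collections import Counter

# Hypothetical history: "high risk" labels reflect decades of heavier
# policing in neighborhood A, not differences in underlying behavior.
history = [
    ("neighborhood_A", "high"), ("neighborhood_A", "high"),
    ("neighborhood_A", "high"), ("neighborhood_A", "low"),
    ("neighborhood_B", "low"),  ("neighborhood_B", "low"),
    ("neighborhood_B", "high"), ("neighborhood_B", "low"),
]

def train(records):
    """'Learn' the most common historical label for each neighborhood."""
    counts = {}
    for place, label in records:
        counts.setdefault(place, Counter())[label] += 1
    return {place: c.most_common(1)[0][0] for place, c in counts.items()}

model = train(history)
# The model never sees race, yet it hard-codes the historical pattern:
# anyone from the over-policed neighborhood is scored "high risk".
print(model)  # {'neighborhood_A': 'high', 'neighborhood_B': 'low'}
```

Because the learned rule lives inside the model (here, a dictionary), an outside observer sees only a neighborhood in and a score out, which is how proprietary secrecy compounds the problem.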

Including more Black engineers on development teams may not prevent machine-learning algorithms from being trained on data that has captured historical inequalities of contact with the criminal justice system. However, it is hard to imagine that facial recognition system error rates would continue to be orders of magnitude higher for darker-skinned women if the development teams designing and building the systems both represented and integrated women of color. More diverse design and development teams could significantly reduce the algorithmic bias currently haunting many AI systems.

The Harder Path toward a More Equitable Future

The emerging technologies that seem poised to transform our lives and work are generally more effective at implementing managerial innovations than at wholly replacing human labor with computer systems. As the pink robots rolling down Toronto’s streets demonstrate, technology is rarely as autonomous as the futuristic fiction it emulates. It may, however, be very good at hiding the gendered, racialized, and global inequalities that allow platform companies like Uber Eats to control market share while minimizing worker pay and, bewilderingly, without even turning a profit. Meanwhile, many AI systems make it seem as if the science-fiction future is already here, while dragging histories of inequality into the present through uncritically used historical data and racially skewed product testing, planted firmly inside seemingly color-blind algorithmic black boxes.

I find hope for a more socially just technological future in the vitality of the conversation about social justice in technology. Speculative fiction writer and MacArthur Fellow N.K. Jemisin’s short story “The Ones Who Stay and Fight” describes the radically diverse and loving utopian city of Um-Helat as it observes a day of celebration and reflection. Jemisin’s story is a response to Ursula K. Le Guin’s “The Ones Who Walk Away from Omelas,” in which the city of Omelas is a splendid utopia but can only remain so as long as one solitary child lives alone in abject misery in a basement. While most residents rationalize the child’s suffering, some, either upon first learning about the child or after years of grappling with the knowledge, simply walk away, leaving the child in misery and the utopia intact. In Jemisin’s version, those who cannot make peace with the knowledge that a single child must suffer to sustain their utopia refuse to walk away and instead fight to win a different kind of utopia: one that is aware of its history and fully accessible to all who are willing and able to respect and fight for its principles.

It is not clear how a social collective transitions from Omelas, which rationalizes suffering to sustain happiness, to Um-Helat, which refuses to tolerate inequality or discrimination, but I hope the work of building more equitable technological futures brings us closer to that transition.


Any opinions expressed in the articles in this publication are those of the author and not the American Sociological Association.
