93 University of Colorado Law Review 52 (2022)
Robots—machines, algorithms, artificial intelligence—play an increasingly important role in society, often supplementing or even replacing human judgment. Scholars have rightly become concerned with the fairness, accuracy, and humanity of these systems. Indeed, anxiety about machine bias is at a fever pitch. While these concerns are important, they nearly all run in one direction: we worry about robot bias against humans; we rarely worry about human bias against robots.
This is a mistake. Not because robots deserve, in some deontological sense, to be treated fairly—although that may be true—but because our bias against nonhuman deciders is bad for us. For example, it would be a mistake to reject self-driving cars merely because they cause a single fatal accident. Yet all too often this is what we do. We tolerate enormous risk from our fellow humans but almost none from machines. A substantial literature—almost entirely ignored by legal scholars concerned with algorithmic bias—suggests that we routinely prefer worse-performing humans over better-performing robots. We do this on our roads, in our courthouses, in our military, and in our hospitals. Our bias against robots is costly, and it will only get more so as robots become more capable.
This Article catalogs the many different forms of antirobot bias and suggests some reforms to curtail the harmful effects of that bias. The Article’s descriptive contribution is to develop a taxonomy of robophobia. Its normative contribution is to offer some reasons to be less biased against robots. The stakes could hardly be higher. We are entering an age when one of the most important policy questions will be how and where to deploy machine decision-makers.