Forthcoming in the Arizona State Law Journal
The rise of algorithm-driven decisionmaking enabled by Big Data has generated widespread concern among legal scholars. However, few critics have considered data on people’s existing preferences about the role of algorithms in decision systems. This Article uses empirical analysis of a novel, large dataset of consumer surveys to elucidate those preferences. The surveys explore whether people prefer to have an algorithm or a human determine an outcome affecting their welfare in a range of representative scenarios with varying stakes. The Article examines how preferences change when one type of decisionmaker produces results that are more accurate, faster, cheaper, or that incorporate private personal information. And it analyzes anchoring effects from the initial assignment of a decisionmaker, along with interactions among these variables, to test how malleable views about algorithms are.
The study’s empirical results call the conventional wisdom sharply into question. People often preferred to have an algorithm decide, especially when the mathematical models offered benefits relative to humans. In particular, consumer preferences were highly sensitive to the relative costs or benefits of the two decisionmakers, even more so than to their relative accuracy. The stickiness of default settings demonstrates that preferences are often path-dependent, underscoring the importance of sound policy choices for algorithmic governance. The Article concludes by elaborating the policy implications of its empirical findings. It contends that consumer preferences deserve greater weight in regulatory choices; that transparency efforts should concentrate on the benefits or costs of algorithms to consumers; and that policy should treat high-stakes decisions differently from less weighty ones. And its data-driven findings can help shape reforms that are both effective for and acceptable to consumers.