Please join the webinar here.
Al and Bob are very similar, with one specific exception. This exception might pertain to their age, annual salary, health, height, education, or sports team preference. Whatever it is, the difference is non-trivial yet not substantial: Al might be two inches taller, two years younger, earn 5% more a year, or smoke five more cigarettes per day. Al and Bob are subjected to a similar algorithmic decision-making process that generates a score, and the process assigns them scores that differ substantially from each other. As smaller firms and governmental agencies join larger and more established ones in incorporating machine learning and other automated processes into their practices, there is reason to believe such scenarios will become common and will require close regulatory scrutiny.
Could a small change in inputs justify a substantial change in outputs? Should regulators and auditors actively seek out these scenarios, examine them with suspicion, counteract them, and perhaps even ban them? And if a data scientist manually, or an algorithm automatically, detects and "smooths out" these types of results, would those corrections introduce problems of their own? Above all, do these situations raise crucial algorithmic fairness concerns that are novel and overlooked, or that are illuminating variations of older ones? Or are such outcomes perhaps perfectly acceptable, so that their correction should be avoided?
After first clearly defining "small" and "big" differences in inputs and outputs, this article articulates, formulates, and analyzes these questions, which take us to the bleeding edge of the study of algorithmic decision-making in the fields of computer science and law. The discussion introduces a novel ex-post method for examining algorithmic fairness and efficiency. At the same time, the article's analysis forces us to reopen discussions of fairness and equality dating all the way back to Aristotle. The article concludes with policy recommendations for situations in which the noted dynamics unfold and their outcomes prove unacceptable.
Tal Zarsky is a professor of law at the University of Haifa Faculty of Law. He was most recently a visiting scholar and adjunct professor at the University of Pennsylvania Carey Law School (2019-2020). His research focuses on legal theory and allocations, as well as information privacy, algorithmic decisions, cybersecurity, telecommunications and media law, internet policy, and online commerce. He has published numerous articles and book chapters in the U.S. on these matters. He was a fellow at the Information Society Project at Yale Law School and a Global Hauser Fellow at New York University (NYU) School of Law, as well as a visiting researcher at the University of Amsterdam and the University of Ottawa. He completed his doctoral dissertation at Columbia Law School. He earned a joint bachelor's degree in law and psychology with high honors at the Hebrew University and his master's degree in law from Columbia University.
To request accommodations for a disability, please contact Jean Butcher, firstname.lastname@example.org, at least one week prior to the event.