Introducing Complexity Theory: a series on ethics and technology

Opinion by Adrian Liu
Oct. 25, 2019, 1:53 a.m.

Part of “Complexity Theory,” a new column on the tangled questions of our technological age.

By some accounts, the western world is experiencing a backlash against the very technologies that have profoundly shaped it over the last three decades. Some scholars, policymakers and journalists are now offering a sobering outlook on the consequences of immense technological power and influence, one that counters the straightforwardly optimistic, even utopian, views of what technological progress can achieve.

Stanford, the university most responsible for the preeminence of Silicon Valley in today’s technological world, is slowly grappling with its power and beginning to realize that it may have some responsibilities as well.

I say Stanford, but of course I’m not referring to a nebulous entity. I mean specific people who have sounded alarm bells, who have insisted that we pay attention to the university’s role and to how it shapes the people who then shape today’s world. Not only are some students increasingly worried about the ethics of working for a large tech corporation, but scholars at the university have also begun to take a more critical eye toward tech. There’s a sweeping sense of urgency on campus that something needs to be done, even if we largely remain in the research phase and are not yet sure what that something is.

Philosopher Rob Reich, computer scientist Mehran Sahami, and political scientist Jeremy Weinstein undertook a significant overhaul of the flagship technology ethics course CS 181 (now CS 182) from 2017 to 2019, and were the subject of an extended profile in this paper, as well as coverage in The New York Times, The Nation and other outlets. The Stanford Institute for Human-Centered Artificial Intelligence, which launched with a sold-out event in Memorial Auditorium in 2019, seeks to shape a more deliberative conversation about the ends for which artificial intelligence is pursued. In September, the McCoy Center for Ethics in Society introduced a Technology Ethics minor. And, as part of Stanford’s Long Range Planning efforts, the administration is planning an Ethics, Society & Technology Hub to integrate different parts of the university in coordinated research on the topic.

The role of Stanford’s campus newspaper in this new university-wide drive for a more reflective technological scene is, as I see it, both exhortatory and expository. It’s easy to cry that technologists need to be more ethical. But learning frameworks of ethics can only take you so far, and in my opinion, not very far. The ethicist, faced with a problem of algorithmic fairness, will not have a ready answer any more than the technologist will. The ethicist cannot come in and save the day, but she can join hands with the technologist and attempt to sort out the thorns together.

“Complexity Theory” is a series on ethics and technology, a collaboration among Stanford students with a range of backgrounds but a shared worry. We aim not to cry that technologists should be more ethical — that has been done enough. Instead, we aim to shine more light on the apparent intractabilities, the technical subtleties and the real difficulties of technology ethics.

Introducing the complexities of the ethical issues that various technologies and their implementations face is an expository task with an exhortatory valence. By pointing to issues like the power of internet platforms, or algorithmic fairness, or the responsibilities of autonomous systems, and showing that they are not solved merely by a nice injection of moral clarity (whatever that may be), we hope to make self-evident the need for serious thought — on the part of technologists, social scientists and humanists alike — on how to approach and address these issues. 

Contact Adrian Liu at adliu ‘at’ stanford.edu. 

Adrian Liu '20 was Editor of Opinions in Volumes 257 and 259.
