We will die eventually, and that gives us some special reflective abilities. Unlike a corporation or, say, humankind, neither of which has an expiration date to speak of, we know we can expect to stop living after at most about a century, and thus we can think about our lives as a whole and imagine a coherent arc with a beginning and an end.
There are, broadly speaking, two kinds of ethical questions, and they bear different relationships to the arcs of our lives. The first is “What should I do in this situation?” and the second is “How should I live?”
The first question has a narrower, more determinate timeframe: We have encountered a given situation and want to determine what will create the best outcomes in it. Those outcomes may spill over into other situations and influence other decisions, but that’s not the focus. We care about the situation at hand and the scope of its consequences, not so much about how our decision will fit into the rest of our life.
The second question has a broader, hazier, more indeterminate timeframe: It starts from our life as a whole and considers how specific choices we make will fit into it. It’s not directly concerned with what we should do in specific situations but cares instead that the choices we make and the situations we find ourselves in will collectively and collaboratively make up a life that we deem worth living.
The distinction is not sharp: “As a Facebook engineer, should I implement the Facebook Research VPN?” is a question of the first kind, and “Should I be a Facebook engineer?” is of the second kind. A question like “Should I take a principled stand against the VPN and risk losing my job?” is somewhere in the middle: it arises from a specific situation but has implications for one’s life as a whole.
But even this last question, which is not clearly of one kind or the other, involves sub-considerations, each of which fits more neatly into one category or the other.
Of the determinate kind: What advantages for Facebook users might come out of the development of such a VPN? What are the drawbacks, and how might we weight the two against each other?
Of the determinate kind: What are the implications for the development of the VPN if I do or do not participate? Will someone else just implement it in my place, or would my refusal to participate lead management to reconsider?
Of the indeterminate kind: How would saving my job by implementing the VPN fit with my values? If it does not fit, what would it mean for me to be an engineer who does work against his values?
Of the indeterminate kind: Conversely, what would it mean for me to be a person who sacrifices his job security for his values, which are “valuable” in one sense but not in the sense of “paying the bills”? And so we see that the distinction between the first and the second kind, though not sharply drawn, is certainly present and discernible.
At our university, the distinction between the two kinds of ethical questions — the one with the determinate timeframe and the one with the indeterminate timeframe — seems roughly to track a disciplinary distinction. Namely, the indeterminate kind of question tends to be asked in the humanities, and the determinate one in the sciences and technology. Let us look at the concrete manifestation of this: the Ethical Reasoning (WAY-ER) requirement.
I’ll speak first from experience. I’ve taken ER courses in both the humanities and tech, and will sketch the contours of two. The first, SLE, studies philosophy, religion, history and literature, focusing largely on questions of the indeterminate kind: Socrates’ question of how we should live, Nietzsche’s notions of fashioning oneself, Buddhism’s idea of no self. It addresses students chiefly as human beings.
The second, CS181: Computers, Ethics, and Public Policy, focuses on questions of the determinate kind: How should one create an algorithm that is fair? How should privacy rights be weighed against other interests? How do we create ethical AI? It addresses students chiefly as computer scientists, policy experts and citizens influenced by tech.
Let’s go a bit further than mere anecdote. I went through ExploreCourses yesterday and grouped the 66 classes offered this year that fulfill the Ethical Reasoning (WAY-ER) requirement into two categories based on their course descriptions: those considering questions of the determinate kind (“What should I do in this situation?”) and those considering questions of the indeterminate kind (“How should I live?”).
Out of the classes that dealt with questions of how individuals, not social institutions, might function (that is, ethics rather than politics), 20 dealt with questions of the first kind, 15 with questions of the second kind, and four with both. Of the 20 that dealt with questions of the determinate kind, only one was in the humanities. Of the 15 that dealt with questions of the indeterminate kind, all but two were in the humanities.
The distinction makes some sense, I think. The humanities, we must admit, are less concerned with practicalities and often more concerned with the broader, grander, more amorphous questions like “How should I live?” or “What do certain choices mean in the life of a human?” Science and technology, in contrast, need ethics to answer determinate questions like “Should I continue researching gene editing?” or “How do I make a fair social media app?”
But it doesn’t make that much sense. For one, the humanities major still needs to learn how to make specific ethical choices involving questions of the determinate kind, especially if she joins the workforce rather than staying in academia — which she most likely will. The technologist, for her part, needs to be able to step back and look at the bigger choices she faces using questions of the indeterminate kind, because at some point she will feel apprehension about the coherence and meaning of her life, and questions of the first kind, situationally constrained as they are, will fall short.
Seen in this light, the disciplinary split does not work in our favor. Especially if we search for classes that can fulfill both a major requirement and the ER requirement, we are liable never to take a class in ethical reasoning outside the bounds of our discipline.
Perhaps I’m being overly idealistic, but it seems obvious to me that we should all want training in ethical thought that gives us the means to think about questions of both the determinate kind and the indeterminate kind. We should want to be able to think about our ethical choices both in a narrower timeframe — in specific situations and specific decisions (and perhaps in specific disciplines) — and in a broader timeframe, transcending the specific situations that make up our lives and thinking about how the things we do fit together to make up our lives as a whole.
And perhaps the first step to achieving this ability is to take not just the one Ethical Reasoning class required to graduate, but two — or more — WAY-ER classes: at least one in the humanities, and at least one not in the humanities. One determinate, specific, and bounded in scope, and the other indeterminate, hazy, and all-encompassing.
See my methodology and data here.
Contact Adrian Liu at adliu ‘at’ stanford.edu