Clickbait Philosophy: What you DON’T KNOW about mid-century phenomenology can KILL YOU.

Feb. 13, 2018, 4:52 p.m.

Philosophy, in the modern age, is dead. World-renowned physicist and omnipresent smart-man Stephen Hawking himself has declared it. Having supplanted philosophers, “scientists have become the bearers of the torch of discovery in our quest for knowledge,” Hawking says. To this we push up our glasses, take a puff from our inhaler and say, “But Dr. Hawking! Science is philosophy! Even saying ‘philosophy is dead’ is still, itself, philosophy!”

At this point our wheelchaired compatriot presumably gives a computerized shrug. He’s content to concede that philosophy may live on, so long as its duties are limited to granting all power to the physics department and promptly announcing its own demise. He then, we suppose, zooms away to continue doing physics.

Is it really the fate of philosophy to be relegated to the footstool of the sciences? Many mid-century artificial intelligence researchers seemed to think so. Philosopher Hubert Dreyfus recalls that when teaching a course on Heidegger at MIT in the early sixties, he often heard something to the effect of “You philosophers have been reflecting in your armchairs for over 2000 years and you still don’t understand intelligence. We in the AI Lab have taken over and are succeeding where you philosophers have failed.” These rather rude researchers seem entirely in line with our dear Dr. Hawking. What’s more, we now know they were almost certainly wrong about nearly everything they were doing.

Early artificial intelligence researchers, you see, were incorrigible optimists. In 1957, Herbert A. Simon, co-designer (with Allen Newell) of the General Problem Solver (GPS) system, predicted that within 10 years an artificial intelligence would, among other things, be world champion at chess, prove an important new mathematical theorem and cause a revolution in psychological research. Of course, it would be nearly 40 years until Deep Blue defeated Garry Kasparov in a chess match, and arguably Simon’s other predictions have yet to come true. What caused this vast overestimation, and why did the predictions fail so spectacularly? To understand this, we must turn to Heidegger and Descartes.

Early attempts at artificial intelligence, like the GPS system of Dr. Simon, were unquestionably Cartesian, whether their designers knew it or not. These systems sought to create a thinking machine by emulating a theory of consciousness laid out by René Descartes nearly four centuries ago. This theory posits that thinking is fundamentally an act of manipulating internal symbols, which correspond to objects in the outside world as informed by sense perception.

When I see a chair, for example, I have a symbol in my mind corresponding to that chair, and it has some properties attached to it relating to the (symbolic representation of the) desk it sits near, or the (symbolic representation of the) jacket draped on its back. If I want to do any thinking about these objects, I merely manipulate their internal symbols according to their respective properties. It is worth noting, of course, that these researchers never cited Descartes as inspiration; this was simply taken as common knowledge about how human intelligence operates. In any case, this was very good news for our rude MIT grads. As it turns out, computers are exceptionally good at the manipulation of symbols! The grand prize of a general artificial intelligence seemed at last within reach.
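
To make the picture concrete, here is a minimal sketch in Python of what such a symbolic world-model might look like. (Every name and property here is invented for illustration; this is a caricature of the idea, not code from GPS itself.)

    # A toy "Cartesian" world-model: every object is an internal symbol
    # with properties, and "thinking" is just shuffling those symbols.
    # (Names invented for illustration; not taken from GPS.)
    world = {
        "chair": {"near": "desk"},
        "desk": {"in": "classroom"},
        "jacket": {"draped_on": "chair"},
    }

    def where_is(obj):
        """Chase symbolic properties to 'reason out' an object's location."""
        props = world.get(obj, {})
        if "in" in props:
            return props["in"]
        for link in ("near", "draped_on"):
            if link in props:
                return where_is(props[link])  # inherit location from a neighbor
        return "unknown"

    print(where_is("jacket"))  # -> "classroom": pure symbol manipulation

Notice that nothing here ever touches an actual chair; the “thinking” is entirely internal bookkeeping, which is exactly what made the theory so attractive to people with computers.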

Unfortunately, these researchers had not paid enough attention in their Heidegger classes, for Heidegger (in conjunction with other so-called phenomenologists) had already offered a compelling and authoritative refutation of Descartes’ theory of consciousness in the early twentieth century. The core of this refutation comes from the observation that we as humans are “always already” thrown into a world with which we have some interactive intuition. Descartes isn’t wrong, per se, that the world can be viewed as objects with properties (we humans do it all the time); rather, this is a derivative way of thinking, one that can only be born out of a more primitive state of being always subconsciously engaged with the world.

This is to say that we first encounter other entities in the world as significant relative to some objective for which we’re acting, and only when there is some disruption do we revert to a more Cartesian mode. For a simple example, consider a door. When walking out of a classroom, our objective is to leave the room, and we encounter the door purely as a means for leaving the room and act accordingly. Only if the door were to jam would we stop and consider it as a pure object with physical properties to be logically reasoned about (sketched in toy form below). In effect, it is easy to start from this subconscious world of meaning and move to Cartesian symbol manipulation, as in doing a mathematical calculation, but proposing to go the opposite way, as Dr. Simon et al. suggested, is folly. A more detailed explanation of Heidegger’s critique of Descartes can be found online at the Stanford Encyclopedia of Philosophy.
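
For the incorrigibly literal-minded, the shift from absorbed coping to Cartesian breakdown might be caricatured in code like so (every name here is invented; a joke with a point, not a cognitive model):

    # Absorbed coping vs. breakdown: the door only shows up as an
    # "object with properties" once it stops working as a transparent means.
    class DoorJammed(Exception):
        pass

    class Door:
        def __init__(self, jammed=False):
            self.jammed = jammed
            self.mass_kg = 20.0        # properties nobody notices...
            self.hinge_friction = 0.9  # ...until the door jams

        def push(self):
            if self.jammed:
                raise DoorJammed

    def leave_room(door):
        try:
            door.push()  # skillful, unreflective use: door as pure means
            print("walked out without a thought")
        except DoorJammed:
            # Breakdown: only now do we drop into symbol-and-property mode.
            print(f"hmm: mass={door.mass_kg} kg, friction={door.hinge_friction}")

    leave_room(Door())
    leave_room(Door(jammed=True))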

Our much-maligned hero, Dr. Dreyfus, knew this all too well. He published “Alchemy and Artificial Intelligence” (1965) and “What Computers Can’t Do” (1972), both of which attacked, on primarily Heideggerian grounds, the prevailing attitudes in artificial intelligence research. Though these views were initially greeted with derision and dismissal, by the 21st century Dreyfus had been largely vindicated. Dr. Simon’s predictions failed to come true, and Cartesian systems like his GPS were abandoned. AI research did not cease, of course, but rather regrouped and began exploring different avenues to intelligence, out of which emerged modern research fields like machine learning.

This ultimately illustrates at least one respect in which philosophy trumps science: answering the question, what is a human being? The question is not trivial, nor is it navel-gazing, and it surely isn’t on the syllabus for CS 229. A philosophical account of the human experience has concrete and far-reaching consequences for facets of life no one can predict. It will always be necessary for scientists to understand the philosophical roots of their work, lest they repeat the naive mistakes of those early AI pioneers. It’s true that philosophers have spent the past 2000 years reflecting in their armchairs. Perhaps they know a thing or two about reflection.


Contact Sam Rogers at srogers2 ‘at’ stanford.edu.


