Opinion | The dangers of techno-optimism

Opinion by Elizabeth Zhu
Nov. 29, 2022, 10:12 p.m.

“Move fast and break things.” Words uttered not by a burglar, but rather by Mark Zuckerberg.

It is telling that Facebook’s since-abandoned motto is still frequently quoted by billionaire CEOs and college-dropout founders, and is widely touted as the driving motivation behind Silicon Valley.

Perhaps this is indicative of a rising sentiment that technology can solve almost everything — often at breakneck speed and by disrupting existing approaches. Stroll through Stanford and you will find that this sentiment is particularly strong. Every week, it seems, a new piece of software is being invented to make our lives easier. Many of these applications are spearheaded by disgruntled college students who have quickly picked up the art of mobile development.

From apps to help schedule your academic workload to machines that suck carbon directly out of the atmosphere, there is no shortage of technological solutions to the world’s seemingly endless supply of problems. In fact, much of Stanford’s global prestige can be attributed to its culture of rapid technological innovation — where the possibilities for ‘doing good’ are infinitely vast.

Beyond pursuing start-ups, many students join Big Tech companies after they graduate. Some do so because they believe they can make the most change by piggybacking on the influence of companies such as Google, Apple, and Uber. Others are in it for the six-figure starting salaries. Even those who are skeptical of the ethical implications of working in Big Tech might believe they can force change from within, inspired by the stories of outspoken whistleblowers such as Frances Haugen and Timnit Gebru.

Yet almost all of these individuals buy into the belief that technology is largely a force for good rather than evil. Even companies like Meta, which have been riddled with scandals such as data leaks and the spread of Russian disinformation, are generally perceived as lucrative workplaces with a grand vision of human connectivity.

There is a word for this sentiment: ‘techno-optimism.’ Specifically, this is defined as the belief that technology plays a vital role in solving the most pressing threats to humankind. In moderation, this is a sound belief. After all, we live in a world where almost nothing is untouched by brilliant feats of engineering — the Industrial Revolution has produced undeniable gains in efficiency and worldwide wellbeing. Yet, when taken to an extreme, this sentiment can prove destructive in ways that fundamentally contradict technology’s initial aims.

For one, an over-reliance on the belief that technology can solve existential problems can breed complacency that inhibits swift action. Take climate change, for example. When more people rely on the ‘all-in-one’ power of carbon-sucking technologies or cloud-brightening initiatives, systemic causes of climate change such as fossil fuel extraction and pollution are overlooked.

Perhaps most illustrative of this point, billionaires such as Elon Musk and Jeff Bezos have adopted new ‘pet projects’ of establishing colonies on Mars in order to escape the existential risks of living on Earth. Notably, this fails to directly address the immediate threat climate change poses to real people on Earth facing drought and other climate-related natural disasters, instead staking the problem on a miracle technological solution. Is this akin to giving up? Could those billions have been spent elsewhere — to alleviate direct suffering?

Unfortunately, rather than addressing the problem of climate change at its root, which would require collective action to drastically reduce the sources of pollution, many of us are waiting to be heroically saved by the Elon Musks of the world — the so-called crazy geniuses and billionaire inventors. There is even a movement called “ecomodernism” that centers on this premise: that technology is the answer to our climate crisis and that technological growth can be decoupled from environmental harm.

Secondly, extreme optimism about technological progress can blind us to its potential perils. We are already rushing headfirst into developing sentient AI, mass facial recognition systems, and autonomous military drones. Many of these technologies have become terrifyingly good, terrifyingly fast. After all, AI development is fundamentally a race to the top — companies’ incentives are not to develop the most ethical inventions, but the most lucrative and ground-breaking ones.

Moreover, technological advancements tend to directly harm vulnerable groups. The Industrial Revolution, for example, forced millions of workers, including child laborers, into abject working conditions while harnessing their labor to rapidly advance new technologies. Predictive policing and AI-powered facial recognition systems disproportionately target people of color. The future holds even more terrifying possibilities.

Artificial supplements to human biology and gene-editing may create a new breed of transhumans, with such ‘upgrades’ only accessible to the rich. Sadly, technology often entrenches inequalities, regardless of the intentions of the creator.

Part of what might be to blame for the mass migration to Big Tech is the illusion that if you are not in the technology space, you are missing out on the latest wave of ground-breaking research. Technologists and serial entrepreneurs are touted as ‘change-makers’ and innovators. This mentality also arises from the grip that capitalism has on our society: it has warped our metrics of value from what benefits all of humanity to what is most profitable for a small group of elites.

It is impossible to call for slower change without being labeled as ‘anti-innovation,’ and by extension, opposed to the technological utopia that is widely touted by companies like Meta. Capitalism, in short, relies on technological exploitation. The bigger, the better, so it goes. 

To clarify, I am not suggesting that we should put the brakes on technological progress. Rather, I suggest that we ought to think twice. Some healthy skepticism is welcome. Specifically, we should re-evaluate our metrics for what constitutes a ‘good’ innovation, shifting away from how ‘groundbreaking’ it is to considering how it will impact humanity as a collective. It’s far better to proceed with caution than to leave regulators to deal with a potentially catastrophic invention.

Rather than give technology the power to determine the course of humanity, humanity must instead determine the course of technology. To do so, all of us must play our part. For one, governments and policy-makers should not shy away from regulating technology, even if it slows down the pace of technological progress.

As an example, a “robot tax” could slow the rate at which workers are displaced by automation while using the revenue to bolster unemployment benefits and retrain workers for jobs in non-automated industries. Although this may cause automation to advance at a slower rate, workers have much to gain.

More broadly, startups and corporations should actively seek out interdisciplinary input to inform their inventions. This could mean hiring more ethicists, public policy experts, and environmentalists, and giving them real power to advise the direction of the company. The problem with the status quo is that many corporations appoint ‘ethicists’ merely for PR, denying them the ability to meaningfully shape the path of the corporation.

To ensure accountability, third-party organizations could be responsible for drafting reports detailing a corporation’s estimated ethical impact on different demographic groups and on various aspects of society. In addition, NGOs such as Greenpeace or Oxfam International could release rankings of the most ‘sustainable’ or ‘net-ethical’ corporations based on the metrics outlined in each report.

Another solution is to restrict funding to ethical ventures. For example, Stanford’s Ethics and Society Review (ESR) partners with the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to require that research projects address potential ethical and societal risks in order to access funding, and it allows these projects to iteratively mitigate those risks under the guidance of an interdisciplinary faculty panel.

Similar ‘ethical review’ initiatives should be implemented as standard procedure for venture capitalists and sources of corporate funding. By ensuring that money is not blindly funneled into unethical inventions, companies are incentivized to be more ethically accountable and fewer harmful technologies can come to life.

As Stanford students, we are in a unique position to either contribute to these technological risks or, instead, harness them to inform what we create. There is admittedly a large, almost suffocating culture of inventing for the sake of inventing — from startup culture to the pressure to work for big corporations with big agendas.

Rather than invent something because you can or because it can sell to a consumer base, invent in order to help advance humanity as a collective. One way to do this is to have critical conversations with relevant community stakeholders about their concrete needs and determine whether your invention directly addresses those needs. 

Technology alone cannot solve the problems it is responsible for creating. Algorithms are merely a supplement to human flourishing, not a prerequisite. Without meaningful accountability on all fronts, technology is “but improved means to an unimproved end,” as Thoreau remarks. Progress is a deceptive word — it blurs the boundary between pure technological advancement and creating a better society. History has made it abundantly clear: not all progress is good progress.
