Amid the rapid advancement of generative artificial intelligence (AI), Volker Türk, the UN High Commissioner for Human Rights, urged the technology sector to utilize human rights to navigate the unknown field of AI during a panel at the Law School last Wednesday.
Türk’s keynote address opened “The Human Rights Dimensions of Generative AI,” an event sponsored by the Stanford Center for Human Rights and International Justice.
The talk featured panelists with a breadth of experience in the field, ranging from Stanford law professor Nate Persily J.D. ’98 to Raffi Krikorian, chief technology officer of the Emerson Collective. Before an audience of around 50 students and faculty, the panel was moderated by Ambassador Eileen Donahoe J.D. ’89, the U.S. State Department’s Special Envoy for Digital Freedom.
Türk’s concerns come amid a wave of development in the AI field with the potential to aggravate existing problems of misinformation, election propaganda and discrimination faster than the industry can regulate. “Companies and countries have largely failed” to uphold human rights in the development of AI, he said.
Türk positioned these problems within the framework of the 1948 Universal Declaration of Human Rights, which serves as the basis of international human rights law, cautioning against the possibility of AI infringing on the rights to work, non-discrimination, access to information and privacy.
He attributed abuses of these rights to phenomena unfolding across the world, among them the election year ahead, with over 60 countries holding elections in 2024. Türk reminded the audience that AI’s creation of cheap but “powerful propaganda” in these elections can “deeply undermine the functioning of democratic institutions.”
With access to accurate information hindered, Türk expressed his worry about the powerful “clouding the minds and hearts of people” and disrupting elections. Persily echoed the gravity of this risk.
“It’s not just that people will be believing in false stuff, it is that they will be disbelieving in true stuff,” Persily said. “Artificial content… then gives credibility to all of those who want to deny reality.”
Türk and Persily highlighted the impact that deepfakes — digitally altered media that misleadingly (and typically maliciously) edit a person into a situation — will have on elections and, more importantly, on people’s privacy and agency.
“We have already seen this phenomenon… disrupt elections, deceive people, but also spread hatred and misogyny,” Türk said.
Discrimination is another risk. The data generative AI models are “trained on… almost inevitably contain hateful and discriminatory ideas that can infect our societies,” Türk said.
Healthy regulation of AI development is critical to curbing these risks, Türk said, and can be guided by human rights principles and law.
“Human rights provide a governance model that is long-term… intergenerational and ensuring our future,” Türk said.
Conversation among sectors in regulating AI development is also important, Krikorian said. While private companies are “doing the work” in the absence of government regulation, he urged the social sector consuming these technologies to join the conversation.
“The civil society actors also need to have a voice in this conversation and… have this multi-stakeholder, cross-accountability situation that they can come to an answer correctly for all of us in society,” Krikorian said.
Alex Walden, global head of human rights at Google, added that this adherence to human rights can be seen in the application of the UN Guiding Principles on Business and Human Rights by many companies, including Google.
Peggy Hicks, the Director of Thematic Engagement and Special Procedures at the UN Human Rights Office, stressed that the discussion of AI cannot be simply a Silicon Valley or Western-world exchange.
“We cannot have a fragmented conversation about a global issue of this sort,” Hicks said. “Human rights… can bring us together and give us a grounding about how we address some of the real challenges, but also the real opportunities that we will see with generative AI.”