HAI 2019 Fall Conference: Conversation, with Reid Hoffman and DJ Patil

Oct. 28, 2019, 10:45 p.m.


LinkedIn co-founder Reid Hoffman and former U.S. chief data scientist DJ Patil gathered for the afternoon plenary session and discussed the importance of ethical decision-making in data management in the era of AI technology. 

The two technologists, in conversation with Hoover Institution fellow Amy Zegart, shared visions for ethical education, concerns about data privacy and the need to integrate technology into policy-making — bridging “the suits and the hoodies,” as Hoffman joked — while acknowledging that the misuse of technology could lead to unintended harm. 

Hoffman noted that “blitzscaling” — the expectation that start-ups expand rapidly — comes with an ethical challenge that must be managed through systematic risk identification. 

“As you move your organization … one of the things you can do is add in some threads and say, ‘Let’s try to identify what the serious risks would be,’ and ‘Let’s try to identify what the things we wished to be fixed in advance versus afterwards,’” Hoffman said. 

However, technology ethics at its core is not about “having a 0% chance of bad outcomes,” but rather about balancing risks to avoid the worst result, he said. 

Patil, who worked on former President Obama’s science and technology policy from 2015 to 2017 and helped coin the term “data scientist” as a researcher in the private sector, noted that proper data use should “help people make smarter and better decisions.” 

As “stewards of data,” Patil said, people working in tech also need a broad education rooted in the humanities that prepares them for sound judgment in complex ethical questions like private data-sharing.

“If you don’t have ethics and the liberal arts as part of the undergraduate core curriculum, you’re at a disadvantage for dealing with these [ethical] aspects of the challenge,” Patil contended. 

From a policy perspective, Patil suggested that the U.S. should increase its investment in technological education to meet the demands of tomorrow’s competitive workplace.

“When we say that China’s accelerating, we’re also saying that we’re decelerating our investment [in technology],” Patil said. “Why aren’t we investing more aggressively? Why aren’t we taking the leadership role that we know can happen because of our leading institutions? We’re not investing as we need to as a country.” 

Warning that a tug of war between democratic and autocratic frameworks of technology use might be in play, Hoffman said that countries should think of AI strategy as a “non-zero-sum game.” 

“One of the perspectives that’s frequently held is that China says, ‘Look, we know that manufacturing jobs of the future are in AI-enabled factories, so we’re going to get there first,’” Hoffman said. But cooperation, rather than competition, is ultimately more productive, he said. 

While acknowledging AI technology’s geopolitical implications, both panelists agreed that economic collaboration and a global data-sharing framework would help mitigate risks in “species challenges” like climate change. 

“We have climate change, the potential for pandemics — what we need is better international frameworks, treaty mechanisms to share data across regional lines so that we can actually work on human problems,” Patil said. 

HAI will host the second day of discussions on the security, political and legal implications of AI technology. 

Contact Won Gi Jung at jwongi ‘at’ stanford.edu, Max Hampel at mhampel ‘at’ stanford.edu, Marianne Lu at mlu23 ‘at’ stanford.edu, Daniel Yang at danieljhyang ‘at’ stanford.edu and Trishiet Ray at trishiet ‘at’ stanford.edu.

Daniel Yang is a staff writer interested in studying History. Contact him at danieljy 'at' stanford.edu.
