Eric Horvitz Ph.D. ’91 M.D. ’94, Chief Scientific Officer at Microsoft, anticipated that the current era of artificial intelligence (AI) will amount to more than a technological shift, describing fundamental changes to the “trajectory of human existence” in a Tuesday talk.
As part of Stanford Graduate School of Business’s (GSB) applied AI initiative, Horvitz addressed the intersection of AI with societal norms, healthcare and the concept of “human flourishing” in conversation with GSB dean Sarah Soule.
Horvitz, who has been at Microsoft for over 30 years, argued that the full societal effects of AI will not be understood for decades. He drew parallels to the introduction of steam power and electricity, noting that it often takes decades to reorganize society around new capabilities.
While the technology is moving fast, Horvitz warned that its integration into business and society faces an “impedance mismatch.”
“I think looking back, we’ll say, ‘Wow, that’s where it all started,’” Horvitz said. “But we’ll still be in a time, even 20 years from now, of pretty fast-paced transformation.”
During the event, Horvitz and Soule focused on the erosion of truth amid the rise of deepfakes. Horvitz has long warned about the difficulty of discerning fact from fiction, working to develop technologies that place a cryptographic “wax seal” on content to verify its origin.
Still, he noted that technical solutions are only half the battle: even when a video is verified, users must still decide what to believe.
“We have to also red team it and attack it to make sure that the solution itself doesn’t become a problem,” Horvitz said, referencing a recent Microsoft study exploring how actors might weaponize verification tools to cast doubt on legitimate footage.
Despite the challenges, Horvitz was optimistic about AI’s application in the biosciences. He predicted that within the current generation’s lifetime, humanity would see “AI breakthroughs” leading to cures or chronic management for neurodegenerative conditions like Alzheimer’s and amyotrophic lateral sclerosis (ALS).
Soule steered the conversation toward the culture of the GSB, asking about the role of human mentorship in an automated world. Horvitz offered a reassuring perspective for the students and faculty in the room.
“I will always value mentoring others,” Horvitz said. “AI will not take that away.” He argued that as automation handles routine cognitive tasks, the “care economy” and the mastery of human craft will become even more valuable.
During a Q&A, Serena M. Lee M.S. ’25 pressed Horvitz on the future of AI governance, asking what the “most important open questions” were regarding safety assessments.
Horvitz suggested that safety is moving beyond the control of the creators of AI models alone. “At some point, the companies producing these models become like electric power companies,” he said, implying that they cannot guarantee safety in every downstream application.
Students also raised questions about AI advancements in healthcare, in response to which Horvitz highlighted the need for medical AI models that generalize across settings. He noted that a model trained at one hospital often fails when deployed at another.
“You need to not just look at the general performance on the set of metrics for clinical medicine, but also how well it is performing on your own datasets, on your own demographic,” he said.
Horvitz argued that in real-world settings, society will not accept “safety issues at the edges,” regardless of average performance improvements.
For attendees, the event offered a counter-narrative to the doomsday scenarios often associated with advanced AI.
Simran Mohnani M.B.A. ’26 enjoyed Horvitz’s discussion of new proteins for cancer research and biomedical models. “That’s what gives me hope,” she said.
Mohnani also found the event a welcome change of pace from how AI discussions typically trend.
“In a world where we’re hearing quite a lot of… this alarmist nature within Silicon Valley, it was good to see that he pivoted back to more humane applications,” she said.