Sam Altman, OpenAI’s co-founder and CEO, faced a crowd of students in NVIDIA Auditorium who met him with questions and an enthusiastic rendition of “Happy Birthday.”
Over one thousand people lined up outside the Jen-Hsun Huang Engineering Center for a chance to attend Altman’s speaker event, hosted by the Stanford Technology Ventures Program (STVP), Stanford Engineering’s entrepreneurship center, as part of the Entrepreneurial Thought Leaders (ETL) Seminar on Wednesday — two days after the 39-year-old’s birthday.
For 22 years, ETL has been both a public speaker series and the MS&E 472 course. Last spring, MS&E 472 was the fifth-most enrolled course, with over 400 undergraduate and graduate students. But Wednesday’s crowded event was unique, even for the popular class.
Altman, who dropped out of Stanford in 2005, is most widely known for leading OpenAI, the AI research company that developed ChatGPT and DALL-E. OpenAI was founded as a nonprofit research lab in 2015 with the mission to “ensure that artificial general intelligence benefits all of humanity.”
Over the years, OpenAI’s mission hasn’t changed, but its structure has been, and will continue to be, adapted, Altman told the audience. OpenAI’s current structure includes a for-profit subsidiary that can issue equity to raise capital but remains bound by the nonprofit’s mission.
“I think making money is a good thing. I think capitalism is a good thing,” Altman said. “My co-founders on the board have had financial interests and I’ve never once seen them not take the gravity of the mission seriously. But we’ve put a structure in place that we think is a way to get incentives aligned.”
The for-profit subsidiary arose from a need for capital beyond donations, due to the high cost of computational power and talent necessary for OpenAI’s research. “Whether we burn 500 million a year, or 5 billion or 50 billion a year, I genuinely don’t care — as long as we stay on the trajectory where eventually we create way more value for society than that and as long as we can figure out a way to pay the bills,” Altman said.
The core of OpenAI’s research lies in the development of Artificial General Intelligence (AGI), which the company’s website defines as “a highly autonomous system that outperforms humans at most economically valuable work.” ChatGPT is one example of OpenAI’s progress toward AGI.
“ChatGPT is mildly embarrassing at best,” Altman said. “GPT-4 is the dumbest model any of you will ever have to use again, by a lot. But, it’s important to ship early and often, and we believe in iterative deployment.”
Altman believes in preparing society for technological advancement, which relies on responsible and iterative deployment, even with imperfect models. “If we go build AGI in the basement, and then the world is blissfully walking blindfolded along, I don’t think that makes us very good neighbors,” Altman said. “Let society co-evolve [with] the technology. Let society tell us what it collectively, and people individually, want from the technology.”
Although the prospect of AGI can be scary, Altman believes that it will become a scaffolding for society to achieve greater heights. “I’m actually not worried at all about the stifling of human innovation,” Altman said. “People will just surprise us on the upside with better tools. I think all of history suggests when you give people more leverage, they do more amazing things.”
In November 2023, OpenAI’s board of directors removed Altman as CEO following a review that determined he “was not consistently candid in his communications with the board.” Afterwards, over 700 OpenAI employees signed an open letter threatening to leave OpenAI and join Microsoft unless Altman was reinstated as CEO.
In response to a question about the overwhelming support from OpenAI employees, Altman said that it was motivated by a “deep sense of purpose and loyalty to the mission,” which he believes is “the strongest force for success, at least that I’ve seen, among startups.”
After the ETL event, Altman joined his former professor Mehran Sahami ’92 Ph.D. ’99, chair of the computer science department, at the Gates Computer Science building for an RSVP-only fireside chat. The talk began with jokes about Altman dropping out of Stanford, followed by a series of questions from Sahami on topics including AI development, ethics and safety, and the prospect of AGI.
Altman believes that AGI could bring society high-quality education, cures for disease, entertainment and space exploration.
“We owe ourselves and the people of the future a better world,” Altman said. He believes AGI could replace lawyers and doctors, broadening access to legal and medical services and helping “the poorest half of the world more than the richest half of the world.”
Altman believes that although new technology can be shocking initially, society adapts quickly.
GPT-4, which came out a year ago, was met with “two weeks of freaking out” and people believed it was “this crazy thing and the world had changed forever,” Altman said. “Now people are like, ‘Oh, it’s horrible. Where is GPT-5?’”