Daniela Amodei, president and co-founder of Anthropic, told a packed room at Stanford Graduate School of Business (GSB) that building a successful AI company and doing good for the world are not mutually exclusive, and that the current generation of founders is uniquely positioned to prove it.
“This concept that being in business doesn’t have to be in tension with doing good, I think that is a very new idea and I think it is really special,” Amodei said during the latest installment of GSB’s “View From The Top” speaker series, which brings business leaders to campus for conversations with students.
Anthropic is a frontier AI company primarily known for Claude, a range of large language models (LLMs). Amodei, who co-founded Anthropic in 2021 alongside her brother Dario Amodei and five other former OpenAI researchers, spoke about the company’s origins, the realities of building an AI safety-focused enterprise and what she sees as the most pressing questions facing the technology industry.
Before Anthropic, Amodei worked at Stripe and then OpenAI, where she first immersed herself in AI research.
The decision to leave OpenAI in 2021 was not easy, Amodei said. When she ultimately did, the seven co-founders of Anthropic shared a vision of building an organization where safety and responsibility were foundational rather than ancillary.
“It felt like it was easier to start a new company and structure it in a new format,” she said, noting that Anthropic’s incorporation as a public benefit corporation was a deliberate choice that took time to formulate.
Amodei defined AI safety as “a form of radical responsibility for the technology that we’re developing,” pointing to Anthropic’s proactive efforts to anticipate the impact and potential harms of its products.
For her, the commitment to safety extends from large-scale risks, such as the potential for AI to assist in the development of weapons, to more granular concerns around child safety, misinformation and user wellness. She referenced Anthropic’s recent decision to delay the release of its most capable model over cybersecurity concerns.
When asked by the event moderator Gintare Zukauskaite M.B.A. ’26 whether safety and revenue generation are in tension, Amodei pushed back on the premise. “Most businesses are not looking to have models that are unsafe,” she said. “It’s actually really good for business to be safe.”
She acknowledged, however, that a new kind of tension has emerged between models’ growing capabilities and the pace of deployment. “It’s uncomfortable to say to your customer, ‘We understand that desire, we want to get this technology into use as quickly as possible, but it is irresponsible of us to release it until we are confident that all of the patching that needs to be done is done.'”
Amodei offered a nuanced take on AI’s impact on employment, pushing back against both utopian and dystopian narratives. She cited Anthropic’s economic index, which tracks how people use AI tools, and said the data currently shows AI functioning primarily as a complement to human work rather than a replacement.
When discussing the question of AI adoption more broadly, Amodei cautioned against assuming the technology has already reached saturation. She noted that current usage skews toward college-educated men in higher-income countries, and that people in the Global South tend to be far more optimistic about AI’s potential.
“Humans have an inherent desire to learn, to be curious, to want to expand the aperture of things that they know about,” she said. “AI in some ways enables that, but if used incorrectly … it has a cognitive effect.”
She pointed to Anthropic’s work with universities on a “learning mode” for Claude as one effort to address this.
“It is exciting to hear how much cloud compute and storage will be needed for the AI future,” Jared Morrison M.S. ’27 said, reflecting on the event.
When asked about how to mitigate concerns of AI accessing personal data, Amodei said the primary burden should fall on companies, not users. “It is the company’s job to use and protect user data with care,” she said, pointing to Anthropic’s decision not to run ads in Claude as partly motivated by the deeply personal nature of AI conversations. “People have conversations with AI tools that are much more personal than even what you would put on an Instagram account.”
On the specific question of AI medical use, one of Claude’s most common casual applications according to Amodei, she urged caution. “In my own experience, Claude has been right more often than my doctors about complex medical cases. And I would never do something without checking it with an independent licensed medical professional,” she said. “But the models make things up sometimes. They get confused. They don’t know you.”
She closed with a message for the students in the room interested in an entrepreneurial career.
“There will be times where you’re like, ‘This is not the most fun time,'” she said. “Being able to relate it back to why you decided to do this in the first place and why it matters to you is going to be really important.”
When asked what she’d study if she were back in college, Amodei said she would major in English literature again because of her love of reading.
“In a room full of people anxiously recalculating their career bets around AI, that kind of conviction about following your passion felt very real and grounding,” Varun Vobilisetty M.B.A. ’26 said.