Sam Altman projects AGI development, AI integration at TreeHacks

Published Feb. 15, 2026, 11:10 p.m., last updated Feb. 16, 2026, 12:13 a.m.

OpenAI CEO and Stanford dropout Sam Altman returned to campus Saturday evening to speak at the opening ceremony of TreeHacks, Stanford’s annual student-run hackathon. Addressing a packed audience of over 1,000 student hackers, Altman said artificial general intelligence (AGI) is likely to arrive within the next few years.

“If you are a sophomore now, you will graduate into a world with AGI in it,” Altman said. AGI refers to a still-hypothetical form of AI capable of learning and applying knowledge at a level equal to or exceeding human intelligence.

Altman began by reflecting on his time as an undergraduate, recalling late nights in the Gates Computer Science building and hacking on dorm-room projects in Donner before leaving Stanford to join Y Combinator’s first batch of founders.

Previously, Altman had spoken at Stanford in 2024 as part of the Entrepreneurial Thought Leaders (ETL) Seminar.

TreeHacks social chair Hannah Shu ’26 said inviting Altman reflected Stanford’s central role in AI and startup culture. “OpenAI is one of the leading companies shaping this space,” Shu said. “We wanted hackers to hear directly from someone building at the frontier.”

Altman reflected on OpenAI’s origins, noting that the organization was originally intended to be a research lab, not a company building a product. “The degree to which OpenAI was not meant to be a company is very hard to overstate,” he said. “We thought we were just going to write research papers.”

He described OpenAI’s early conviction that increasing compute and model size would continue to unlock new capabilities, a bet that was “incredibly unpopular” in 2014.

“We had this one idea,” Altman said. “Scaling deep learning seems to matter. Let’s push on it as far as we can and see what happens.”

At the time, many researchers believed AGI was “100 years away” and that existing approaches would plateau. Instead, “it was miracle after miracle,” Altman said. “At some point, something about the underlying approach felt like we were discovering a new law of physics.”

Altman pointed to GPT-2, OpenAI’s second-generation language model, as the first model that made clear the scaling hypothesis was fundamentally correct. “That first night that I got to play with the model,” he said, “it was doing something I had never seen a computer do before.”

Maddie Bernheim, a representative from TreeHacks sponsor Neo, said she was eager to hear Altman address “the current state of AI and the future of the world,” adding that his remarks were “inspiring for this group of students.”

On the financial side, OpenAI is not publicly traded and does not regularly disclose detailed financials, but outside reporting has underscored how capital-intensive the race to build and run frontier AI systems has become. In September 2025, Reuters reported that the company had raised its projected cash burn through 2029 to roughly $115 billion, reflecting the massive compute costs behind today’s leading AI models.

Altman addressed these concerns about OpenAI’s financial position, arguing that current rapid cash burn simply reflects rapid growth potential. “If we can spend a billion dollars this year to make three billion next year, there’s a lot of capital in the world that wants to do that,” Altman said.

Last week, OpenAI also began testing ads in ChatGPT for free users in hopes of increasing revenue, a controversial shift in its business model. At a Harvard event in May 2024, Altman had warned that ads in ChatGPT could erode user trust, calling them a “last resort.”

On the product side, Altman urged hackers to move beyond layering AI onto existing workflows and instead design products around AI as the core primitive. “I would just pick a big area and ask: what is possible now with AI that wasn’t possible at all before?” he said.

Rather than incremental productivity gains, Altman suggested the opportunity lies in rebuilding entire categories from scratch. Within two years, he predicted, companies will begin “hiring AI coworkers,” fundamentally changing how teams operate and how quickly products ship.

For many attendees, the message was motivating. Tejasvi Tyagi ’28 said Altman’s remarks “reinforced the importance of continuing to build and experiment while the technical frontier is open.”

Altman suggested the next phase of AI adoption may not feel like a sudden economic shock, but like growth constraints steadily disappearing: research cycles compressing, startups scaling faster and small teams producing output that once required far larger organizations.

Still, he acknowledged unresolved societal questions. Altman posed a hypothetical: if AI could make better decisions than a human president or CEO, would society accept it?

“If the AI is just much better,” he said, “people will feel increasingly like we either tolerate worse or we put the AI in more influential positions.”

At the same time, Altman pushed back against more alarmist interpretations of AI. While many jobs may change or disappear, he said, he rejected the idea that automation will eliminate human purpose. “Human desire for being useful to each other, to compete, to create — that is not going anywhere,” Altman said.

Altman closed by reminding students that the current moment is rare. “The iPhone was one of these moments,” he said. “This is another one.”

“This is the best time ever so far,” Altman added. “What you can do now is so crazy exciting that you should probably try to figure out how to take advantage of this moment.”

