Research Roundup: ‘mini brains’ and mindful design

Nov. 12, 2025, 11:50 p.m.

The Science & Technology desk gathers a weekly digest of impactful and interesting research publications and developments at Stanford. Read the latest in this week’s Research Roundup.

Modeling human brain development with assembloids

Stanford neuroscientist Sergiu Pasca, a professor of psychiatry and behavioral sciences and director of the Stanford Brain Organogenesis Program, is pioneering the use of lab-grown “mini brains,” also known as assembloids, to explore how the human brain develops and functions. 

“The human brain largely builds itself,” Pasca said, describing how various brain organoids will self-organize and form connections when placed together in a dish. When organoids fuse in this way, they become assembloids, which researchers can use to model specific brain regions. 

By creating assembloids, Pasca and his lab team can study how different brain regions form connections, a process that was once impossible to observe directly. 

Pasca emphasizes that these models are tools, not complete brains. “An assembloid is not a brain — it’s just a working model of some of the circuitry resident in a living, functioning human brain,” he said.

Researchers at the Pasca Lab use assembloids to study neurodevelopmental disorders such as Timothy syndrome, a rare and life-threatening genetic condition that causes neurological symptoms and autism. These studies are helping researchers understand how genetic mutations alter brain development.

While questions remain regarding the ethics of creating complex human neural systems, including how far scientists can go in modeling consciousness, Pasca’s research marks a major step toward understanding and potentially treating the human brain’s most elusive disorders.

Sweet spot for online loading

Nobody likes to wait, especially online. But new research from the Graduate School of Business (GSB) suggests that the right kind of animation can make digital delays feel shorter and keep users from clicking away.

In a series of studies, Yu Ding, assistant professor of marketing at the GSB, and Ellie Kyung, a researcher at Babson College, found that moderate-speed animations, as opposed to static images or rapid animations, can significantly reduce the perceived wait time of a page load and improve user engagement.

The findings were inspired by a moment of irritation. When Ding noticed how a lingering logo on his TV made him want to change the channel, he became curious about how brands could keep users engaged during unavoidable pauses online.

Across experiments involving more than 1,400 participants, moderate-speed animations consistently led to better outcomes. In one test, Facebook users were more likely to wait through a 20-second page load if shown a moderately moving image. In another, participants who saw moderate animations during a short wait were more likely to complete a task and reported higher satisfaction with products they viewed afterward.

The results point to a simple, low-cost way for companies to improve digital experiences.

“If something is moving too slowly, people just don’t pay attention… The middle point is the sweet spot,” Ding said. 

AI rivals humans in political persuasion

Artificial intelligence (AI) may be just as convincing as humans when it comes to politics, according to new research from the GSB.

Robb Willer, professor of sociology and organizational behavior at the GSB (by courtesy), and Zakary Tormala, the Laurence W. Lane professor of behavioral science and marketing at the GSB, led complementary studies examining how people respond to political messages written by AI compared to those written by humans. 

Willer’s team found that AI-generated messages were equally persuasive across a range of policy issues, including gun control, carbon taxes and voter registration. Participants’ opinions shifted similarly regardless of whether messages came from people or machines.

Tormala’s research revealed a striking twist: when participants were told that a message came from AI, they were more open to opposing viewpoints. People perceived AI as less biased and more informed than human messengers, making them more receptive to counterarguments and more likely to share or seek out information that challenged their beliefs.

The findings suggest that AI could be used to reduce political polarization by presenting information in ways people trust more. 

“We were thinking AI might sometimes do better than human sources in this context because people believe AI has no persuasive intent, is not inherently biased and has access to a wealth of information,” Tormala told the Stanford Report. However, the same psychological effect could also enable the spread of misinformation, Tormala warned.
