When OpenAI launched Sora 2, it didn’t just release another text-to-video model. It introduced a new kind of social platform, one that doesn’t just recommend media for you but creates it with you — and sometimes even as you.
The app, now rolling out in the U.S. and Canada, looks friendly enough. You can type a prompt, film a cameo and generate a high-fidelity scene with synchronized audio. Friends can invite you into their clips through a verification feature that confirms your voice and face. Every video also shows a visible artificial intelligence (AI) marker and includes a tag titled “Content Credentials” with details on how the content was made.
That combination of content generation, social feed and consent-gated likeness marks the start of what I’d call “conjured media.” At Stanford, where classes double as startup incubators and students have access to the latest AI models, the shift won’t stay underground for long.

For more than a decade, feeds have been tuned to show us what we want. TikTok’s “For You” page and YouTube’s recommendation engine defined the last era of social media: content for you.
Sora 2 begins the next one: content as you.
This matters because it collapses three roles — viewer, creator and subject — into one. The video you scroll past might star your roommate’s AI avatar dancing through the Quad or your own digital double giving a class presentation you never actually gave. The line between “posted” and “conjured” is disappearing.
Platforms know this shift is coming. TikTok’s Symphony program already offers stock AI avatars to marketers, while YouTube’s Dream Screen tools generate synthetic backgrounds and B-roll.
On the platform, Sora gives users control. You must grant permission before someone can include your cameo, and you can revoke it anytime. But off-platform, that control fades fast. That’s the paradox: the strongest consent controls live inside the platform, while the real risks begin the moment content leaves it.
Provenance metadata, such as the Content Credentials tags that mark a video as made with Sora, and invisible watermarks (like Google’s SynthID) help trace AI media, but they aren’t foolproof. Screenshots and re-uploads often strip this metadata. Researchers have already shown that watermarks can be weakened by regeneration, the re-creation of a clip through another AI model.
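For the technically curious, here is a minimal sketch of why those signals are so fragile. It uses an image analogy rather than video and assumes Python with the Pillow library; the file names are hypothetical. Naively re-saving a labeled file, much as a screenshot or re-upload does, silently drops whatever metadata it carried.

```python
# Minimal sketch, assuming Pillow is installed; "labeled.png" and
# "repost.png" are hypothetical file names used for illustration.
from PIL import Image

original = Image.open("labeled.png")   # a frame carrying provenance metadata
print("Metadata before:", sorted(original.info.keys()))

# A naive re-save (like a screenshot or re-encode) does not copy metadata over.
original.save("repost.png")

reposted = Image.open("repost.png")
print("Metadata after: ", sorted(reposted.info.keys()))  # typically empty or near-empty
```

The pixels survive the round trip; the provenance usually doesn’t, which is exactly the gap that invisible watermarks try to close and regeneration attacks try to reopen.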
So yes, Sora’s videos are labeled. But once they hit WhatsApp, Discord or Reddit, those signals can vanish. What happens when someone reposts a deepfake of you without consent?
California’s updated right-of-publicity law now covers “digital replicas” of living and deceased people, yet enforcement is tricky when clips cross platforms and borders. The Washington Post has already reported on Sora videos that reanimate public figures without permission.
At Stanford, this technology won’t feel abstract for long. It’ll reach different parts of the community, from student creators to campus elections.
For student creators: Imagine producing a dance-team promo or explainer video in hours instead of weeks. Sora’s speed democratizes filmmaking, but it also means your next competitor might be an algorithm trained to imitate your style.
For student journalists: Verifying authenticity just got harder. Newsrooms are beginning to let readers inspect media for Content Credentials and are attaching “AI-generated” labels to synthetic visuals. The Stanford Daily could be next in line to adopt these standards.
For athletes navigating Name, Image and Likeness (NIL) deals: A sponsor could soon ask to feature your “AI twin” in an ad. Stanford’s own NIL resources encourage disclosure and contract clarity. Going forward, those contracts should also spell out how “AI twins” can be used and ensure they aren’t reused without permission, either by the contracting party or anyone else.
For campus elections: Deepfake endorsements are inevitable. The best defense isn’t just technology — it’s norms. Don’t use someone’s likeness for satire or advocacy without clear, verifiable consent. On campus, the ASSU Elections Commission already investigates campaign violations, but the current Joint Bylaws do not mention AI or deepfakes. A simple addition to the Elections Handbook could fill that gap: require each campaign to log consent for anyone appearing as an “AI twin,” display a visible “AI-generated” label on synthetic content and remove non-compliant media within 24 hours of notice.
AI lowers the cost of creation, but abundance changes incentives. While YouTube does not penalize synthetic content that is properly disclosed, it has tightened its monetization language to curb inauthentic and mass-produced uploads, signaling a shift toward rewarding “authenticity.”
As feeds flood with conjured media, trust becomes the scarce resource. That’s an opportunity for student creators here. Originality and transparency will matter more than visual realism.
These forces point toward a future where authenticity isn’t optional. And as Sora-style tools let anyone produce something that looks real, verifiable trust, in who made a clip and how, may be the only thing that keeps the online world grounded.
The feed ahead will be thrilling and treacherous. We’ll see videos that look real, sound real and feature people we know — all conjured in seconds. That’s creative power worth celebrating, but only if we pair it with equally strong consent norms and technical seatbelts.
At Stanford, where the future gets prototyped first, we face a choice: wait for the deepfake media crises that will demand reaction or build our community of trust before they arrive. And that foundation won’t come from setting rules alone: it will come from the people at Stanford deciding together what ethical creation is — starting with you.
Utsav Gupta is pursuing a master’s degree in liberal arts at Stanford. A former patent litigator turned Silicon Valley entrepreneur, he now works on AI and spatial computing.