Key Points:
- Sora App Goes Viral: OpenAI’s video app hits No. 1 on U.S. App Store.
- Copyright Shift: Moves to “opt-in” model for using protected content.
- Ethical Concerns: Realistic videos spark deepfake and safety worries.
OpenAI, led by CEO Sam Altman, has taken the tech world by storm with its latest creation, Sora. Within days of its invite-only launch, the AI-powered video app soared to the No. 1 spot on Apple’s U.S. App Store. The platform lets users generate ten-second videos from simple text prompts, instantly turning written ideas into vivid visual stories.
Sora’s “cameo” feature — enabling users to embed their own likeness into generated clips — has added a social twist, fueling viral engagement. Although the app is available only in the U.S. and Canada during its early rollout, demand has skyrocketed. Invitation codes have even surfaced for resale online, some fetching up to $45, reflecting the frenzy around the app’s exclusivity.
However, alongside its rapid success, Sora has faced early backlash. Feeds on the app have been flooded with videos featuring copyrighted and trademarked characters — from SpongeBob and Pokémon to South Park — often in inappropriate or satirical contexts. Several users have also reported videos containing violent or politically sensitive content, prompting concerns over Sora’s safety guardrails and its potential misuse for deepfakes or misinformation.
OpenAI’s Policy Pivot to Protect Creators’ Rights
In response to mounting criticism, OpenAI has announced a major shift in how Sora will handle intellectual property. The company is transitioning from its initial “opt-out” model to an “opt-in” system, meaning copyrighted material cannot be used in AI-generated videos unless rights holders give explicit permission.
Sam Altman has said the company is committed to working closely with studios, artists, and brands to ensure greater control over how their characters and creations are represented. OpenAI also plans to introduce monetisation options that would allow rights holders to earn compensation when their authorised material is used in Sora-generated content — a move aimed at balancing innovation with fair use and accountability.
The company has further promised to strengthen content moderation and expand its copyright dispute process, allowing creators to report misuse more efficiently. While some major entertainment companies have already opted out of the system, others are reportedly exploring partnerships to test Sora’s monetisation features in controlled environments.
Innovation Meets Ethical Crossroads
Despite these swift adjustments, scepticism remains high. Within hours of launch, several alarming videos — depicting violence, war scenes, and political figures in fabricated settings — circulated online, highlighting the continuing challenges of AI content moderation. Experts warn that the growing realism of such videos could amplify misinformation risks and reshape how audiences perceive truth in the digital age.
Still, supporters argue that Sora represents a new era of creative freedom. By enabling anyone to produce cinematic-quality clips without technical expertise, OpenAI has effectively democratised video storytelling. Sam Altman himself has described Sora as “a ChatGPT for creativity,” acknowledging that the tool’s rapid adoption will involve both breakthroughs and mistakes.
As OpenAI refines its policies and safeguards, Sora stands as both a technological marvel and a moral test case for AI-driven media. Its next phase will determine whether it becomes a regulated creative revolution — or another flashpoint in the debate over synthetic content and digital responsibility.