Sora Feed Philosophy: How Ranking, Controls and Safety Work
OpenAI
Feb 4, 2026


The Sora feed is designed to inspire creation, not maximise scrolling. It ranks for creativity and connections, lets users steer the feed, and gives parents controls for teen accounts. Personalisation can use Sora activity, optional ChatGPT history and engagement signals. Safety layers block harmful content at generation, filter feeds and add human review.
OpenAI’s Sora combines powerful video generation with a social feed. Rather than chasing watch‑time, the company says the feed is tuned to spark creativity and help people connect. Here’s what that means in practice, and what organisations should do to deploy or engage with Sora responsibly.
The four principles, translated for decision‑makers
Optimise for creativity, not passive scroll.
Ranking favours original creation, participation and remixing over endless consumption. For brands and educators, this implies campaigns and assignments should invite making (e.g., remix challenges) rather than just views.
Put users in control.
The feed ships with steerable ranking: people can tell the algorithm what they want to see. Parents can switch off personalisation and control continuous scroll for teen accounts. Build your onboarding and comms to highlight these controls.
Prioritise connection.
Connected content (between people who know each other or who opt into interactions such as Cameos) is favoured over global, unconnected posts. Design interactive experiences: co-creation prompts, safe Cameo flows and reply-with-a-remix mechanics.
Balance safety and freedom.
Sora combines proactive blocks at generation time with feed filtering and human review. Your internal policy should mirror this: prevent risky generations up front, filter what's eligible for distribution, and keep a clear report-and-takedown path.
How personalisation works
Sora may consider:
Your activity on Sora: posts, follows, likes/comments, remixes and approximate location (e.g., city) based on IP.
Optional ChatGPT history: can be switched off in Sora’s data controls.
Engagement signals: views, likes, comments, “see less of this”, remixes.
Author signals: creator’s follower count and past engagement.
Safety signals: eligibility vs. policy and distribution rules.
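The signals above can be read as inputs to a ranking score. The sketch below is purely illustrative: the weights, field names and scoring formula are invented here to show the stated priorities (safety gates everything; creation and connection outweigh raw engagement), not OpenAI's actual model.

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    """Hypothetical per-post signals mirroring the categories above."""
    engagement: float      # views, likes, comments, remixes (normalised 0-1)
    author_quality: float  # follower count and past engagement (normalised 0-1)
    connection: float      # 1.0 if from a connected account, else 0.0
    is_creation: bool      # original post or remix, vs. passive consumption
    passes_safety: bool    # eligible under policy and distribution rules

def rank_score(s: PostSignals, see_less_penalty: float = 0.0) -> float:
    """Toy ranking: safety is a hard gate; connection and creation
    carry more weight than engagement, echoing 'creativity over watch-time'.
    A 'see less of this' signal subtracts from the score."""
    if not s.passes_safety:
        return 0.0  # filtered from the feed entirely
    score = (0.5 * s.connection
             + 0.3 * (1.0 if s.is_creation else 0.0)
             + 0.2 * (0.6 * s.engagement + 0.4 * s.author_quality))
    return max(0.0, score - see_less_penalty)
```

Note the ordering this produces: a modestly engaging post from a connected creator can outrank a highly engaging but unconnected one, which is the behaviour the principles describe.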
What this means for organisations
Expect feeds to reflect participation more than passive viewing.
Teach users how to adjust data controls (especially disabling ChatGPT history if desired).
Use “see less” guidance in your digital citizenship materials so teens can tune their experience.
Safety model (proactive + reactive)
Creation‑time guardrails: Unsafe prompts or outputs are blocked within Sora before a post exists.
Distribution filtering: Content that’s age‑inappropriate or policy‑violating is filtered from feeds, galleries and side‑character surfaces; teen accounts get stricter defaults.
Human review & reporting: Users can report content; moderation teams proactively check activity to catch edge cases.
Balance principle: Too many blocks dampen creativity; too few undermine trust. The approach aims for both safety and room to create.
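The proactive-plus-reactive model can be pictured as a two-gate pipeline: creation-time guardrails run first, then distribution filtering, with human review as a backstop. This is a hypothetical sketch of that flow; the function, flags and age-rating scheme are invented for illustration, not Sora's internal API.

```python
from enum import Enum, auto

class Verdict(Enum):
    """Hypothetical outcomes mirroring the layers above."""
    BLOCKED_AT_GENERATION = auto()  # unsafe prompt/output: no post exists
    FILTERED_FROM_FEED = auto()     # post exists but is not distributed
    ELIGIBLE = auto()               # may appear in feeds (still reportable)

def moderate(prompt_unsafe: bool, output_unsafe: bool,
             policy_violating: bool, content_age_rating: int,
             viewer_age_limit: int) -> Verdict:
    """Creation-time guardrails first, then distribution filtering.
    Teen accounts pass a lower viewer_age_limit, which makes the
    second gate stricter by default."""
    if prompt_unsafe or output_unsafe:
        return Verdict.BLOCKED_AT_GENERATION
    if policy_violating or content_age_rating > viewer_age_limit:
        return Verdict.FILTERED_FROM_FEED
    return Verdict.ELIGIBLE  # human review and user reports still apply
```

An internal policy built on the same shape (prevent, filter, then review) is straightforward to audit, because each verdict maps to exactly one layer.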
Examples of content not eligible for the feed
Graphic sexual content, graphic violence, extremist propaganda, hate, self‑harm and disordered‑eating content, unhealthy dieting/exercise behaviours, appearance‑based shaming, bullying, dangerous challenges, content glorifying depression, age‑restricted goods, engagement‑bait, non‑consensual likeness of living people, disallowed uses of deceased public figures, and potential IP infringement.
Practical playbooks
For schools and youth programmes
Turn on teen defaults; show parents how to disable personalisation and continuous scroll.
Build a “co‑create, don’t copy” culture with remix challenges that credit original prompts.
Include a short briefing on prompt injection and scams when browsing or importing prompts from others.
For brands and agencies
Plan remixable campaigns: supply base prompts, safe characters, and clear consent flows for Cameos.
Treat synthetic likeness carefully: get written consent for any real‑person likeness; log approvals.
Establish IP review before publishing; train teams to spot likely copyrighted elements in prompts.
For public sector and charities
Use steerable ranking to surface local initiatives and co‑creation.
Publish a transparent moderation policy and turnaround times for takedowns.
Provide easy “see less of this” guidance in community comms.
Governance & compliance checklist (UK‑ready)
Age‑appropriate design: Defaults for teens; parental control guidance in onboarding packs.
Data controls: Document how optional chat history is handled and how users can disable it.
DLP & privacy: If staff post from work accounts, ensure no personal/sensitive data is embedded in prompts, captions or assets.
Consent & likeness: No use of a living individual’s likeness without explicit consent; special care for minors and deceased public figures.
IP diligence: Maintain a checklist for copyrighted names, logos, characters and music.
Moderation ops: Clear reporting channels, SLAs and escalation to platform support where applicable.
Measurement: what “good” looks like
Creation rate (remixes and posts per active user) prioritised over raw views.
Safe participation (reports per 1,000 views trending down).
Control usage (percentage of users who adjusted feed settings).
Time to takedown for violative content.
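The four measures above can all be computed from a flat event log. Here is one minimal way to do it, assuming a hypothetical event schema (the field and event-type names are ours, not from any real analytics product):

```python
def feed_health_metrics(events: list[dict]) -> dict:
    """Compute the four 'good' signals from a flat event log.
    Assumed event shape: {'type': 'view'|'post'|'remix'|'report'
    |'settings_change'|'takedown', 'user': str,
    'lag_hours': float (takedown events only)}."""
    views = sum(1 for e in events if e["type"] == "view")
    creations = sum(1 for e in events if e["type"] in ("post", "remix"))
    active_users = {e["user"] for e in events}
    adjusters = {e["user"] for e in events if e["type"] == "settings_change"}
    reports = sum(1 for e in events if e["type"] == "report")
    lags = [e["lag_hours"] for e in events if e["type"] == "takedown"]
    return {
        "creation_rate": creations / max(1, len(active_users)),
        "reports_per_1k_views": 1000 * reports / max(1, views),
        "control_usage_pct": 100 * len(adjusters) / max(1, len(active_users)),
        "avg_takedown_hours": sum(lags) / len(lags) if lags else None,
    }
```

Tracking these weekly makes the goals concrete: creation rate rising, reports per 1,000 views falling, control usage non-trivial, and takedown time bounded by your published SLA.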
Common questions
Is Sora a typical social feed? No—ranking seeks creativity and connection over watch‑time.
Can parents limit it? Yes—parental controls can disable personalisation and continuous scroll for teens.
What powers personalisation? Sora activity, optional ChatGPT history, engagement, author and safety signals.
How is safety enforced? Blocks at generation, feed filtering for age/policy, plus human review and reporting.
FAQs
What signals does the Sora feed use?
Sora may use in‑app activity, optional ChatGPT history, engagement and author metrics, plus safety signals, to predict content you’ll like and want to remix.
Can teens use Sora safely?
Teen accounts have stricter defaults; parents can disable personalisation and continuous scroll. Teach teens to use “see less of this” and report features.
What kinds of content are not allowed in the feed?
Content involving graphic sex/violence, hate/extremism, self‑harm or disordered‑eating, bullying, dangerous challenges, age‑restricted goods, engagement‑bait, non‑consensual likeness of living people, disallowed uses of deceased public figures, or likely IP infringement.
Does Sora optimise for watch‑time?
No—the stated aim is to inspire creation and connection, not time‑spent.
Generation
Digital

UK Office
Generation Digital Ltd
33 Queen St,
London
EC4R 1AP
United Kingdom
Canada Office
Generation Digital Americas Inc
181 Bay St., Suite 1800
Toronto, ON, M5J 2T9
Canada
USA Office
Generation Digital Americas Inc
77 Sands St,
Brooklyn, NY 11201,
United States
EU Office
Generation Digital Software
Elgee Building
Dundalk
A91 X2R3
Ireland
Middle East Office
6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia
Company No: 256 9431 77 | Copyright 2026 | Terms and Conditions | Privacy Policy