Sora Feed Philosophy: How Ranking, Controls and Safety Work

OpenAI

4 February 2026


The Sora feed is designed to inspire creation, not maximise scrolling. It ranks for creativity and connections, lets users steer the feed, and gives parents controls for teen accounts. Personalisation can use Sora activity, optional ChatGPT history and engagement signals. Safety layers block harmful content at generation, filter feeds and add human review.

OpenAI’s Sora combines powerful video generation with a social feed. Rather than chasing watch‑time, the company says the feed is tuned to spark creativity and help people connect. Here’s what that means in practice, and what organisations should do to deploy or engage with Sora responsibly.

The four principles, translated for decision‑makers

  1. Optimise for creativity, not passive scroll.
    Ranking favours original creation, participation and remixing over endless consumption. For brands and educators, this implies campaigns and assignments should invite making (e.g., remix challenges) rather than just views.

  2. Put users in control.
    The feed ships with steerable ranking—people can tell the algorithm what they want to see. Parents can switch off personalisation and control continuous scroll for teen accounts. Build your onboarding and comms to highlight these controls.

  3. Prioritise connection.
    Connected content (between people who know each other or opt into interactions such as Cameos) is favoured over global, unconnected posts. Design interactive experiences—co‑creation prompts, safe Cameo flows and reply‑with‑a‑remix mechanics.

  4. Balance safety and freedom.
    Sora combines proactive blocks at generation time with feed filtering and human review. Your internal policy should mirror this: prevent risky generations up‑front, filter what’s eligible for distribution, and keep a clear report‑and‑takedown path.

How personalisation works

Sora may consider:

  • Your activity on Sora: posts, follows, likes/comments, remixes and approximate location (e.g., city) based on IP.

  • Optional ChatGPT history: can be switched off in Sora’s data controls.

  • Engagement signals: views, likes, comments, “see less of this”, remixes.

  • Author signals: creator’s follower count and past engagement.

  • Safety signals: eligibility vs. policy and distribution rules.
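
Conceptually, these signals combine into a per-post score in which creation and connection outweigh passive watch-time, and safety acts as a hard gate. The sketch below is a minimal illustration of that idea only; the field names and weights are invented for this example and are not Sora's actual ranking model.

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    """Hypothetical signals for one candidate post (names are illustrative)."""
    is_connected: bool      # viewer follows the author or opted into Cameos with them
    predicted_remix: float  # 0..1 chance the viewer remixes or co-creates
    predicted_watch: float  # 0..1 chance of a passive watch-through
    passes_safety: bool     # eligible under policy and distribution rules

def rank_score(s: PostSignals) -> float:
    """Toy ranking: creation weighted above watch-time, connection boosts,
    and ineligible content scores zero so it never reaches the feed."""
    if not s.passes_safety:
        return 0.0
    score = 3.0 * s.predicted_remix + 1.0 * s.predicted_watch
    if s.is_connected:
        score *= 1.5  # favour connected content over global, unconnected posts
    return score
```

Under these assumed weights, a connected post likely to be remixed outranks an unconnected post that would merely be watched, which is the behaviour the principles above describe.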

What this means for organisations

  • Expect feeds to reflect participation more than passive viewing.

  • Teach users how to adjust data controls (especially disabling ChatGPT history if desired).

  • Use “see less” guidance in your digital citizenship materials so teens can tune their experience.

Safety model (proactive + reactive)

  • Creation‑time guardrails: Unsafe prompts or outputs are blocked within Sora before a post exists.

  • Distribution filtering: Content that’s age‑inappropriate or policy‑violating is filtered from feeds, galleries and side‑character surfaces; teen accounts get stricter defaults.

  • Human review & reporting: Users can report content; moderation teams proactively check activity to catch edge cases.

  • Balance principle: Too many blocks dampen creativity; too few undermine trust. The approach aims for both safety and room to create.
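
The layered model above can be read as a pipeline: block at generation, filter at distribution, then route reports to humans. This toy function shows the control flow only; the stage names and inputs are illustrative assumptions, not Sora's implementation.

```python
def moderate(prompt_ok: bool, output_ok: bool, age_appropriate: bool,
             teen_account: bool, reported: bool) -> str:
    """Hypothetical three-layer moderation pipeline (stages are illustrative)."""
    # Layer 1: creation-time guardrails -- the post never exists.
    if not (prompt_ok and output_ok):
        return "blocked_at_generation"
    # Layer 2: distribution filtering -- exists, but withheld from the feed;
    # teen accounts get stricter defaults.
    if teen_account and not age_appropriate:
        return "filtered_from_feed"
    # Layer 3: reactive review -- user reports reach human moderators.
    if reported:
        return "queued_for_human_review"
    return "eligible_for_feed"
```

An internal policy mirroring this order, prevention first, filtering second, reactive review last, keeps the cheapest intervention earliest in the pipeline.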

Examples of content not eligible for feed

Graphic sexual content, graphic violence, extremist propaganda, hate, self‑harm and disordered‑eating content, unhealthy dieting/exercise behaviours, appearance‑based shaming, bullying, dangerous challenges, content glorifying depression, age‑restricted goods, engagement‑bait, non‑consensual likeness of living people, disallowed uses of deceased public figures, and potential IP infringement.

Practical playbooks

For schools and youth programmes

  • Turn on teen defaults; show parents how to disable personalisation and continuous scroll.

  • Build a “co‑create, don’t copy” culture with remix challenges that credit original prompts.

  • Include a short briefing on prompt injection and scams when browsing or importing prompts from others.

For brands and agencies

  • Plan remixable campaigns: supply base prompts, safe characters, and clear consent flows for Cameos.

  • Treat synthetic likeness carefully: get written consent for any real‑person likeness; log approvals.

  • Establish IP review before publishing; train teams to spot likely copyrighted elements in prompts.

For public sector and charities

  • Use steerable ranking to surface local initiatives and co‑creation.

  • Publish a transparent moderation policy and turnaround times for takedowns.

  • Provide easy “see less of this” guidance in community comms.

Governance & compliance checklist (UK‑ready)

  • Age‑appropriate design: Defaults for teens; parental control guidance in onboarding packs.

  • Data controls: Document how optional chat history is handled and how users can disable it.

  • DLP & privacy: If staff post from work accounts, ensure no personal/sensitive data is embedded in prompts, captions or assets.

  • Consent & likeness: No use of a living individual’s likeness without explicit consent; special care for minors and deceased public figures.

  • IP diligence: Maintain a checklist for copyrighted names, logos, characters and music.

  • Moderation ops: Clear reporting channels, SLAs and escalation to platform support where applicable.

Measurement: what “good” looks like

  • Creation rate (remixes and posts per active user) valued above raw views.

  • Safe participation (reports per 1,000 views trending down).

  • Control usage (percentage of users who adjusted feed settings).

  • Time to takedown for violative content.
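
These four metrics are straightforward to compute from event logs. The helpers below are a minimal sketch of the arithmetic; the function names and inputs are this article's invention, not a platform API.

```python
from datetime import datetime, timedelta

def creation_rate(remixes: int, posts: int, active_users: int) -> float:
    """Creations (remixes + posts) per active user in the period."""
    return (remixes + posts) / active_users

def reports_per_1k_views(reports: int, views: int) -> float:
    """Safe-participation metric: reports normalised per 1,000 views."""
    return 1000 * reports / views

def control_usage(users_who_adjusted: int, total_users: int) -> float:
    """Share of users who changed at least one feed setting."""
    return users_who_adjusted / total_users

def time_to_takedown(reported_at: datetime, removed_at: datetime) -> timedelta:
    """Elapsed time from report to removal for violative content."""
    return removed_at - reported_at
```

Trending these weekly, creation rate and control usage up, reports per 1,000 views and takedown time down, gives a simple dashboard for "good" as defined above.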

Common questions

  • Is Sora a typical social feed? No—ranking seeks creativity and connection over watch‑time.

  • Can parents limit it? Yes—parental controls can disable personalisation and continuous scroll for teens.

  • What powers personalisation? Sora activity, optional ChatGPT history, engagement, author and safety signals.

  • How is safety enforced? Blocks at generation, feed filtering for age/policy, plus human review and reporting.

FAQs

What signals does the Sora feed use?
Sora may use in‑app activity, optional ChatGPT history, engagement and author metrics, plus safety signals, to predict content you’ll like and want to remix.

Can teens use Sora safely?
Teen accounts have stricter defaults; parents can disable personalisation and continuous scroll. Teach teens to use “see less of this” and report features.

What kinds of content are not allowed in the feed?
Content involving graphic sex/violence, hate/extremism, self‑harm or disordered‑eating, bullying, dangerous challenges, age‑restricted goods, engagement‑bait, non‑consensual likeness of living people, disallowed uses of deceased public figures, or likely IP infringement.

Does Sora optimise for watch‑time?
No—the stated aim is to inspire creation and connection, not time‑spent.

Generation Digital

UK Office
33 Queen St,
London
EC4R 1AP
United Kingdom

Canada Office
1 University Ave,
Toronto,
ON M5J 1T1,
Canada

NAMER Office
77 Sands St,
Brooklyn,
NY 11201,
United States

EMEA Office
Charlemont Street, Saint Kevin's, Dublin,
D02 VN88,
Ireland

Middle East Office
6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia

Company number: 256 9431 77 | Copyright 2026 | Terms and Conditions | Privacy Policy
