Inside OpenAI’s In‑House Data Agent and What Enterprises Can Learn
AI
30 Jan 2026


OpenAI built an in‑house data agent that answers high‑impact questions across 600+ PB and ~70k datasets via natural language. It layers table‑level knowledge with product and organisational context to produce trustworthy, auditable answers—fast. Here’s what it is, why it matters, and a blueprint to replicate the pattern inside your company.
Why this matters now
Most organisations have the data to make better decisions—but not the context. Analysts spend time finding tables, decoding business logic, and rewriting the same queries. OpenAI’s post shows a practical model: an agent that reasons over data + metadata + institutional context to deliver usable answers, not just charts.
What OpenAI says they built
A data agent that employees query in natural language; it then plans, runs analysis, and returns answers with the right context.
It reasons across more than 600 petabytes and roughly 70,000 datasets, using internal product and organisational knowledge to avoid classic “wrong join” mistakes.
It maintains table‑level knowledge and uses an agent loop to plan, call tools, and verify before responding.
It’s designed for trustworthiness: provenance, constraints, and context are first‑class citizens.
Source: OpenAI engineering blog, “Inside OpenAI’s in‑house data agent,” 29 Jan 2026.
Enterprise translation: capabilities you should copy
Unified semantic layer
Map business terms to tables, fields, and policies (owners, PII flags, freshness). Store it where the agent can reason over it.
Agent loop with guardrails
Break problems into steps: understand → plan → fetch → analyse → validate → answer. Impose limits (cost, row counts, PII handling).
Provenance & verification
Return links to datasets, query fragments, and assumptions. Prefer retrieval/derivation over free‑form generation for numbers.
Context packer
Include product concepts, KPI definitions, and organisational nuances (e.g., “active user” rules by region) to prevent subtle errors.
Human‑in‑the‑loop
Let analysts correct mappings, label good answers, and add commentary that becomes reusable context.
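The loop above can be sketched as a thin wrapper around pluggable tools. This is a minimal illustration, not OpenAI’s actual implementation; the `Guardrails` fields and the stubbed planner/fetcher are assumptions:

```python
from dataclasses import dataclass

# Hypothetical guardrail limits; field names are illustrative.
@dataclass
class Guardrails:
    max_rows: int = 100_000
    max_cost_usd: float = 1.00

def run_agent(question, plan_fn, fetch_fn, analyse_fn, validate_fn,
              guardrails: Guardrails) -> dict:
    """Understand -> plan -> fetch -> analyse -> validate -> answer."""
    plan = plan_fn(question)
    if plan["estimated_cost_usd"] > guardrails.max_cost_usd:
        return {"status": "refused", "reason": "cost cap exceeded"}
    rows = fetch_fn(plan)
    if len(rows) > guardrails.max_rows:
        return {"status": "refused", "reason": "row limit exceeded"}
    result = analyse_fn(rows)
    if not validate_fn(result):
        return {"status": "needs_review", "result": result}
    # Provenance travels with the answer, never detached from it.
    return {"status": "ok", "result": result, "provenance": plan["sources"]}

# Minimal demo with stubbed tools.
plan_fn = lambda q: {"estimated_cost_usd": 0.10, "sources": ["warehouse.kpis"]}
fetch_fn = lambda plan: [{"kpi": "weekly_active_users", "value": 1200}]
analyse_fn = lambda rows: rows[0]["value"]
validate_fn = lambda v: v > 0

answer = run_agent("What was WAU last week?", plan_fn, fetch_fn,
                   analyse_fn, validate_fn, Guardrails())
```

The key design point is that every exit path returns an explicit status, so refusals and review-needed cases are as observable as successes.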
Reference architecture
Interface: chat + forms; prompt starters for common tasks (KPI checks, trend analysis, cohort diffs).
Brain: an agent runtime that can plan and call tools; orchestrates SQL engines, notebooks, and metric stores.
Knowledge: semantic/metrics layer (owners, KPIs, lineage, policies), plus documentation and code snippets.
Data plane: your warehouses/lakes; read‑only by default; sandbox for heavy jobs.
Safety: query cost caps, row‑level policies, PII masking, audit logs.
Observability: success rates, correction loops, freshness and drift monitors.
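A minimal sketch of what one entry in the knowledge layer might look like. The field names and the `active_user` example are assumptions for illustration, not a specific vendor’s schema:

```python
from dataclasses import dataclass

# Illustrative semantic-layer entry; field names are assumptions.
@dataclass(frozen=True)
class SemanticEntry:
    business_term: str       # e.g. "active user"
    table: str               # governed physical location
    column: str
    owner: str               # accountable team
    contains_pii: bool
    freshness_sla_hours: int

registry = {
    "active_user": SemanticEntry(
        business_term="active user",
        table="analytics.dim_users",
        column="is_active_28d",
        owner="growth-data",
        contains_pii=False,
        freshness_sla_hours=24,
    ),
}

def resolve(term: str) -> SemanticEntry:
    """Map a business term to its governed physical definition."""
    return registry[term]
```

Storing owners, PII flags, and SLAs next to the mapping is what lets the agent enforce policy at query time instead of after the fact.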
Playbooks to ship in 30 days
Playbook 1 — KPI truth service
Scope: 10 canonical KPIs; encode definitions + owners.
Build: prompts + metric store; agent verifies freshness and returns figures with links and caveats.
Win: stop “duelling dashboards”.
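A sketch of the response shape a KPI truth service might return, with freshness checked against an SLA and provenance attached. The `warehouse://` URL and the SLA fields are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

def kpi_answer(name: str, value: float, last_loaded: datetime,
               sla_hours: int, source_url: str) -> dict:
    """Return a KPI figure with a freshness caveat and a provenance link."""
    age = datetime.now(timezone.utc) - last_loaded
    stale = age > timedelta(hours=sla_hours)
    return {
        "kpi": name,
        "value": value,
        "stale": stale,
        "caveat": f"data is {age.total_seconds() / 3600:.1f}h old" if stale else None,
        "source": source_url,
    }

ans = kpi_answer(
    "weekly_active_users", 41_250,
    last_loaded=datetime.now(timezone.utc) - timedelta(hours=2),
    sla_hours=24,
    source_url="warehouse://analytics.kpi_wau",
)
```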
Playbook 2 — Growth experiment assistant
Scope: analyse A/B outcomes and cohort impacts.
Build: templated queries + guardrails (statistical power, minimum detectable effect, sample‑size alerts).
Win: faster, safer experiment reads.
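The power/MDE guardrail can be as simple as a pre-registered sample-size check. The normal-approximation formula below is standard for two-proportion tests; the baseline and lift values are illustrative:

```python
from math import ceil
from statistics import NormalDist

def required_n_per_arm(baseline: float, mde: float,
                       alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per arm for detecting an absolute lift `mde` over a
    conversion rate `baseline`, two-sided z-test, normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p1, p2 = baseline, baseline + mde
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(variance * (z_a + z_b) ** 2 / mde ** 2)

# Detect a 2pp absolute lift over a 10% baseline at 80% power.
n = required_n_per_arm(baseline=0.10, mde=0.02)
```

An agent can refuse to read out an experiment whose per-arm sample is below this threshold, which is the “safer” half of the win.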
Playbook 3 — Support & ops insights
Scope: ticket drivers, time‑to‑resolution, deflection opportunities.
Build: ingestion from CRM/helpdesk with taxonomy mapping; agent suggests next actions.
Win: weekly exec brief in minutes.
Governance & risk controls (non‑negotiables)
Least privilege: read‑only roles; scoped datasets; approval for writing back.
Data minimisation: restrict PII; tokenise where possible; log access.
Evaluation: test sets for correctness, bias, and leakage; regression gates on each release.
Change control: model/metric versioning; lineage snapshots; rollback plans.
Cost management: per‑query budgets; sampling rules; batch heavy jobs.
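Per-query budgets can be enforced with a dry-run-style estimate before execution, a pattern some warehouses support natively. The price constant below is illustrative, not a real vendor rate:

```python
PRICE_PER_TB_USD = 5.00  # illustrative rate, not a quoted vendor price

def check_budget(estimated_bytes: int, budget_usd: float):
    """Estimate query cost from bytes scanned; refuse if over budget."""
    cost = estimated_bytes / 1e12 * PRICE_PER_TB_USD
    return cost <= budget_usd, round(cost, 4)

# A 200 GB scan against a $0.50 per-query budget is refused.
ok, cost = check_budget(estimated_bytes=200 * 10**9, budget_usd=0.50)
```

The refusal happens before any compute is spent, which is what makes the cap a governance control rather than a postmortem metric.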
Build vs buy
Buy if your stack is standard and you need velocity; choose vendors that expose the semantic/metrics layer and give you export + BYO‑model options.
Build if you have complex definitions, strict sovereignty, or need deep tool customisation.
Hybrid: buy the runtime, own the knowledge layer.
Measuring impact
Answer time (p50/p95), correctness (expert review), and query cost per answer.
KPI reconciliation rate and dashboard sprawl reduction.
Experiment readout cycle time.
Executive reliance (weekly brief usage), with spot‑checks for accuracy.
FAQs
Will this replace analysts?
No—good agents remove the busywork (table hunt, boilerplate SQL) so analysts spend time on design and decisions.
How do we avoid hallucinated numbers?
Treat numbers as computed from trusted sources with visible queries; block generation unless provenance is attached.
Can we use open‑weight models?
Yes. Keep the knowledge layer model‑agnostic and swap models behind an orchestration API.
What about SaaS data?
Snapshot into your lake; map fields to business terms; control schema drift with tests.
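A minimal drift test compares each snapshot’s columns against an expected contract and fails loudly on additions, removals, or type changes. The ticket schema below is hypothetical:

```python
# Hypothetical column contract for a helpdesk snapshot.
EXPECTED = {"ticket_id": "string", "opened_at": "timestamp", "status": "string"}

def schema_drift(actual: dict) -> dict:
    """Report columns missing, unexpected, or retyped vs. the contract."""
    return {
        "missing": sorted(EXPECTED.keys() - actual.keys()),
        "unexpected": sorted(actual.keys() - EXPECTED.keys()),
        "type_changed": sorted(k for k in EXPECTED.keys() & actual.keys()
                               if EXPECTED[k] != actual[k]),
    }

drift = schema_drift({"ticket_id": "string", "opened_at": "string",
                      "priority": "string"})
```

Running this on every load turns silent SaaS schema changes into visible pipeline failures.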
Next Steps
Want an internal data agent with guardrails?
We’ll blueprint your semantic layer, wire an agent runtime, and ship two playbooks (KPI truth + experiments) with governance and cost controls.
Génération Numérique

UK Office
33 Queen Street,
London
EC4R 1AP
United Kingdom
Canada Office
1 University Ave,
Toronto,
ON M5J 1T1,
Canada
NAMER Office
77 Sands St,
Brooklyn,
NY 11201,
United States
EMEA Office
Charlemont Street, Saint Kevin's, Dublin,
D02 VN88,
Ireland
Middle East Office
6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia
Company number: 256 9431 77 | Copyright 2026 | Terms & Conditions | Privacy Policy