BNY Eliza Platform: Scaling AI Agents with OpenAI
OpenAI
Feb 4, 2026


BNY’s Eliza platform uses OpenAI technology to help employees build governed AI agents for specific tasks. With more than 20,000 staff enabled to create and use agents, BNY aims to automate routine work, improve consistency, and free teams to focus on higher-value client outcomes—while maintaining strong controls.
Enterprise AI has moved past pilots. The real differentiator now is whether an organisation can scale adoption safely—across teams, geographies, and risk profiles—without creating chaos.
BNY’s approach is a useful signal for any large enterprise: build a platform (Eliza), embed governance, and enable people closest to the work to create AI agents that solve real problems. According to OpenAI’s case study and BNY’s own materials, Eliza is designed as an enterprise AI platform to enhance client service, operations, and cultural transformation—while supporting an “AI for everyone” model of adoption.
What is the Eliza platform at BNY?
Eliza is BNY’s enterprise AI platform, built to provide reusable AI capabilities across the organisation. In BNY’s words, it’s designed to enhance client service, improve company operations, and drive cultural transformation using AI.
What makes Eliza stand out isn’t just the tooling—it’s the combination of:
Widespread enablement (large numbers of employees can build and use agents), and
Guardrails (identity, access controls, workflow boundaries, and governance suitable for regulated work).
What’s new: “AI agents” and BNY’s “digital employees”
In OpenAI’s write-up, BNY describes advanced agents as “digital employees”—AI agents with identities, access controls, and dedicated workflows. These agents can be shaped to specific processes, shifting humans from doing every first draft of work to supervising, training, and improving the agent’s output over time.
This framing matters because it changes how you plan adoption:
You’re not just rolling out a chatbot.
You’re introducing agentic workflows that can touch data, decisions, and operational steps—so governance must be designed in, not bolted on later.
Why this matters for efficiency (and client outcomes)
Efficiency gains don’t come from “using AI”. They come from redesigning workflows so that:
routine steps are automated,
quality checks are consistent, and
people spend more time on judgement, exceptions, and client-facing value.
OpenAI’s case study cites examples of agents supporting work such as payment instruction validation and code security enhancements. These are exactly the kinds of tasks where automation can reduce friction while improving consistency—two ingredients that clients feel quickly (fewer delays, fewer avoidable errors, faster turnaround).
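The case study doesn't publish implementation details, so the sketch below is only a minimal illustration of what an automated first pass on a payment instruction might look like in Python: rule checks produce a list of findings, and anything flagged is routed to a human reviewer. The fields, rules, and routing messages are assumptions for illustration, not BNY's or OpenAI's implementation.

```python
from dataclasses import dataclass


@dataclass
class PaymentInstruction:
    # Illustrative fields only; real instructions (e.g. SWIFT messages) carry far more detail.
    reference: str
    amount: float
    currency: str
    beneficiary_iban: str
    beneficiary_name: str


KNOWN_CURRENCIES = {"USD", "EUR", "GBP", "JPY", "CHF"}


def first_pass_validate(instr: PaymentInstruction) -> list[str]:
    """Automated first pass: return a list of issues for a human to resolve."""
    issues = []
    if instr.amount <= 0:
        issues.append("amount must be positive")
    if instr.currency not in KNOWN_CURRENCIES:
        issues.append(f"unrecognised currency code: {instr.currency}")
    if len(instr.beneficiary_iban.replace(" ", "")) < 15:
        issues.append("IBAN looks too short")
    if not instr.beneficiary_name.strip():
        issues.append("missing beneficiary name")
    return issues


def route(instr: PaymentInstruction) -> str:
    """Clean instructions proceed automatically; anything flagged goes to a reviewer."""
    issues = first_pass_validate(instr)
    if issues:
        return f"HUMAN REVIEW ({instr.reference}): " + "; ".join(issues)
    return f"AUTO-APPROVED ({instr.reference})"


if __name__ == "__main__":
    print(route(PaymentInstruction("PAY-001", 1200.0, "EUR", "DE89 3704 0044 0532 0130 00", "Acme GmbH")))
    print(route(PaymentInstruction("PAY-002", -50.0, "XYZ", "123", "")))
```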
There’s also evidence that BNY has pushed broad AI literacy: Fortune reported that a BNY spokesperson said 98% of employees were trained on generative AI, with many using Eliza daily (as of September 2025).
How Eliza supports enterprise-wide adoption
Most AI programmes stall because adoption is treated as a comms initiative rather than an operating model. Eliza’s approach—enablement plus guardrails—maps to what actually works in large organisations:
1) Make building “allowed” (and supported)
If only one central team can build, the backlog explodes. By enabling large numbers of employees (reported as 20,000+) to build agents, BNY expands capacity dramatically—while keeping development close to real operational needs.
2) Standardise the platform layer
A shared platform avoids dozens of disconnected tools, prompts, and shadow workflows. BNY positions Eliza as a foundational set of reusable capabilities across the enterprise.
3) Add identity and access controls for agents
Once agents become “digital employees”, you need clear boundaries: what they can access, what they can do, and how activity is monitored. OpenAI highlights identities and access controls as core to BNY’s concept.
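Neither OpenAI nor BNY publishes the underlying policy model, but the core idea of giving each agent an identity with a deny-by-default allow-list can be expressed in a few lines. The `AgentIdentity` record, tool names, and data domains below are hypothetical, included only to make the access-boundary concept concrete:

```python
from dataclasses import dataclass
import logging

logging.basicConfig(level=logging.INFO)


@dataclass(frozen=True)
class AgentIdentity:
    # Hypothetical record: each agent gets a name, an owning team, and explicit allow-lists.
    agent_id: str
    owner_team: str
    allowed_tools: frozenset[str]
    allowed_data_domains: frozenset[str]


def authorise(agent: AgentIdentity, tool: str, data_domain: str) -> bool:
    """Deny by default: the agent may only call tools and touch data it was explicitly granted."""
    permitted = tool in agent.allowed_tools and data_domain in agent.allowed_data_domains
    # Every decision is logged, so activity can be audited per agent identity.
    logging.info("agent=%s tool=%s domain=%s permitted=%s",
                 agent.agent_id, tool, data_domain, permitted)
    return permitted


# Example: a reporting agent can draft summaries from client reporting data,
# but is refused access to payment execution.
reporting_agent = AgentIdentity(
    agent_id="agent-client-reporting-01",
    owner_team="Client Reporting",
    allowed_tools=frozenset({"draft_summary", "search_documents"}),
    allowed_data_domains=frozenset({"client_reporting"}),
)

assert authorise(reporting_agent, "draft_summary", "client_reporting")
assert not authorise(reporting_agent, "execute_payment", "payments")
```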
4) Use “deep research” and structured reasoning where it fits
OpenAI notes that select teams are experimenting with ChatGPT Enterprise capabilities like deep research for multi-step reasoning across internal and external data—useful for areas like scenario planning and strategic analysis.
(Practical takeaway: don’t force agents into every task. Use them where multi-step work or repetitive processing creates meaningful leverage.)
Practical examples (the kind that typically work first)
If you’re trying to replicate this in your own organisation, start with tasks that are:
repetitive and rules-based,
high-volume,
low-to-medium risk with strong review loops, and
painful enough that teams actually want change.
Common starting points we see in enterprise workflow tools include:
Client reporting drafts and summaries (human-reviewed before sending)
Request intake and triage (categorise, route, assign, set SLAs)
Knowledge Q&A (grounded in internal policies and playbooks)
Document checking (flag missing fields, inconsistencies, exceptions)
In regulated environments, the pattern that sticks is “AI does the first pass; humans handle exceptions and approval”.
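As a concrete illustration of that pattern, here is a minimal Python sketch of request intake and triage: an automated classifier makes the first pass, confident results are routed straight to a work queue, and low-confidence or ambiguous requests fall through to a human. The categories, confidence threshold, and stub classifier are assumptions for illustration, not a production design.

```python
from dataclasses import dataclass

CATEGORIES = ("account_query", "reporting_request", "payment_issue")
CONFIDENCE_THRESHOLD = 0.80  # below this, a human triages the request


@dataclass
class TriageResult:
    category: str
    confidence: float
    assignee: str


def classify(request_text: str) -> tuple[str, float]:
    """Stub classifier. In practice this would call a governed LLM or ML model."""
    text = request_text.lower()
    if "payment" in text:
        return "payment_issue", 0.92
    if "report" in text:
        return "reporting_request", 0.88
    return "account_query", 0.55  # weak signal: deliberately low confidence


def triage(request_text: str) -> TriageResult:
    category, confidence = classify(request_text)
    if confidence < CONFIDENCE_THRESHOLD:
        # Exceptions and low-confidence cases go to a person, not straight to a queue.
        return TriageResult(category, confidence, assignee="human_triage_desk")
    return TriageResult(category, confidence, assignee=f"{category}_queue")


print(triage("Payment failed for client X"))        # routed automatically
print(triage("Something looks off on my account"))  # escalated to a human
```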
What other enterprises can learn from BNY’s playbook
BNY’s story is compelling because it demonstrates three principles that generalise well:
Platform thinking beats point solutions
A single enterprise AI platform creates consistency, shared controls, and reusable building blocks.
Adoption scales when you treat employees as builders
Enable people who know the work to shape agents, paired with training and governance.
Governance is the feature, not the footnote
Identity, access, auditability, and policy matter more as you move from “chat” to “do”.
How Generation Digital helps you apply this (without the hype)
If you like the pattern (platform + governance + adoption) and want to apply it to the tools your teams already use, Generation Digital can help you design and roll out AI-enabled workflows safely and effectively—particularly across Asana, Miro, Notion, and Glean.
Where we typically start:
AI readiness and roadmap: prioritise use cases that will land, define owners, and set success measures.
Workflow design and automation: reduce operational drag with clear handoffs and automation patterns (intake → triage → delivery).
Trust and governance: align data residency, security controls, and admin policies so adoption doesn’t stall at “is this compliant?”.
Summary and next steps
BNY’s Eliza platform is a strong example of what enterprise AI looks like when it’s treated as an operating model: a shared platform, strong controls, and thousands of empowered employees building agents that improve real workflows.
Next steps:
If you’re exploring enterprise AI adoption, identify 2–3 workflows where automation will remove meaningful friction in the next 90 days.
Put governance in place early (identity, access, audit trail, data boundaries).
Then scale by enabling the teams closest to the work—supported with training and repeatable patterns.
FAQs
Q1: What is the Eliza platform?
Eliza is BNY’s enterprise AI platform designed to provide reusable AI capabilities across the organisation. It supports building and using AI agents to improve operations and client service with governance controls.
Q2: How does Eliza help employees build AI agents?
BNY’s model enables a large population of employees to create task-specific agents, while standardising controls such as identity and access boundaries—so teams can automate work without losing oversight.
Q3: How many employees use Eliza?
OpenAI’s case study describes Eliza enabling 20,000+ employees to build AI agents across BNY.
Q4: What are “digital employees” at BNY?
In the OpenAI case study, BNY describes certain advanced agents as “digital employees” with identities, access controls, and dedicated workflows—designed to handle specific tasks under governance.
Q5: What should regulated organisations copy first?
Start with high-volume, rules-based workflows where AI can draft, summarise, classify, or validate—then keep humans in the loop for approval and exceptions. Governance should be designed in from day one.