Model Context Protocol (MCP): The Adoption Guide
Nov 20, 2025


Model Context Protocol (MCP): What It Is and How to Adopt It
If your roadmap includes AI assistants that act on real systems (raising tickets, posting to Slack, querying Snowflake), MCP is the fastest, least risky way to wire that up. Born in late 2024 and hardened through 2025, MCP standardises how LLM apps connect to tools and data so you don’t rebuild integrations for every model or vendor.
A 60-second definition
MCP is an open protocol for connecting AI applications (the “host”) to external capabilities via a client–server pattern. The host embeds an MCP client; your tools/data live behind one or more MCP servers. The client speaks a well-specified protocol (JSON-RPC 2.0) so the LLM can discover tools, call functions, and retrieve context consistently.
Think of MCP as USB-C for AI: one port that works with many peripherals. Swap Claude for ChatGPT, or vice versa, without rewriting every connector.
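To make the "well-specified protocol" concrete, here is a sketch of the JSON-RPC 2.0 messages exchanged when a client discovers and calls a tool. The `tools/list` and `tools/call` method names follow the MCP spec; the `create_ticket` tool and its schema are hypothetical.

```python
import json

# 1. The client asks the server what tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. The server replies with schema-driven tool definitions
#    (the tool shown here is made up for illustration).
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "create_ticket",
            "description": "Create a support ticket",
            "inputSchema": {
                "type": "object",
                "properties": {"title": {"type": "string"}},
                "required": ["title"],
            },
        }]
    },
}

# 3. The client, acting on the model's behalf, invokes a tool by name
#    with arguments that must satisfy the advertised schema.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "create_ticket",
               "arguments": {"title": "Login page 500s"}},
}

print(json.dumps(call_request, indent=2))
```

Because every host and server speaks this same shape, swapping the host changes nothing on the wire.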
Why MCP matters to SaaS teams in 2025
Tames the N×M problem. Instead of building N bespoke integrations for M models, MCP abstracts the interface so one server can work across multiple LLM hosts. That cuts complexity and time-to-market.
Broad ecosystem momentum. MCP started with Anthropic and now appears across IDEs, Claude Desktop, and OpenAI’s connectors/Agents SDK—with early support in ChatGPT’s Developer Mode. This cross-vendor energy is why many CIOs see MCP as the default path to “agent-ready” SaaS.
Enterprise-grade patterns emerging. Vendors are releasing MCP-specific security layers and encryption-first patterns aligned to regulated sectors, speeding safe adoption.
Core architecture (simple mental model)
Host: The AI app (e.g., Claude Desktop, ChatGPT) running an MCP client.
MCP client: Translates user intent/tool calls into protocol messages.
MCP server(s): Your side—APIs, databases, or workflows exposed with schema-driven tool definitions and responses over JSON-RPC 2.0.
This separation lets platform teams publish a catalogue of safe capabilities (e.g., “create Jira issue”, “query BigQuery”) that any compliant LLM can use—subject to policy and auth.
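The server side of this mental model can be sketched without any framework: a catalogue of schema-described tools plus a JSON-RPC dispatcher. Real deployments would use an official MCP SDK and a transport such as stdio or HTTP; the tool name and handler below are hypothetical.

```python
import json

# Catalogue of capabilities the server publishes. Each entry carries a
# JSON Schema for its arguments and a handler (here a stub lambda).
TOOLS = {
    "create_jira_issue": {
        "description": "Create a Jira issue",
        "inputSchema": {"type": "object",
                        "properties": {"summary": {"type": "string"}},
                        "required": ["summary"]},
        "handler": lambda args: f"Created issue: {args['summary']}",
    },
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 request against the tool catalogue."""
    rid, method = request.get("id"), request.get("method")
    if method == "tools/list":
        tools = [{"name": n, "description": t["description"],
                  "inputSchema": t["inputSchema"]}
                 for n, t in TOOLS.items()]
        return {"jsonrpc": "2.0", "id": rid, "result": {"tools": tools}}
    if method == "tools/call":
        params = request.get("params", {})
        tool = TOOLS.get(params.get("name"))
        if tool is None:
            return {"jsonrpc": "2.0", "id": rid,
                    "error": {"code": -32602, "message": "Unknown tool"}}
        text = tool["handler"](params.get("arguments", {}))
        return {"jsonrpc": "2.0", "id": rid,
                "result": {"content": [{"type": "text", "text": text}]}}
    return {"jsonrpc": "2.0", "id": rid,
            "error": {"code": -32601, "message": "Method not found"}}

resp = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
               "params": {"name": "create_jira_issue",
                          "arguments": {"summary": "Demo"}}})
print(json.dumps(resp))
```

Publishing a new capability means adding one catalogue entry; the dispatch logic, and every compliant host, stay unchanged.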
What platforms currently support MCP?
Anthropic Claude / Claude Desktop: First-party MCP reference with numerous example servers. anthropic.com
OpenAI: Connectors and remote MCP servers via the OpenAI API/Agents SDK; broader client support is emerging in Developer Mode. OpenAI Platform
Developer tooling: Official and community servers for GitHub, Buildkite and more; thriving open-source lists to jump-start integration. GitHub
Industry interest: Microsoft has publicly endorsed industry standards like MCP to help agent ecosystems interoperate.
Our partner ecosystem (MCP-ready)
Asana — Official MCP server lets AI tools create/read tasks and interact with the Work Graph via standard tools. (Asana)
Miro — MCP server available (currently labelled beta/waitlist in some materials) to query board context and trigger actions from AI tools. (developers.miro.com)
Notion — Hosted Notion MCP enables secure read/write to workspace objects; works with Claude, ChatGPT and Cursor. (developers.notion.com)
Glean — Remote MCP server built into the platform to expose permission-aware enterprise knowledge to any MCP-compatible host. (developers.glean.com)
| Partner | MCP status | Docs |
|---|---|---|
| Asana | GA: official MCP server | “MCP Server” docs & integration guide (Asana) |
| Miro | Beta / waitlist noted in site copy | Developer guides + public waitlist page (developers.miro.com) |
| Notion | GA: hosted MCP | Dev docs + Help Center overview (developers.notion.com) |
| Glean | GA: remote MCP server | Admin & user guides (developers.glean.com) |
Security: what MCP solves—and what it doesn’t
MCP is not a silver bullet. It gives you a consistent conduit; you still need enterprise guardrails:
Threats: Prompt injection, over-privileged servers, and untrusted outputs can lead to data leakage or unintended actions (e.g., “MCP-UPD”).
Controls to add:
Strong authentication/authorisation at the server boundary (tokens, mTLS, scoped RBAC).
Policy filters to restrict tool arguments and outputs.
Audit/recording of every tool call and response.
Data security patterns (application-layer encryption / hold-your-own-key) for sensitive stores.
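A policy filter at the server boundary can be as simple as an allow-list mapping each tool to a required scope and argument rules. This is an illustrative sketch, not a standard MCP feature; the tool name, scope labels, and SQL rule are assumptions.

```python
import re

# Per-tool policy: required caller scope plus regex rules that each
# argument must satisfy before the call is forwarded to the backend.
POLICY = {
    "query_warehouse": {
        "scope": "read",
        # Only allow statements that start with SELECT.
        "arg_rules": {"sql": re.compile(r"^\s*SELECT\b", re.IGNORECASE)},
    },
}

def check_call(tool: str, arguments: dict, caller_scopes: set) -> None:
    """Raise PermissionError unless the call passes the tool's policy."""
    policy = POLICY.get(tool)
    if policy is None:
        raise PermissionError(f"Tool not in allow-list: {tool}")
    if policy["scope"] not in caller_scopes:
        raise PermissionError(f"Caller lacks scope: {policy['scope']}")
    for arg, rule in policy["arg_rules"].items():
        value = str(arguments.get(arg, ""))
        if not rule.search(value):
            raise PermissionError(f"Argument rejected by policy: {arg}")

check_call("query_warehouse", {"sql": "SELECT 1"}, {"read"})  # passes
try:
    check_call("query_warehouse", {"sql": "DROP TABLE users"}, {"read"})
except PermissionError as e:
    print("blocked:", e)
```

Running the filter server-side means a prompt-injected model can ask for anything, but only policy-compliant calls reach your systems.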
Build vs buy: MCP servers
You can build simple servers quickly (many teams start with a “read-only analytics” server, then add write actions). Community examples and templates exist for common backends and languages. For speed, you can also adopt vendor-maintained servers (GitHub, CI/CD, comms).
A pragmatic 6-step rollout for SaaS platforms
1. Pick one high-value, low-risk flow. E.g., “Create/read incidents” or “Read dashboards”. Keep scope tight for Week 1 wins.
2. Stand up an MCP server for that flow with least-privilege credentials; expose a small, well-typed toolset and validate arguments.
3. Integrate a host (Claude Desktop or OpenAI Agents) in a dev tenant. Wire in secrets via your standard vault and rotate.
4. Add guardrails: schema validation, allow-lists, output checks, audit logging. Map every tool to a named policy.
5. Pilot with real users inside Slack or VS Code. Track accuracy, action failure rates, and time-to-resolution versus your baseline.
6. Harden & scale: introduce mTLS, per-tool scopes, and encryption patterns for regulated data; then add more servers to your catalogue.
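The audit-logging guardrail from step 4 can be sketched as a wrapper that records every tool invocation with its policy, arguments, outcome, and timing. The tool function and policy name below are hypothetical; in production the log entries would ship to your SIEM rather than an in-memory list.

```python
import json
import time
import uuid

AUDIT_LOG = []  # stand-in for a durable, append-only audit store

def audited(tool_name: str, policy: str, fn):
    """Wrap a tool handler so every call is recorded, pass or fail."""
    def wrapper(arguments: dict):
        entry = {"id": str(uuid.uuid4()), "tool": tool_name,
                 "policy": policy, "arguments": arguments,
                 "ts": time.time()}
        try:
            entry["result"] = fn(arguments)
            entry["status"] = "ok"
        except Exception as e:
            entry["status"], entry["error"] = "error", str(e)
            raise
        finally:
            AUDIT_LOG.append(entry)  # failures are logged too
        return entry.get("result")
    return wrapper

# Hypothetical tool mapped to a named policy, as step 4 suggests.
create_incident = audited("create_incident", "incidents.write",
                          lambda args: {"incident_id": 101, **args})
create_incident({"title": "API latency spike"})
print(json.dumps(AUDIT_LOG[-1], default=str))
```

Because the wrapper logs in a `finally` block, even calls that raise leave an audit record, which is what makes the trail trustworthy.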
Typical use cases we see
Customer support & ops: Raise tickets, summarise cases, and query CRM with auditable tool calls.
Developer productivity: Manage repos/CI from chat; code search with controlled write access.
Data access: Natural-language queries against warehouses via read-only servers, with row-level policy.
Governed automation: Orchestrate multi-step workflows across SaaS apps while keeping a single audit trail.
How MCP compares to bespoke tool integrations
| Dimension | MCP approach | Point-to-point tools |
|---|---|---|
| Integration speed | Standard schema; reuse across hosts | Rebuild per model/vendor |
| Governance | Policy at server boundary | Scattered across bots/apps |
| Portability | Works across compliant hosts | Vendor-locked |
| Security | Centralise auth, audit, scopes | Often duplicated/inconsistent |
(Highlights derived from spec and platform docs.) Model Context Protocol
The road ahead
With Microsoft and others backing interoperability, and OpenAI/Anthropic shipping client support, MCP looks set to underpin an “agentic web” where compliant tools interoperate like web services did post-HTTP 1.1. Expect stronger schemas, richer discovery, and enterprise extensions (governance, rate limits, and identity).
Call to action: If you’re planning AI features in your product, now is the time to prototype on MCP so you can switch hosts later without re-platforming.
Model Context Protocol (MCP) FAQ
Q1: Is MCP proprietary to Anthropic?
No. Anthropic initiated it, but MCP is an open standard with a public spec and multi-vendor support.
Q2: Does OpenAI support MCP?
Yes—through connectors/remote MCP servers in the API/Agents SDK and early support in ChatGPT Developer Mode.
Q3: What risks should security teams watch?
Prompt injection, mis-scoped permissions, and data leakage; pair MCP with strict auth, policy, and audit.
Q4: How do we get started?
Start with a reference server and SDK, wire up a simple tool (e.g., a search or database query), then connect via a supported client such as ChatGPT. Add logging and auth early, and iterate toward real workflows as you validate value and risk.
Q5: What is the Model Context Protocol (MCP)?
A: MCP is an open standard that lets AI applications securely connect to external data, tools, and workflows—so models can fetch the right context and call the right tool without bespoke integrations.
Q6: How does MCP work in practice?
A: MCP uses a client–server model. An MCP server exposes tools or data; an MCP-capable client connects to that server and lets the model list and call those tools during a conversation.
Q7: Does ChatGPT support MCP?
A: Yes. ChatGPT (and compatible APIs) can connect to remote MCP servers so models can discover and call server tools directly—for example, to search knowledge bases, query databases, or trigger workflows.
Q8: What are common MCP use cases?
A: Enterprise search and retrieval, analytics queries, code/repo operations, ticketing and IT automations, knowledge management, and multi-tool agent workflows.
Q9: How is MCP different from plugins or one-off APIs?
A: Plugins or direct APIs are bespoke per app. MCP standardises the interface so any MCP-capable client can connect to any MCP server, reducing integration overhead and making tools portable across models and hosts.
Q10: How do I connect a remote MCP server to my AI app?
A: Point your MCP-capable client or SDK to the server endpoint (or run it locally), provide credentials, and allow the model to enumerate tools. The model can then call tools with parameters and receive structured outputs.
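The connection flow in this answer can be sketched as a small client that composes `tools/list` and `tools/call` requests over an injected transport. The transport is where you would plug in an authenticated HTTP POST to the server endpoint; the fake transport, endpoint, and `search_kb` tool here are assumptions for illustration.

```python
from typing import Callable, Optional

def make_client(send: Callable[[dict], dict]):
    """Return (list_tools, call_tool) helpers over a transport callable."""
    counter = {"id": 0}

    def rpc(method: str, params: Optional[dict] = None) -> dict:
        counter["id"] += 1
        req = {"jsonrpc": "2.0", "id": counter["id"], "method": method}
        if params is not None:
            req["params"] = params
        return send(req)["result"]

    list_tools = lambda: rpc("tools/list")["tools"]
    call_tool = lambda name, args: rpc("tools/call",
                                       {"name": name, "arguments": args})
    return list_tools, call_tool

# Fake transport standing in for an HTTP POST with credentials attached.
def fake_send(req: dict) -> dict:
    if req["method"] == "tools/list":
        return {"result": {"tools": [{"name": "search_kb"}]}}
    return {"result": {"content": [{"type": "text",
                                    "text": "3 articles found"}]}}

list_tools, call_tool = make_client(fake_send)
print([t["name"] for t in list_tools()])
print(call_tool("search_kb", {"query": "SSO setup"}))
```

Injecting the transport keeps auth, retries, and TLS concerns in one place, so the same client code works whether the server is local or remote.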
Q11: What about security and governance with MCP?
A: Treat MCP servers like production integrations: use strong authentication, least-privilege access, and secret rotation; log tool calls; defend against prompt injection. Keep sensitive servers private and apply data-loss prevention.
Q12: Is MCP open and vendor-neutral?
A: Yes. MCP is an open, vendor-neutral specification with community implementations and example servers, designed for interoperability across models and platforms.
Generation
Digital

UK Office
33 Queen St,
London
EC4R 1AP
United Kingdom
Canada Office
1 University Ave,
Toronto,
ON M5J 1T1,
Canada
NAMER Office
77 Sands St,
Brooklyn,
NY 11201,
United States
EMEA Office
Charlemont St, Saint Kevin's, Dublin,
D02 VN88,
Ireland
Middle East Office
6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia
Company No: 256 9431 77 | Copyright 2026 | Terms and Conditions | Privacy Policy