Model Context Protocol (MCP): A Guide for Adoption
Nov 20, 2025


Model Context Protocol (MCP): What It Is and How to Implement It
If your roadmap includes AI assistants that interact with real systems (creating tickets, posting to Slack, querying Snowflake), MCP is the fastest and safest way to connect them. Launched in late 2024 and refined through 2025, MCP standardizes how LLM apps connect to tools and data so you don't have to reinvent integrations for each model or vendor.
A 60-second definition
MCP is an open protocol designed to connect AI applications (the “host”) to external capabilities using a client–server model. The host includes an MCP client; your tools/data are managed behind one or more MCP servers. The client uses a well-defined protocol (JSON-RPC 2.0) to consistently discover tools, execute functions, and retrieve context.
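To make the discover-then-call flow concrete, here is a minimal sketch of JSON-RPC 2.0 messages in the shape an MCP client exchanges with a server. The method names `tools/list` and `tools/call` come from the MCP specification; the `create_ticket` tool and its arguments are illustrative placeholders.

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Step 1: the client asks the server which tools it exposes.
list_req = jsonrpc_request(1, "tools/list")

# Step 2: the client invokes a discovered tool with structured arguments.
call_req = jsonrpc_request(2, "tools/call", {
    "name": "create_ticket",  # placeholder tool name
    "arguments": {"title": "Login page 500s", "priority": "high"},
})

print(json.dumps(call_req))
```

Every interaction, regardless of host or vendor, reduces to envelopes like these, which is what makes one server reusable across clients.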
Think of MCP as USB-C for AI: a single connector that works with many peripherals. Switch from Claude to ChatGPT, or vice versa, without rewriting every connector.
Why MCP is important for SaaS teams in 2025
Simplifies the N×M problem. Instead of building N custom integrations for M models, MCP abstracts the interface, letting one server work across multiple LLM hosts, reducing complexity and accelerating time-to-market.
Widespread ecosystem momentum. MCP started with Anthropic and is now available in IDEs, Claude Desktop, and OpenAI’s connectors/Agents SDK—with early support in ChatGPT’s Developer Mode. This cross-industry adoption is why many CIOs consider MCP the standard route to “agent-ready” SaaS.
Enterprise-grade standards emerging. Vendors are shipping MCP-focused security layers and encryption-first patterns aligned with regulated sectors, making secure adoption easier.
Core architecture (a simple mental model)
Host: The AI app (e.g., Claude Desktop, ChatGPT) running an MCP client.
MCP client: Transforms user intents/tool requests into protocol messages.
MCP server(s): Your side—APIs, databases, or workflows exposed as schema-driven tool definitions and responses over JSON-RPC 2.0.
This division allows platform teams to release a catalogue of secure capabilities (e.g., “create Jira issue”, “query BigQuery”) that any compliant LLM can utilize, controlled by policy and authorization.
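A catalogue entry of this kind is just a named tool plus a JSON Schema for its inputs. The sketch below shows the shape a server might advertise in a `tools/list` response; the field names (`name`, `description`, `inputSchema`) follow the MCP tool shape, while the `create_jira_issue` tool itself is an illustrative placeholder.

```python
# Schema-driven tool definition, as an MCP server might advertise it.
create_issue_tool = {
    "name": "create_jira_issue",  # placeholder capability
    "description": "Create a Jira issue in the given project.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "project": {"type": "string"},
            "summary": {"type": "string"},
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["project", "summary"],
    },
}
```

Because the schema travels with the tool, any compliant host can render, validate, and call it without bespoke glue code.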
Which platforms currently support MCP?
Anthropic Claude / Claude Desktop: Primary MCP reference with numerous example servers. anthropic.com
OpenAI: Connectors and remote MCP servers via the OpenAI API/Agents SDK; expanded client support coming in Developer Mode. OpenAI Platform
Developer tools: Official and community servers for GitHub, Buildkite, and others; vibrant open-source lists to initiate integration. GitHub
Industry interest: Microsoft has publicly backed industry standards like MCP to facilitate ecosystem interoperability.
Our partner ecosystem (MCP-ready)
Asana — Official MCP server allows AI tools to create/read tasks and interact with the Work Graph using standard tools. (Asana)
Miro — MCP server available (currently listed as beta/waitlist in some materials) to query board context and activate actions from AI tools. (developers.miro.com)
Notion — Hosted Notion MCP allows secure read/write to workspace objects; compatible with Claude, ChatGPT, and Cursor. (developers.notion.com)
Glean — Remote MCP server integrated into the platform to expose permission-aware enterprise knowledge to any MCP-compatible host. (developers.glean.com)
| Partner | MCP status | Docs |
|---|---|---|
| Asana | GA: official MCP server | “MCP Server” documentation & integration guide. Asana |
| Miro | Beta / waitlist noted in site information | Developer guides + public waitlist page. developers.miro.com |
| Notion | GA: hosted MCP | Developer documentation + Help Center overview. developers.notion.com |
| Glean | GA: remote MCP server | Administrator & user guides. developers.glean.com |
Security: what MCP solves—and what it doesn’t
MCP is not a complete solution. It provides a consistent conduit; enterprise guardrails are still needed:
Threats: Prompt injection, over-privileged servers, and untrusted outputs can result in data breaches or unintended actions (e.g., “MCP-UPD”).
Controls to implement:
Robust authentication/authorization at the server boundary (tokens, mTLS, scoped RBAC).
Policy filters to limit tool arguments and outputs.
Audit/recording of every tool call and response.
Data security practices (application-layer encryption / hold-your-own-key) for sensitive storage.
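The first three controls above can live in one chokepoint at the server boundary. This is a minimal sketch, not a production policy engine: it combines a tool allow-list, a simple argument check, and audit logging around every call. The tool names and the length cap are illustrative assumptions.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("mcp.audit")

ALLOWED_TOOLS = {"create_ticket", "query_dashboard"}  # example allow-list
MAX_ARG_LEN = 2000  # illustrative cap on string arguments

def guarded_call(tool_name, arguments, handler):
    """Run a tool handler only if it passes allow-list and argument checks."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allow-listed: {tool_name}")
    for key, value in arguments.items():
        if isinstance(value, str) and len(value) > MAX_ARG_LEN:
            raise ValueError(f"argument too long: {key}")
    audit.info("tool_call %s", json.dumps({"tool": tool_name, "args": arguments}))
    result = handler(arguments)
    audit.info("tool_result %s", json.dumps({"tool": tool_name}))
    return result

# Usage with a stub handler standing in for a real integration:
result = guarded_call("create_ticket", {"title": "Disk alert"},
                      lambda args: {"id": "T-101", **args})
```

The point of the single chokepoint is that policy and audit cannot be bypassed by adding a new tool: every call flows through the same gate.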
Build vs. buy: MCP servers
You can build simple servers quickly (many teams start with a “read-only analytics” server, then add write actions). Community examples and templates exist for prevalent backends and languages. For speed, you can also adopt vendor-maintained servers (GitHub, CI/CD, communications).
A practical 6-step rollout for SaaS platforms
1. Select one high-value, low-risk process, e.g., “create/read incidents” or “read dashboards”. Keep the scope narrow for week-one wins.
2. Stand up an MCP server for that process using least-privilege credentials; expose a small, well-defined toolset and validate arguments.
3. Integrate a host (Claude Desktop or OpenAI Agents) in a development environment. Inject secrets via your vault and rotate them.
4. Implement guardrails: schema validation, allow-lists, output checks, audit logging. Associate every tool with a named policy.
5. Conduct a pilot with real users in Slack or VS Code. Monitor accuracy, action failure rates, and time-to-resolution versus your baseline.
6. Strengthen & expand: introduce mTLS, per-tool scopes, and encryption practices for regulated data, then expand your server catalogue.
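Steps 2 and 4 can be sketched together as a tiny in-process dispatcher: a narrow tool catalogue with required-argument validation, answering JSON-RPC-shaped requests. A real deployment would use an MCP SDK and a transport such as stdio or HTTP; the `read_dashboard` tool here is a placeholder.

```python
# Toy server-side dispatcher: one read-only tool, validated arguments.
TOOLS = {
    "read_dashboard": {  # placeholder, per step 1's narrow scope
        "description": "Read a dashboard by id (read-only).",
        "required": ["dashboard_id"],
        "handler": lambda args: {"dashboard_id": args["dashboard_id"], "status": "ok"},
    },
}

def handle(request):
    """Dispatch a JSON-RPC-shaped request against the tool catalogue."""
    rid, method = request["id"], request["method"]
    if method == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif method == "tools/call":
        params = request["params"]
        tool = TOOLS.get(params["name"])
        if tool is None:
            return {"jsonrpc": "2.0", "id": rid,
                    "error": {"code": -32602, "message": "unknown tool"}}
        missing = [k for k in tool["required"] if k not in params["arguments"]]
        if missing:
            return {"jsonrpc": "2.0", "id": rid,
                    "error": {"code": -32602, "message": f"missing: {missing}"}}
        result = tool["handler"](params["arguments"])
    else:
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": rid, "result": result}
```

Keeping validation inside the dispatcher, rather than in each handler, is what makes step 6's later hardening (scopes, mTLS) a drop-in change.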
Typical use cases we observe
Customer support & operations: Create tickets, summarize cases, and query CRM with auditable tool calls.
Developer productivity: Manage repos/CI from chat; perform code searches with controlled write access.
Data access: Perform natural-language queries on databases via read-only servers, with row-level policies.
Governed automation: Orchestrate multi-step workflows across SaaS applications while maintaining a single audit trail.
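The "data access" pattern above is worth a sketch: a natural-language layer produces SQL, and the server enforces that only read statements run. This uses an in-memory SQLite table as a stand-in; a real deployment would add row-level policies and a read-only database role rather than string inspection alone.

```python
import sqlite3

def run_readonly_query(conn, sql):
    """Execute SQL only if it is a SELECT statement (crude read-only guard)."""
    if not sql.lstrip().lower().startswith("select"):
        raise PermissionError("only SELECT statements are allowed")
    return conn.execute(sql).fetchall()

# Illustrative in-memory dataset standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "emea"), (2, "amer")])

rows = run_readonly_query(conn, "SELECT id FROM orders WHERE region = 'emea'")
```

The guard belongs on the server side precisely because the SQL may be model-generated: the host cannot be trusted to enforce read-only semantics.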
How MCP compares to customized tool integrations
| Dimension | MCP approach | Point-to-point tools |
|---|---|---|
| Integration speed | Standard schema; reuse across hosts | Rebuild per model/vendor |
| Governance | Policy at server boundary | Scattered across bots/apps |
| Portability | Works across compliant hosts | Vendor-locked |
| Security | Centralized auth, audit, scopes | Often duplicated/inconsistent |
(Highlights derived from the MCP spec and platform documentation.)
The future
With Microsoft and others advocating interoperability, and OpenAI/Anthropic deploying client support, MCP seems poised to support an “agentic web” where compliant tools interoperate as web services did post-HTTP 1.1. Look for stronger schemas, richer discovery, and enterprise extensions (governance, rate limits, and identity).
Call to action: If you're planning AI features in your product, now is the time to begin prototyping on MCP so you can switch hosts later without re-platforming.
Model Context Protocol (MCP) FAQ
Q1: Is MCP proprietary to Anthropic?
A: No. While initiated by Anthropic, MCP is an open standard with a public specification and support from multiple vendors.
Q2: Does OpenAI support MCP?
A: Yes—through connectors/remote MCP servers in the API/Agents SDK and initial support in ChatGPT Developer Mode.
Q3: What risks should security teams monitor?
A: Prompt injection, mis-scoped permissions, and data leakage; combine MCP with stringent authentication, policies, and auditing.
Q4: How do I get started?
A: Start with a reference server and an SDK, set up a simple tool (e.g., a search or database query), then connect via a supported client like ChatGPT. Establish logging and authentication early and gradually proceed to real workflows as you assess value and risk.
Q5: What is the Model Context Protocol (MCP)?
A: MCP is an open standard that enables AI applications to securely connect to external data, tools, and workflows—allowing models to fetch the necessary context and utilize the appropriate tool without custom integrations.
Q6: How does MCP operate in practice?
A: MCP employs a client–server model. An MCP server exposes tools or data; an MCP-capable client connects to that server and enables the model to list and call those tools during interactions.
Q7: Does ChatGPT support MCP?
A: Yes. ChatGPT (and compatible APIs) can connect to remote MCP servers to enable models to discover and use server tools directly—such as searching knowledge bases, querying databases, or triggering workflows.
Q8: What are common MCP use cases?
A: Enterprise search and retrieval, analytics queries, code/repo management, ticketing and IT automations, knowledge management, and multi-tool agent workflows.
Q9: How does MCP differ from plugins or one-off APIs?
A: Plugins or direct APIs are customized for each app. MCP standardizes the interface so any MCP-capable client can connect to any MCP server, reducing integration overhead and allowing tools to be portable across models and hosts.
Q10: How do I connect a remote MCP server to my AI app?
A: Direct your MCP-capable client or SDK to the server endpoint (or run it locally), furnish credentials, and allow the model to enumerate tools. The model can then call tools with parameters and receive structured outputs.
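The first messages a client sends once pointed at an endpoint follow a fixed handshake: an `initialize` request, then tool discovery. The method and field names below follow the MCP specification; the protocol version string and client name are illustrative placeholders.

```python
# Handshake and discovery payloads a client sends to a server endpoint.
initialize_req = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",  # placeholder spec revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# After initialization, the client enumerates the server's tools.
list_tools_req = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
```

In practice an MCP SDK builds and sends these for you over stdio or HTTP; the value of seeing them raw is knowing exactly what crosses the wire when you audit or debug a connection.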
Q11: What about security and governance with MCP?
A: Treat MCP servers like production integrations: employ strong authentication, least-privilege access, and secret rotation; log tool interactions; guard against prompt injection. Keep sensitive servers private and incorporate data-loss prevention measures.
Q12: Is MCP open and vendor-neutral?
A: Yes. MCP is an open, vendor-neutral specification with community implementations and sample servers, intended for cross-model and platform interoperability.