The Model Context Protocol (MCP) is an open specification for exposing tools and resources to LLM-powered clients over JSON-RPC 2.0. The protocol decouples tool authors from client authors: an MCP server you write today works in Claude Desktop, Claude Code, Cursor, Zed, and Continue without per-client integration.
Why it matters
Before MCP, every developer-tools company had to integrate separately with each AI client. Cursor wanted a plugin. Claude wanted a different API. The N-by-M integration matrix punished smaller vendors.
MCP is the Open Web equivalent for AI tool use. One server, every client.
What MCP servers do
A server exposes a list of tools (functions the model can call) and optionally resources (content the client can pull into context). The client lists tools at session start; the model decides at inference time which to invoke; the client makes the JSON-RPC call; the server returns a result.
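That exchange can be sketched at the JSON-RPC level. The method names tools/list and tools/call follow the MCP spec; the echo tool, its schema, and the in-process handle() dispatcher are invented here for illustration (a real server would speak over stdio or HTTP via an SDK):

```python
# Invented example tool: a single "echo" tool described by a JSON Schema.
TOOLS = [{
    "name": "echo",
    "description": "Return the input text unchanged.",
    "inputSchema": {"type": "object",
                    "properties": {"text": {"type": "string"}}},
}]

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 request to a list or call handler."""
    method, rid = request["method"], request["id"]
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        params = request["params"]
        if params["name"] == "echo":
            result = {"content": [{"type": "text",
                                   "text": params["arguments"]["text"]}]}
        else:
            return {"jsonrpc": "2.0", "id": rid,
                    "error": {"code": -32602, "message": "unknown tool"}}
    else:
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": rid, "result": result}

# Session start: the client asks what tools exist.
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
# Inference time: the model picked "echo"; the client relays the call.
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "echo", "arguments": {"text": "hi"}}})
```

The key point is that the model only ever sees tool names and schemas; the client owns the transport and the server owns the behavior.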
Cognia's MCP server exposes five tools: cognia_search, cognia_get_memory, cognia_list_memories, cognia_action_plan, cognia_action_execute. The plan/execute split is a pattern we built on top of the spec; we recommend it for any tool that mutates external state.
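One way to sketch the plan/execute split is below. The contract shown (a plan tool that stages the mutation and returns a plan id plus a human-readable preview, and an execute tool that consumes that id exactly once) is an assumption for illustration, not the actual Cognia implementation:

```python
import uuid

# Hypothetical in-memory plan store; a real server would persist and expire plans.
_plans: dict[str, dict] = {}

def action_plan(action: str, params: dict) -> dict:
    """Stage a mutation and return a preview the user or client can veto."""
    plan_id = str(uuid.uuid4())
    _plans[plan_id] = {"action": action, "params": params}
    return {"plan_id": plan_id,
            "preview": f"Would run {action} with {params}"}

def action_execute(plan_id: str) -> dict:
    """Apply a staged plan exactly once; reject unknown or reused ids."""
    plan = _plans.pop(plan_id, None)
    if plan is None:
        return {"error": "unknown or already-executed plan_id"}
    # The actual side effect would happen here; we just report what ran.
    return {"executed": plan["action"], "params": plan["params"]}

staged = action_plan("delete_memory", {"id": "mem-42"})
done = action_execute(staged["plan_id"])
replay = action_execute(staged["plan_id"])  # second attempt fails
```

Because execute consumes the plan id, the model cannot mutate state without first surfacing a preview, and an approved plan cannot be replayed.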