Secoda brings structure to AI with the Model Context Protocol (MCP)

AI is transforming how teams work with data, but without the right context, even the best models fall short. That’s why we’re introducing support for the Model Context Protocol (MCP), an open standard that defines how AI systems interact with structured metadata in a secure and consistent way.
With MCP, your existing AI tools like Claude and Cursor can connect to your Secoda catalog and access trusted metadata like lineage, glossary terms, documentation, and SQL context, right where you work.
MCP is an open protocol that standardizes how external AI tools and agents securely interact with metadata systems like Secoda. Think of MCP as HTTP for AI: just as HTTP provides a common protocol for communication on the web, MCP gives tools such as Cursor, Claude, or custom agents a consistent way to access lineage, documentation, glossary definitions, and other metadata from Secoda.
MCP uses a client-server architecture. AI tools connect to MCP server endpoints hosted by Secoda and are authenticated based on workspace-level permissions. Once connected, these tools can search your catalog, retrieve documentation, run SQL queries, and explore lineage. They also respect your access controls and governance policies.
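For custom agents, the flow looks roughly like the sketch below, written against the open-source MCP Python SDK. The Secoda endpoint URL, the bearer-token header, and the environment variable name are illustrative assumptions rather than documented values; your workspace’s setup guide has the real details.

```python
"""Minimal sketch of an MCP client talking to a Secoda-hosted MCP server.

The endpoint, auth header, and environment variable below are assumptions
for illustration only; follow the official setup guide for actual values.
"""
import asyncio
import os

from mcp import ClientSession
from mcp.client.sse import sse_client

SECODA_MCP_URL = "https://app.secoda.co/mcp"  # assumed endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['SECODA_API_KEY']}"}  # assumed auth scheme


async def main() -> None:
    # Open an SSE transport to the server, then run the MCP initialization handshake.
    async with sse_client(SECODA_MCP_URL, headers=HEADERS) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Ask the server what it exposes: search, lineage, documentation, SQL, etc.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")


asyncio.run(main())
```

Assistants like Claude and Cursor handle this handshake for you; the sketch just makes the client-server exchange concrete.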
Organizations today have a rich layer of metadata spread across dashboards, warehouses, and documentation. But that context is often locked inside individual tools. MCP makes it possible to bring that context into the AI tools your teams already use.
By connecting to Secoda’s MCP server, your AI tools can search your catalog, pull documentation and glossary definitions, run SQL queries, and explore lineage directly from Secoda. MCP makes it possible to embed trusted metadata into any AI workflow, extending the reach of your data catalog without duplicating logic or breaking governance. The result is that your AI tools act with the same context and confidence as your data team, without ever leaving their native environments.
Once MCP is connected, you can use natural language to interact with your Secoda catalog. Test it out by asking questions about lineage, glossary definitions, documentation, or the SQL behind a metric.
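If you are building a custom agent rather than using an off-the-shelf assistant, that same natural-language interaction maps onto a tool call. The sketch below reuses the connected session from the earlier example and assumes a hypothetical "search" tool; list_tools() shows what your workspace actually exposes.

```python
from mcp import ClientSession


async def ask_catalog(session: ClientSession, question: str) -> None:
    # The tool name and argument shape are hypothetical; call session.list_tools()
    # to discover the tools your Secoda workspace actually provides.
    result = await session.call_tool("search", arguments={"query": question})
    for item in result.content:
        if item.type == "text":
            print(item.text)


# Example, reusing the connected session from the earlier sketch:
# await ask_catalog(session, "Which dashboards depend on the orders table?")
```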
Teams are already seeing value from using MCP to bring Secoda’s trusted metadata into their AI tools: it lets AI systems act with the same context, standards, and structure your team relies on.
Every operation via MCP follows your existing Secoda governance framework. AI tools only access metadata that the authenticated user or service is authorized to see, and all interactions are logged for full traceability.
With MCP, you get governed, permission-aware access and a full audit trail for every interaction. That means your organization can scale its use of AI with confidence, knowing that governance isn’t sacrificed for speed.
If you’re already using Secoda AI, MCP is live in your workspace. You don’t need to take any additional steps to activate it.
When an AI tool connects to Secoda, it authenticates against your workspace and inherits the permissions of the authenticated user or service, so it only sees the metadata that user or service is authorized to access. To use MCP, all you need is an active Secoda workspace with AI features enabled and an MCP-compatible AI assistant such as Claude, Cursor, or VS Code.
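Most MCP-compatible assistants are pointed at a server through a small configuration entry. The sketch below is a hypothetical example of generating such an entry; the config path, schema, endpoint URL, and auth header are all assumptions that vary by tool, so follow the setup guide for the exact values.

```python
"""Hypothetical sketch: generating an 'mcpServers' config entry for an
MCP-compatible assistant. The path, schema, endpoint, and auth header below
are assumptions; each assistant documents its own config format."""
import json
from pathlib import Path

# Hypothetical config location; Claude, Cursor, and VS Code each use their own.
config_path = Path.home() / ".example-assistant" / "mcp.json"

config = {
    "mcpServers": {
        "secoda": {
            "url": "https://app.secoda.co/mcp",  # assumed Secoda MCP endpoint
            "headers": {"Authorization": "Bearer <your-secoda-api-key>"},  # assumed auth scheme
        }
    }
}

config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config, indent=2))
print(f"Wrote MCP server entry to {config_path}")
```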
Want to connect to a specific tool like Claude or VS Code? Check out our detailed setup guide.
The future of AI in data isn’t just about smarter models. It’s about structured context, open standards, and systems that reflect how teams actually work. MCP is a major step in that direction, not only for Secoda but for the broader ecosystem of AI applications.
To learn more about the open standard behind MCP, visit modelcontextprotocol.io, or reach out to our team to see how you can get started.