We are hitting an invisible wall in the transition from chatbots to autonomous agents. That wall is trust.
While the Model Context Protocol (MCP) has successfully standardized how LLMs discover tools and data, it left a critical question unanswered: how do we safely give an AI agent access to our most sensitive systems? Today, connecting an LLM to a production database or a SaaS API requires handing over long-lived credentials, effectively giving the model—and by extension, the provider—keys to the castle.
This security bottleneck is preventing the deployment of high-value agents. To solve it, we are announcing the Dedalus MCP Trust Layer, a secure authentication and credential isolation framework designed to give LLMs wings without compromising security.
The Auth Bottleneck
MCP acts as the universal interface between models and the world, exposing functions like "search flights" or "query customer database" to AI agents. However, the current state of authorization in the ecosystem is fragmented.
The new MCP authorization specification moves the ecosystem forward by leveraging OAuth 2.1 and protected resource metadata. But this places a massive burden on tool developers. To build a compliant, secure MCP server today, a developer effectively needs to become an identity provider—implementing OAuth flows, managing token storage, rotating secrets, and handling scope negotiation.
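To make that burden concrete, here is a sketch of the protected resource metadata document (per RFC 9728) that a compliant MCP server is expected to publish at `/.well-known/oauth-protected-resource`. The URLs and scope names are illustrative placeholders, not Dedalus specifics:

```python
# Sketch of an OAuth 2.0 Protected Resource Metadata document (RFC 9728).
# Every URL and scope name below is an illustrative placeholder.
resource_metadata = {
    "resource": "https://mcp.example.com",                   # the protected MCP server
    "authorization_servers": ["https://auth.example.com"],   # who issues its tokens
    "scopes_supported": ["tools:read", "tools:invoke"],
    "bearer_methods_supported": ["header"],                  # tokens arrive in the Authorization header
}
```

Publishing this document is only the start: the developer still owes token validation, storage, rotation, and scope negotiation on top of it.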
This complexity blocks most internal infrastructure and SaaS APIs from ever being exposed to agents. Security teams cannot justify the risk of scattering credentials across dozens of "mini-auth" implementations in various tools.
The Universal Trust Layer
Our thesis is simple: The blocker for agents is not reasoning capability; it is trust.
Dedalus is building the universal trust layer between models, context, and skills. We centralize the hard problems of authentication and secret management so that tool developers don't have to. Our goal is to enable a world where you can safely expose your capabilities to any model, knowing that your credentials never leave a secure boundary and that every action is strictly scoped to user intent.
Architecture: The Networkless Vault
We designed the Dedalus Trust Layer around a core guarantee: MCP servers never see raw user secrets.
Instead of passing API keys or access tokens directly to the model or the tool, we introduce an isolation layer. Here is how it works:
1. Intent-Based Access
When an agent needs to perform an action, it doesn't request "the GitHub API key." It declares an intent, such as github:create_issue. This intent is a structured, permissioned request that maps to a specific capability.
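As a minimal sketch, such an intent might look like the following. The field names are our illustration, not a published Dedalus schema:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Intent:
    """A structured, permissioned request for one capability.

    Field names here are illustrative; the real schema may differ.
    """
    capability: str                 # e.g. "github:create_issue"
    arguments: dict                 # tool-call arguments supplied by the agent
    # scopes the user actually granted, checked by the platform
    granted_scopes: frozenset = field(default_factory=frozenset)

intent = Intent(
    capability="github:create_issue",
    arguments={"repo": "acme/web", "title": "Fix login bug"},
    granted_scopes=frozenset({"github:create_issue"}),
)
# The platform authorizes the call only if the capability is in scope.
authorized = intent.capability in intent.granted_scopes
```

The key point is that the agent never names a credential at all; it names a capability, and the mapping to secrets happens elsewhere.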
2. The Credential Vault
Secrets are stored in a networkless vault backed by AWS Nitro Enclaves. This vault is:
- Isolated from the internet: It has no direct network access.
- Attestation-gated: Only code that cryptographically proves it is the unmodified, authorized Dedalus enclave binary can decrypt the secrets.
- Ephemeral: Secrets are zeroized in memory immediately after use.
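The "ephemeral" property can be illustrated with a minimal zeroization sketch. Python is shown for clarity only; inside the enclave this would be native code with memory that cannot be swapped or snapshotted:

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_secret(raw: bytes):
    """Hold a secret in a mutable buffer and zeroize it on exit.

    Illustrative only: real enclave code wipes memory with
    compiler-proof routines rather than a Python loop.
    """
    buf = bytearray(raw)   # mutable copy we can actually overwrite
    try:
        yield buf
    finally:
        for i in range(len(buf)):   # overwrite every byte in place
            buf[i] = 0

with ephemeral_secret(b"sk-live-123") as secret:
    token = bytes(secret)   # use the secret only inside this block
# After the block exits, the buffer holds only zeros.
```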
3. The Execution Flow
When a model invokes an MCP tool via Dedalus:
- The request (intent + arguments) is sent to the Dedalus platform.
- We validate the intent against the user's scoped permissions.
- The request is forwarded into the enclave.
- Inside the secure boundary, the enclave decrypts the necessary credentials, signs the downstream API request, and executes it.
- Only the API response (not the credentials) is returned to the agent.
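The steps above can be sketched end to end. Every name here (the scopes table, the enclave function) is a hypothetical stand-in, with the network and enclave calls stubbed out:

```python
# Hedged sketch of the execution flow; all names are stand-ins.
USER_SCOPES = {"alice": {"github:create_issue"}}

def vault_sign_and_execute(capability: str, arguments: dict) -> dict:
    # Stand-in for the enclave step: decrypt the credential, sign the
    # downstream API request, execute it. Credentials never leave here.
    return {"status": 201, "body": {"issue_url": "https://github.com/..."}}

def invoke_tool(user: str, capability: str, arguments: dict) -> dict:
    # Steps 1-2: receive the intent, validate against scoped permissions.
    if capability not in USER_SCOPES.get(user, set()):
        raise PermissionError(f"{user} lacks scope {capability}")
    # Steps 3-4: forward into the enclave, which signs and executes.
    response = vault_sign_and_execute(capability, arguments)
    # Step 5: only the API response (never a credential) reaches the agent.
    return response

result = invoke_tool("alice", "github:create_issue", {"title": "Fix login"})
```

Note that nothing in `invoke_tool`'s signature or return value can ever carry a raw secret; the credential is confined to the enclave stand-in.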
Crucially, the vsock bridge between the host and the enclave only ever carries TLS-encrypted ciphertext: the host machine managing the enclave can neither inspect the traffic nor access the keys. All key management is built on elliptic-curve cryptography (ECC).
Native Compatibility with MCP Auth
We built this architecture to be fully compatible with the emerging MCP authorization standards.
Dedalus acts as the centralized Authorization Server that mediates between agents (clients) and tools (resource servers). Our core invariant is simple: Every MCP server on our marketplace is automatically a compliant OAuth 2.1 Resource Server.
We abstract away the fragmented reality of downstream APIs:
- Unified Interface: Agents authenticate with Dedalus using standard OAuth 2.1, regardless of how the underlying tool authenticates downstream.
- Credential Management: Whether the downstream API requires complex OAuth dances (like GitHub) or static keys (like Stripe), our vault handles the storage, rotation, and usage.
This means developers can stop hand-rolling "bring-your-own-API-key" flows or implementing OAuth clients from scratch. You focus on the capability; we handle the trust.
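From the agent's side, the unified interface reduces to a standard OAuth 2.1 exchange. A sketch follows; the endpoint URL and identifiers are illustrative, not real Dedalus endpoints, and actually sending the request is left stubbed:

```python
from urllib.parse import urlencode

# Illustrative endpoint; not a real Dedalus URL.
TOKEN_ENDPOINT = "https://auth.dedalus.example/oauth/token"

def build_token_request(client_id: str, code: str, verifier: str) -> tuple[str, str]:
    """Build an authorization-code exchange with PKCE, as OAuth 2.1 requires.

    Returns the endpoint and an x-www-form-urlencoded body; transmitting
    it and parsing the access token out of the response are omitted.
    """
    body = urlencode({
        "grant_type": "authorization_code",
        "client_id": client_id,
        "code": code,
        "code_verifier": verifier,   # PKCE is mandatory in OAuth 2.1
    })
    return TOKEN_ENDPOINT, body

def bearer_header(access_token: str) -> dict:
    # Every MCP server on the marketplace accepts this same header,
    # no matter how the downstream API actually authenticates.
    return {"Authorization": f"Bearer {access_token}"}

endpoint, body = build_token_request("agent-123", "auth-code", "pkce-verifier")
headers = bearer_header("eyJ...")
```

The same two calls work against any tool on the marketplace, which is the whole point of the unified interface.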
The "Shopify Moment" for Agents
We see this as the "Shopify moment" for the AI ecosystem. Just as e-commerce exploded when trust was standardized—users didn't have to trust every small merchant with their credit card number—the agent ecosystem will explode when users don't have to trust every tool with their credentials.
Dedalus is building the multi-tenant marketplace for MCP. We envision a future where:
- Functions are packaged securely: Developers publish MCP servers once.
- Tenancy is isolated: The same server code can serve thousands of distinct users and organizations, with Dedalus enforcing strict data and credential isolation.
- Discovery is safe: Hosts (like Claude Desktop or enterprise agent platforms) can route tasks through a marketplace of verified, sandboxed tools.
Roadmap
This release is just the foundation. We are rolling out the Dedalus SDK and platform to a closed beta group of partners today.
Our roadmap includes:
- Multi-tenant MCP hosting: Run your MCP servers on our secure infrastructure.
- Granular Policy Engine: Define policies like "Allow read-only access to CRM for all engineers, but write access only for senior staff."
- The Dedalus Marketplace: A registry of trusted, pre-verified MCP servers ready to be plugged into any agent.
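A granular policy like the CRM example above could be evaluated roughly as follows. The rule format and role names are our illustration, not a shipped Dedalus feature:

```python
# Hedged sketch of evaluating: "read-only CRM access for all engineers,
# write access only for senior staff". Rule format is illustrative.
POLICY = [
    {"resource": "crm", "action": "read",  "roles": {"engineer", "senior"}},
    {"resource": "crm", "action": "write", "roles": {"senior"}},
]

def is_allowed(role: str, resource: str, action: str) -> bool:
    # Default-deny: an action is permitted only if some rule grants it.
    return any(
        rule["resource"] == resource
        and rule["action"] == action
        and role in rule["roles"]
        for rule in POLICY
    )

assert is_allowed("engineer", "crm", "read")
assert not is_allowed("engineer", "crm", "write")
assert is_allowed("senior", "crm", "write")
```

Default-deny is the important design choice: a capability no rule mentions is simply unreachable, which matches the intent-scoping guarantee described earlier.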
We are building the infrastructure that gives LLMs safe wings. If you are a platform team or developer ready to move beyond chatbots, we want to talk to you.