Right now, developers inside your enterprise are spinning up MCP servers in places nobody is tracking — on laptops, in personal cloud accounts, as VS Code extensions, in random Docker containers. These servers expose tools: file system access, database queries, API calls, internal service integrations.
An AI agent pointed at one of these servers can execute actions — not just read data. This is fundamentally different from traditional shadow IT.
The blast radius problem: a rogue SaaS app merely reads data. A rogue MCP server lets an AI agent execute actions at scale (file writes, API calls, database mutations) with no audit trail and no governance.
The platform doesn't automatically discover rogue servers outside it. What it does is give you a governed chokepoint — a single gateway that all approved MCP traffic flows through, combined with network controls to enforce that nothing bypasses it.
North-south traffic is what most people think about first — an external AI agent trying to reach an MCP server inside your perimeter, or an internal MCP server calling out to unauthorized external services.
Without controls: Internal MCP Server → Firewall → External APIs (unrestricted)
With the gateway enforced: Direct MCP access → Blocked at firewall
How to enforce it
Your network team configures the perimeter firewall to allow MCP traffic only to and from the platform's Envoy gateway IP; all other MCP traffic is dropped. It's the same pattern enterprises already use for SaaS allowlists, and a sketch of the rules follows the list below.
- All external AI agents point at your gateway hostname (e.g. mcp.company.com)
- Firewall blocks all other inbound MCP traffic
- Egress filtering prevents MCP servers calling unauthorized external APIs
- Every request is logged, governed, and auditable
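As a concrete sketch, here is what those perimeter rules look like in plain iptables. The gateway IP and MCP subnet are hypothetical, and most enterprises would express the same intent in their firewall vendor's syntax or in cloud security groups:

```sh
# Hypothetical addresses; substitute your own environment's values.
GATEWAY_IP="10.0.40.10"      # the platform's Envoy gateway (static IP)
MCP_SUBNET="10.0.50.0/24"    # subnet your MCP server pods egress from

# 1. Allow inbound MCP traffic (HTTPS/443) only to the Envoy gateway.
iptables -A FORWARD -p tcp -d "$GATEWAY_IP" --dport 443 -j ACCEPT

# 2. Drop inbound traffic aimed at common MCP server ports anywhere else.
iptables -A FORWARD -p tcp -m multiport --dports 3000,8080,8443 -j DROP

# 3. Egress filtering: MCP servers may not call external services directly.
iptables -A FORWARD -p tcp -s "$MCP_SUBNET" ! -d "$GATEWAY_IP" -j DROP
```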
East-west traffic never crosses your perimeter. A developer's AI agent on their laptop talks directly to an internal MCP server — bypassing the firewall, bypassing the gateway, bypassing every control you have.
Perimeter controls don't help here. The traffic is entirely internal — it looks like normal developer activity from a network perspective.
Never touches the gateway. Never logged. Never governed.
Direct pod-to-pod MCP access → Blocked by Kubernetes NetworkPolicy
Why it's the harder problem
Internal MCP servers often have access to the most sensitive resources — databases, private APIs, file systems, messaging platforms. An AI agent with access to an ungoverned internal server can exfiltrate data and mutate state entirely within your network perimeter.
How to enforce it
- Internal network proxy routes all MCP traffic through the gateway
- AI agent configurations managed centrally — all agents point to the gateway
- Kubernetes NetworkPolicy blocks direct pod-to-pod MCP connections
- Port-level controls block direct access to MCP server ports from non-gateway sources
Layer 3 of the controls below is automatic: Kubernetes NetworkPolicy ships with every installation, and MCP pods reject connections from any source except the Envoy gateway. Layers 1 and 2 require your network team, and we provide the reference architecture.
Three layers of control
Complete MCP traffic governance requires enforcement at three distinct layers. The platform handles Layer 3 automatically — Layers 1 and 2 are implemented by your network team using standard enterprise tooling.
Allow inbound MCP traffic only to the platform's Envoy gateway IP. Block all other MCP traffic at the perimeter. Configure egress filtering to prevent MCP servers from calling unauthorized external services.
- Add allowlist rule for gateway IP on HTTPS/443
- Drop all other inbound MCP traffic at the firewall
- Configure egress filtering for MCP server pods
- Same pattern enterprises use for SaaS allowlisting
Route all internal MCP traffic through the gateway via proxy or NAC enforcement. Manage AI agent configurations centrally so agents resolve MCP addresses through the gateway, not directly.
- Internal proxy routes MCP traffic through the gateway
- Centralize AI agent configs — all point to gateway hostname
- Block direct access to MCP server ports (3000, 8080, 8443) internally (see the sketch after this list)
- NAC enforcement prevents rogue direct connections
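To illustrate the port-level control above, a minimal sketch for an internal segment, again in iptables; the gateway address is an assumption:

```sh
# Hypothetical gateway address; apply on internal hosts or an east-west
# firewall segment, not at the perimeter.
GATEWAY_IP="10.0.40.10"

# Accept traffic to MCP server ports only when it originates from the gateway...
iptables -A INPUT -p tcp -m multiport --dports 3000,8080,8443 -s "$GATEWAY_IP" -j ACCEPT
# ...and drop it from every other internal source.
iptables -A INPUT -p tcp -m multiport --dports 3000,8080,8443 -j DROP
```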
Deployed automatically with every installation. MCP server pods accept connections only from the Envoy gateway pod; direct pod-to-pod access is dropped at the Kubernetes network layer regardless of what Layers 1 and 2 allow. A sketch of an equivalent policy follows the list below.
- NetworkPolicy deployed automatically — no manual config
- MCP pods reject all connections except from the Envoy gateway pod
- Every request reaching a server has been authenticated and logged
- Works independently of perimeter and internal network controls
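The shipped policy is equivalent in spirit to the sketch below. The labels and namespaces are illustrative assumptions, not the platform's actual manifest:

```sh
# Illustrative only: label selectors and namespaces are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mcp-allow-gateway-only
  namespace: mcp-servers
spec:
  podSelector:
    matchLabels:
      app: mcp-server             # selects every MCP server pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: magertron-system
          podSelector:
            matchLabels:
              app: envoy-gateway  # only the Envoy gateway pod may connect
EOF
```

Because the policy selects the MCP pods and lists a single permitted ingress source, every other connection to those pods is dropped by the cluster's network layer.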
End-to-End Management
Every MCP request through the Envoy gateway is subject to these controls — no configuration required beyond deployment:
- Authentication — Every request carries a JWT; unauthenticated requests are rejected at the gateway (see the request sketch after this list).
- RBAC authorization — Roles define which users can access which servers in which namespaces.
- Namespace isolation — Production, staging, and dev MCP servers are isolated with NetworkPolicy enforcement.
- Governance policies — Deployment-time policies block non-compliant servers before they go live.
- Audit trail — Every deployment, scale, restart, and config change is logged with actor, timestamp, and full state snapshot.
- Webhook notifications — Real-time alerts to Slack, email, or any HTTP endpoint when servers change state.
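To make the authentication control concrete, a request through the gateway looks roughly like the following. The hostname, route, and token variable are illustrative; tools/list is a standard MCP JSON-RPC method:

```sh
# Hypothetical hostname and route; $MCP_JWT is issued by your identity provider.
curl -s https://mcp.company.com/servers/github/mcp \
  -H "Authorization: Bearer $MCP_JWT" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

# Without a valid token, the gateway rejects the request before it reaches any server.
```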
Best Practices for Network Management
1. Update your firewall rules
Add an allowlist rule permitting MCP traffic (HTTPS/443) only to the platform's Envoy gateway IP. This IP is static: assigned by MetalLB for on-premises deployments, or by your cloud provider's load balancer in the cloud.
Don't block before you're ready. Audit existing MCP traffic for 30 days before enforcing. Use your proxy in logging-only mode first to identify all MCP endpoints currently in use.
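Assuming default names, the gateway's external IP can be read straight from its Kubernetes Service; the service name and namespace here are guesses, so check your installation:

```sh
# Service name and namespace are assumptions; adjust to your release.
kubectl get svc envoy-gateway -n magertron \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```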
2. Centralize AI agent configuration
Update your AI agent deployment standard to resolve all MCP server addresses through the gateway. The gateway's hostname (e.g. mcp.company.com) becomes the single MCP endpoint in all agent configs.
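As an example, assuming an agent that reads a typical mcpServers-style JSON config, pointing it at the gateway is a small, declarative change; the config path and server route are illustrative:

```sh
# Hypothetical config path and route; the key point is that the URL is the
# gateway hostname, never an MCP server's own address.
mkdir -p ~/.config/agent
cat > ~/.config/agent/mcp.json <<'EOF'
{
  "mcpServers": {
    "github": {
      "url": "https://mcp.company.com/servers/github/mcp"
    }
  }
}
EOF
```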
3. Block direct MCP server ports internally
Once agents are pointed at the gateway, block direct access to common MCP server ports (3000, 8080, 8443) from non-gateway sources on the internal network.
We provide a network reference architecture with firewall rule templates, proxy config examples, and a 30-day rollout plan. Contact us for the enterprise onboarding package.
Close these gaps in your own Kubernetes cluster.
Magertron runs in your cluster, not ours. No data leaves your perimeter. The OSS tier is free for up to 20 servers: no signup, no credit card. Try it in 5 minutes.
```sh
$ helm repo add magertron https://magertron.com/charts
$ helm repo update
$ helm install magertron magertron/orchestrator
```