About

What MCP Trap does, and what it does not do.

MCP Trap is a free, single-page analyzer for the Model Context Protocol attack surface. You paste a tools/list response or a tool-call JSON Schema, you get back a research-cited findings list and a portable test pack you can run against your own agent.

What does MCP Trap do?

  • Runs nine deterministic static rules over your schema. Each cites a primary source (Invariant Labs, Trail of Bits, Hou et al., Equixly, OWASP, ToolEmu).
  • Runs four LLM-derived analyses for things deterministic rules cannot do well: capability composition across tools, semantic mismatch between name and parameters, untrusted-source to trusted-sink trust-flow edges, and a plain-English explainer over every finding.
  • Generates a vendor-neutral test pack of adversarial inputs parameterized to your tools, drawn from published research. Markdown and JSON formats.

What does MCP Trap not do?

  • Score you. There is no security grade. Findings only.
  • Scan arbitrary URLs from our infrastructure. Our servers are a JSON sink, never an outbound fetcher. Live probing always runs from your machine via the open-source mcptrap-probe CLI; you can read the source before you run it.
  • Generate attacks freeform with an LLM. The test pack parameterizes patterns from the published literature; the LLM is used for analysis, not attack invention.
  • Store your schema. We hash it for cache lookup and analyze it in-request. The original text is not persisted.

How does MCP Trap relate to Progress Observability?

MCP Trap tells you what to send. The Progress Observability Platform shows you what your agent did with each input: which MCP tool got called, what the LLM did with the poisoned description, where context leaked. You run the test pack, you observe in Progress.

Built by

Lyubomir Atanasov. Sibling tool to judgekit, a research-grounded LLM-as-a-Judge prompt generator.