Deconvolute SDK
Advanced Architecture & Best Practices

Performance Characteristics

Understand the latency, memory, and CPU footprint of Deconvolute in production.

Security layers must not become bottlenecks. Deconvolute is designed to operate in high-throughput production environments with minimal overhead. Because the SDK relies entirely on deterministic, local evaluations rather than secondary LLM calls, the performance costs are highly predictable.

MCP Firewall

The firewall is designed for minimal overhead to ensure it does not bottleneck your agentic workflows.

Operation                           Complexity                     Typical Cost
Discovery (snapshot computation)    O(n), n = number of tools      ~1 ms per tool
Execution (snapshot verification)   O(1) per tool call             ~0.1 ms per call
Memory                              Linear in number of tools      ~32 bytes per sealed tool

For a typical MCP server exposing 10 to 20 tools, total discovery overhead is under 20 ms, and per-call execution overhead is negligible.
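The snapshot mechanism above can be sketched as a content-addressed map: hash every tool definition once at discovery time (O(n)), then verify each call with a single lookup and hash comparison (O(1)). The names below (`seal_tools`, `verify_tool`) are illustrative, not the SDK's actual API; note that a SHA-256 digest is exactly 32 bytes, matching the per-tool memory figure in the table.

```python
import hashlib
import json


def seal_tools(tools):
    """Discovery: hash each tool definition once -- O(n) in the number of tools."""
    return {
        tool["name"]: hashlib.sha256(
            # Canonical serialization so key order cannot change the hash.
            json.dumps(tool, sort_keys=True).encode()
        ).digest()  # 32-byte digest per sealed tool
        for tool in tools
    }


def verify_tool(snapshot, tool):
    """Execution: one dict lookup plus one hash comparison -- O(1) per call."""
    expected = snapshot.get(tool["name"])
    actual = hashlib.sha256(json.dumps(tool, sort_keys=True).encode()).digest()
    return expected == actual
```

A redefined tool (for example, a description swapped after discovery) produces a different digest and fails verification.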

State Management

The internal state tracker is kept entirely in-memory and does not persist between sessions. This ephemeral design ensures:

  • No disk I/O overhead during runtime.
  • No stale hashes carried over from a previous session in which a legitimate tool definition changed.
  • Clean state on every new connection.
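The ephemeral, per-connection lifecycle described above can be illustrated with a simple session scope. `firewall_session` is a hypothetical name used for this sketch, not part of the SDK:

```python
from contextlib import contextmanager


@contextmanager
def firewall_session():
    """State lives only in memory for the duration of one connection."""
    state = {"sealed_tools": {}}
    try:
        yield state
    finally:
        # Nothing is written to disk; the next connection starts clean.
        state.clear()
```

Because state is rebuilt on every connection, a tool that legitimately changed between sessions is simply re-sealed at the next discovery rather than tripping a stale hash.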

Network Overhead (Strict Mode)

  • Snapshot Integrity (Default): Adds zero network overhead during tool execution.
  • Strict Integrity: Adds at least one network round trip to the MCP server per tool call to re-fetch and verify the live tool definition. For servers exposing hundreds of tools, this may require multiple paginated requests when the requested tool is not on the first page. The latency cost scales with your network round-trip time and the server's pagination depth.
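To see why pagination depth matters in Strict mode, here is a minimal sketch of the re-fetch loop, assuming a cursor-based tool-listing endpoint. The names (`fetch_tool_strict`, `list_tools_page`) are hypothetical:

```python
def fetch_tool_strict(list_tools_page, name):
    """Page through the server's tool list until the requested tool is found.
    Each page fetched is one network round trip."""
    cursor, round_trips = None, 0
    while True:
        page, cursor = list_tools_page(cursor)
        round_trips += 1
        for tool in page:
            if tool["name"] == name:
                return tool, round_trips
        if cursor is None:
            return None, round_trips  # exhausted all pages without a match
```

If the tool sits on page k, each verified call pays k round trips, which is why per-call latency grows with pagination depth rather than staying constant.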

Content Scanners

Content scanners perform local string matching and evaluation, avoiding the latency of external API calls.

CPU Compute

  • SignatureScanner (YARA): YARA is an industry-standard pattern-matching engine that is heavily optimized for speed. Evaluating a standard chunk of text against the built-in ruleset typically takes sub-millisecond compute time.
  • CanaryScanner: Operates using basic string injection and substring verification. The CPU cost is effectively zero.
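The canary technique amounts to injecting a random marker and later checking for it with a single substring test, which is why its CPU cost is effectively zero. A minimal sketch with illustrative names and an assumed token format (the real scanner's token scheme may differ):

```python
import secrets

CANARY_PREFIX = "dcv-canary-"  # illustrative token format, not the SDK's


def inject_canary(text):
    """Prepend a random, unguessable marker to the text being protected."""
    token = CANARY_PREFIX + secrets.token_hex(8)
    return f"{token}\n{text}", token


def canary_leaked(output, token):
    """A single substring check -- effectively free at runtime."""
    return token in output
```

If the token later appears where it should not (for example, in a model's output destined for an external channel), the scanner flags a leak.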

Memory Footprint (LanguageScanner)

The LanguageScanner uses statistical models to detect languages, identified by their ISO 639-1 codes.

  • Default Mode: Loads models for all supported languages into memory. This consumes approximately 100MB of RAM.
  • Optimized Mode: In memory-constrained container environments, you can drastically reduce this footprint by explicitly passing a list to languages_to_load (e.g. ["en", "es"]). The scanner will only load the specific statistical models requested, dropping the footprint to just a few megabytes.
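The selective-loading trade-off can be modeled as follows. The `languages_to_load` parameter matches the option described above, but the loader function and the supported-language set here are purely illustrative:

```python
SUPPORTED = ("en", "es", "de", "fr", "it", "pt", "nl", "ja", "zh")  # illustrative subset


def load_language_models(languages_to_load=None):
    """Materialize only the requested ISO 639-1 models; None loads all of them."""
    selected = SUPPORTED if languages_to_load is None else tuple(languages_to_load)
    unknown = [code for code in selected if code not in SUPPORTED]
    if unknown:
        raise ValueError(f"unsupported language codes: {unknown}")
    # object() stands in for a per-language statistical model held in memory.
    return {code: object() for code in selected}
```

In a container with a tight memory limit, passing only the languages your application actually handles keeps the resident set proportional to that list rather than to the full catalog.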
