Security Clients
Client integrations.
These classes handle the direct connection to your LLM providers.
deconvolute.clients
deconvolute.clients.openai
OpenAIProxy Objects
class OpenAIProxy(BaseLLMProxy)

Synchronous proxy for the OpenAI client.
This wrapper serves as a transparent middleware for the OpenAI SDK. It intercepts
calls to client.chat.completions.create to inject security defenses (Canaries)
and validate outputs (Content Scanning).
All other calls (e.g. client.embeddings, client.images, client.models) are
transparently delegated to the underlying client via the BaseLLMProxy mechanism,
ensuring full compatibility with the original SDK.
chat
@property
def chat() -> "ChatProxy"

Access the intercepted 'chat' namespace.

Returns:
ChatProxy - A wrapper around the original client.chat object that enables interception of completion-creation calls.
AsyncOpenAIProxy Objects
class AsyncOpenAIProxy(BaseLLMProxy)

Asynchronous proxy for the OpenAI client.
Mirrors the behavior of OpenAIProxy but handles await calls for use in
asyncio event loops. It intercepts await client.chat.completions.create.
chat
@property
def chat() -> "AsyncChatProxy"

Access the intercepted async 'chat' namespace.

Returns:
AsyncChatProxy - A wrapper around the original client.chat object.
ChatProxy Objects
class ChatProxy()

Helper proxy to navigate the client.chat namespace.
This class exists to bridge the gap between client and client.chat.completions.
It forwards all unknown attributes to the real OpenAI chat object.
completions
@property
def completions() -> "CompletionsProxy"

Access the intercepted 'completions' namespace.

Returns:
CompletionsProxy - The core interceptor that wraps the create method.
CompletionsProxy Objects
class CompletionsProxy()

The core interceptor for synchronous Chat Completions.
This class handles the lifecycle of the security checks:
- Input: Modifies the 'messages' list (e.g. Canary injection).
- Execution: Calls the real OpenAI client.
- Output: Validates and sanitizes the response content.
create
def create(*args: Any, **kwargs: Any) -> Any

Wraps the standard openai.chat.completions.create method.
This method intercepts the request to inject security payloads and intercepts
the response to validate content. If stream=True is detected, output
validation is currently skipped to preserve streaming behavior.
Arguments:
*args - Positional arguments forwarded to OpenAI.
**kwargs - Keyword arguments (messages, model, tools, etc.) forwarded to OpenAI.
Returns:
ChatCompletion - The original OpenAI response object, but with its content sanitized (e.g. tokens removed) and validated.
Raises:
SecurityResultError - If a scanner (e.g. Language, Canary) identifies a threat in any of the generated choices.
DeconvoluteError - If integrity checks are enabled but the request configuration is invalid (e.g. missing a 'system' message).
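The input/execution/output lifecycle above can be sketched in a self-contained way. Note this is an illustrative stand-in, not deconvolute's actual implementation: FakeCompletions, CompletionsProxySketch, and the canary string are all invented for the example.

```python
# Illustrative sketch of the intercept lifecycle: inject on input,
# delegate execution, validate and sanitize on output.
CANARY = "<!-- canary:1234 -->"

class SecurityResultError(Exception):
    """Stand-in for the real error type raised when a scanner flags output."""

class FakeCompletions:
    """Pretends to be the OpenAI completions endpoint: echoes the last message."""
    def create(self, *, messages, model):
        return {"choices": [{"message": {"content": messages[-1]["content"]}}]}

class CompletionsProxySketch:
    def __init__(self, real):
        self._real = real

    def create(self, **kwargs):
        # Input step: inject a canary token into a system message.
        messages = [{"role": "system", "content": CANARY}] + kwargs.pop("messages")
        # Execution step: call the real client.
        response = self._real.create(messages=messages, **kwargs)
        # Output step: validate and sanitize every generated choice.
        for choice in response["choices"]:
            content = choice["message"]["content"]
            if CANARY in content:
                raise SecurityResultError("canary leaked into the output")
            choice["message"]["content"] = content.strip()
        return response

proxy = CompletionsProxySketch(FakeCompletions())
ok = proxy.create(messages=[{"role": "user", "content": " hello "}], model="gpt-x")
```

The real proxy performs the same three phases, with the scanner list deciding what gets injected and checked.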
AsyncChatProxy Objects
class AsyncChatProxy()

Helper proxy to navigate the async client.chat namespace.
completions
@property
def completions() -> "AsyncCompletionsProxy"

Access the intercepted async 'completions' namespace.
AsyncCompletionsProxy Objects
class AsyncCompletionsProxy()

The core interceptor for asynchronous Chat Completions.
Identical in logic to CompletionsProxy but uses await for execution
and validation calls.
create
async def create(*args: Any, **kwargs: Any) -> Any

Wraps the standard await openai.chat.completions.create method.
Arguments:
*args - Positional arguments forwarded to OpenAI.
**kwargs - Keyword arguments (messages, model, etc.).
Returns:
ChatCompletion - The sanitized OpenAI response object.
Raises:
SecurityResultError - If a threat is detected in the response.
DeconvoluteError - If the request configuration is invalid.
deconvolute.clients.mcp
MCPProxy Objects
class MCPProxy()

Transparent proxy for mcp.ClientSession that enforces security policies.
This proxy sits between the Application and the MCP Client. It intercepts:
- list_tools(): To filter out tools that are blocked by policy.
- call_tool(): To block execution of unsafe tools or detect tampering.
All other method calls (e.g. list_resources) are delegated directly to the underlying session.
__init__
def __init__(session: ClientSession,
firewall: MCPFirewall,
integrity_mode: IntegrityLevel = "snapshot",
transport_origin: TransportOrigin | None = None) -> None

Arguments:
session - The original connected mcp.ClientSession.
firewall - The configured enforcement engine.
integrity_mode - 'snapshot' (default) or 'strict'.
initialize
async def initialize(*args: Any, **kwargs: Any) -> Any

Intercepts session initialization to dynamically extract the server's identity.
__aenter__
async def __aenter__() -> "MCPProxy"

Allows using the guarded session directly in 'async with'. The underlying session is entered, but 'self' (the proxy) is returned.
__aexit__
async def __aexit__(exc_type: Any, exc_value: Any, traceback: Any) -> None

Passes context exit to the underlying session.
__getattr__
def __getattr__(name: str) -> Any

Delegates any unknown methods (like list_resources) to the real session.
list_tools
async def list_tools(*args: Any, **kwargs: Any) -> types.ListToolsResult

Intercepts tool discovery to hide blocked tools.
- Fetches all tools from the server.
- Passes them through the Firewall filter.
- Registers allowed tools in the SessionRegistry (snapshotting).
- Returns a ListToolsResult containing ONLY the allowed tools.
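The filter-and-snapshot flow above can be sketched with plain Python stand-ins. The Tool dataclass, the blocked-name set, and the registry dict are hypothetical; deconvolute's firewall and SessionRegistry will have richer interfaces.

```python
# Hypothetical sketch of list_tools: fetch, filter by policy, snapshot, return.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    description: str = ""

@dataclass
class ListToolsResultSketch:
    tools: list

class MCPProxySketch:
    def __init__(self, server_tools, blocked):
        self._server_tools = server_tools  # stands in for the real session
        self._blocked = set(blocked)       # stands in for the firewall policy
        self._registry = {}                # snapshot of allowed tools

    def list_tools(self):
        # Fetch everything the server advertises, then filter by policy.
        allowed = [t for t in self._server_tools if t.name not in self._blocked]
        # Snapshot the allowed tools so later calls can detect tampering.
        for tool in allowed:
            self._registry[tool.name] = tool.description
        return ListToolsResultSketch(tools=allowed)

proxy = MCPProxySketch([Tool("read_file"), Tool("exec_shell")],
                       blocked={"exec_shell"})
result = proxy.list_tools()
```

Because the blocked tool never enters the snapshot, the application cannot even discover it, let alone call it.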
call_tool
async def call_tool(name: str,
arguments: dict[str, Any] | None = None,
*args: Any,
**kwargs: Any) -> types.CallToolResult

Intercepts tool execution to enforce policy.
- Checks Firewall for Policy (Is this allowed?) and Integrity (Is this known?).
- If UNSAFE, returns a fake Error Result (prevents network call).
- If SAFE/WARNING, proceeds with the real network call.
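The decision gate described above amounts to two checks before any network traffic. A minimal sketch, with an invented result shape and a stub executor in place of the real MCP call:

```python
# Sketch of the call_tool gate: policy check, integrity check, then execute.
def call_tool_sketch(name, arguments, *, allowed, known, execute):
    # Policy check: is this tool allowed at all?
    if name not in allowed:
        return {"isError": True, "content": f"blocked by policy: {name}"}
    # Integrity check: does the tool match the snapshot taken at discovery?
    if name not in known:
        return {"isError": True, "content": f"unknown or tampered tool: {name}"}
    # SAFE: only now do we touch the network (here, a stub callable).
    return execute(name, arguments)

run = lambda name, args: {"isError": False, "content": f"{name} ran"}
blocked = call_tool_sketch("exec_shell", {}, allowed={"read_file"},
                           known={"read_file"}, execute=run)
safe = call_tool_sketch("read_file", {"path": "a.txt"}, allowed={"read_file"},
                        known={"read_file"}, execute=run)
```

Returning a synthetic error result for unsafe calls (rather than raising) keeps the response shape uniform for the calling application while still preventing the network call.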
secure_stdio_session_impl
@asynccontextmanager
async def secure_stdio_session_impl(
server_parameters: Any,
policy_path: str,
integrity: IntegrityLevel = "snapshot",
audit_log: str | None = None) -> AsyncIterator[Any]

Implementation for the secure stdio transport wrapper.
secure_sse_session_impl
@asynccontextmanager
async def secure_sse_session_impl(
url: str,
policy_path: str,
integrity: IntegrityLevel = "snapshot",
audit_log: str | None = None) -> AsyncIterator[Any]

Implementation for the secure SSE transport wrapper.
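Both helpers follow the same shape: enter the raw transport session, wrap it in a guard, and yield the guard. A self-contained sketch of that pattern, where SessionStub and GuardStub are invented placeholders for the real MCP session and MCPProxy:

```python
# Minimal sketch of a "secure session" async context manager.
import asyncio
from contextlib import asynccontextmanager

class SessionStub:
    async def __aenter__(self):
        return self
    async def __aexit__(self, *exc):
        return None

class GuardStub:
    def __init__(self, session, policy_path):
        self.session = session
        self.policy_path = policy_path

@asynccontextmanager
async def secure_session_sketch(policy_path):
    # Enter the underlying transport, then hand back the guarded wrapper.
    async with SessionStub() as session:
        # The real helpers would build an MCPFirewall from policy_path here.
        yield GuardStub(session, policy_path)

async def main():
    async with secure_session_sketch("policy.yaml") as guarded:
        return guarded.policy_path

policy = asyncio.run(main())
```

Yielding the wrapper (not the raw session) is what guarantees every call the application makes passes through the enforcement layer.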
deconvolute.clients.base
BaseLLMProxy Objects
class BaseLLMProxy()

Abstract base class for all client proxies.
This class provides the core infrastructure for storing state (client, keys, scanners) and transparently delegating attributes. It organizes scanners based on their capabilities (Injecting vs. Scanning).
Attributes:
_client Any - The original, wrapped LLM client instance.
api_key str | None - The Deconvolute API key.
_all_scanners list[BaseScanner] - The full list of active security scanners.
_injectors list[BaseScanner] - Scanners that implement 'inject'. Used to modify the input prompt (e.g. Canary).
_scanners list[BaseScanner] - Scanners that implement 'check'. Used to scan the output response (e.g. Language, Canary).
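The injector/scanner split can be sketched as a capability-based partition. The scanner classes here are invented; the real BaseScanner hierarchy may signal capabilities differently (e.g. via abstract methods rather than hasattr).

```python
# Sketch of partitioning scanners by capability, as described above.
class CanaryScanner:
    def inject(self, messages):  # can modify the input prompt
        return messages
    def check(self, text):       # can also scan the output
        return True

class LanguageScanner:
    def check(self, text):       # output-only scanner
        return True

def partition(scanners):
    # A scanner may land in both lists if it supports both capabilities.
    injectors = [s for s in scanners if hasattr(s, "inject")]
    checkers = [s for s in scanners if hasattr(s, "check")]
    return injectors, checkers

injectors, checkers = partition([CanaryScanner(), LanguageScanner()])
```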
__init__
def __init__(client: Any,
scanners: list[BaseScanner],
api_key: str | None = None)

Initializes the proxy infrastructure.
Arguments:
client - The original LLM client instance.
scanners - A strict list of scanners. The factory (llm_guard) is responsible for resolving defaults before calling this.
api_key - Optional Deconvolute API key.
__getattr__
def __getattr__(name: str) -> Any

Delegates attribute access to the underlying client.
This enables 'transparency': if the user calls a method we don't explicitly intercept, it passes through to the real client.
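The transparency mechanism relies on Python calling __getattr__ only when normal attribute lookup fails. A self-contained sketch (RealClient and ProxySketch are illustrative names):

```python
# Sketch of transparent delegation: anything the proxy does not define
# itself falls through to the wrapped client.
class RealClient:
    def embeddings(self):
        return "embeddings from the real client"

class ProxySketch:
    def __init__(self, client):
        self._client = client

    def __getattr__(self, name):
        # Only invoked when normal lookup fails on the proxy, so any
        # explicitly intercepted attributes are never shadowed by this.
        return getattr(self._client, name)

proxy = ProxySketch(RealClient())
result = proxy.embeddings()
```

This is why un-intercepted namespaces like client.models or client.images keep working unchanged through the proxy.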