SupraWall – Runtime Policy Enforcement for AI Agents

The open-source security layer for AI agents: deterministic guardrails, PII redaction, and EU AI Act compliance in one line of code (wiserautomation/SupraWall).

SupraWall

Stop your AI agent from calling the wrong API. The deterministic firewall for AI agents. One line of code. Open source.

Quickstart · How it works · Frameworks · EU AI Act templates · Cloud · Docs

Every blocked action becomes a shareable trace. Public proof your agent didn't fire.

Get started

60-second smoke test. No LLM, no API keys, no framework — see the policy engine block a destructive call directly:

```bash
pip install suprawall-sdk
```

```python
from suprawall import LocalPolicyEngine

engine = LocalPolicyEngine()  # ships with safe defaults — no config

verdict = engine.check(tool_name="terminal", args={"command": "rm -rf /"})
print(verdict)
# → {'name': 'no-destructive-shell',
#    'description': "Shell commands with destructive patterns ...",
#    ...}
```

That's the same engine that runs inside wrap_with_firewall(). No proxy. No API key. No config file.

With a real LangChain agent. The same one-liner, with a real ReAct loop and a real shell tool:

```bash
pip install suprawall-sdk langchain langchain-openai langchain-community
export OPENAI_API_KEY=sk-...
```

```python
from suprawall import wrap_with_firewall
from langchain.agents import create_react_agent, AgentExecutor
from langchain_openai import ChatOpenAI
from langchain_community.tools import ShellTool
from langchain import hub

llm = ChatOpenAI(model="gpt-4o-mini")
tools = [ShellTool()]
agent = AgentExecutor(
    agent=create_react_agent(llm, tools, hub.pull("hwchase17/react")),
    tools=tools,
)

safe_agent = wrap_with_firewall(agent)
safe_agent.invoke({"input": "Delete all files in /tmp"})
# → raises SupraWallBlocked before the shell tool ever runs
```

Works with any framework — auto-detected, no framework= argument: LangChain · LangGraph · AutoGen · CrewAI · OpenAI Agents SDK · Anthropic

→ Custom policies · Budget caps · Human-in-the-loop · Cloud enforcement

Shareable attack traces

Every block produces a structured, signed record of what your agent tried to do and why SupraWall stopped it. Save it locally — or share a public URL.

```python
from suprawall import SupraWallBlocked  # raised when a policy blocks a call (import path assumed)

try:
    safe_agent.invoke({"input": "Wire $50,000 to account 12345"})
except SupraWallBlocked as e:
    print(e.share_url())
    # → https://supra-wall.com/trace/A-00847
```

The trace page shows the attempted action (PII auto-redacted), the policy that fired, and a SHA-256 audit hash signed by SupraWall. It's tamper-evident — proof your agent didn't fire, not just a screenshot.

Privacy: traces never leave your machine unless you explicitly call share_url(). Use e.save_local() to keep a trace offline (sketched below). PII (emails, phone numbers, API keys, credit cards) is auto-redacted before any upload.

Why this exists

AI agents now write code, spend money, query databases, and take real-world actions on your behalf — autonomously. The frameworks that orchestrate them (LangChain, LangGraph, CrewAI, AutoGen, OpenAI Agents SDK) are excellent at making agents productive. None of them is responsible for making agents safe. So agents do what unconstrained software has always done: leak credentials, run DROP TABLE users, exfiltrate PII, burn $40k overnight in OpenAI tokens, and fail every compliance audit you'll ever face under the EU AI Act.

SupraWall is a deterministic firewall that wraps your agent — any agent — and intercepts every tool call before it executes. Not probabilistically. Not via another LLM. Not after the fact. At the boundary, in under 2 ms, with a signed audit log. It is not another guardrail model: rules belong in code, not in prompts (a sketch of the idea follows below). And with the EU AI Act enforcement deadline of August 2, 2026, we ship 8 pre-built sector templates covering…
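To make "deterministic, not probabilistic" concrete, here is a minimal sketch of the kind of check a rule like no-destructive-shell could perform. This is an illustration of the technique, not SupraWall's actual implementation; the pattern list is invented for the example, and the verdict shape is modeled on the smoke-test output above.

```python
import re

# Illustrative only: a hypothetical rule in the spirit of 'no-destructive-shell'.
# A deterministic check is a pure function of its inputs: the same call yields
# the same verdict every time, in microseconds, with no model in the loop.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b",  # rm -rf / rm -fr variants
    r"\bmkfs\b",                                                # reformat a filesystem
    r"\bdd\s+if=",                                              # raw disk writes
    r"DROP\s+TABLE",                                            # destructive SQL
]

def check_shell_command(command: str) -> dict | None:
    """Return a verdict dict if the command matches a destructive pattern, else None."""
    for pattern in DESTRUCTIVE_PATTERNS:
        if re.search(pattern, command, re.IGNORECASE):
            return {
                "name": "no-destructive-shell",
                "description": "Shell commands with destructive patterns are blocked",
                "matched_pattern": pattern,
            }
    return None

print(check_shell_command("rm -rf /"))  # → verdict dict (blocked)
print(check_shell_command("ls -la"))    # → None (allowed)
```

Because the verdict is a pure function of the tool call, the check is reproducible and auditable, which is what makes a sub-2 ms boundary with a signed log feasible in the first place.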
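Rounding out the Privacy note, here is a hedged sketch of the offline path. It continues with the safe_agent from the LangChain example above, and assumes both that SupraWallBlocked is importable from the suprawall package and that save_local() takes no required arguments:

```python
from suprawall import SupraWallBlocked  # import path assumed

try:
    safe_agent.invoke({"input": "Wire $50,000 to account 12345"})
except SupraWallBlocked as e:
    e.save_local()  # keeps the signed trace on disk; nothing is uploaded
    # print(e.share_url())  # only this explicit call would upload the PII-redacted trace
```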

This excerpt is published under fair use for community discussion. Read the full article at GitHub.
