Remote Execution with Local Summaries: The Wrapper Pattern for Token Efficiency

Run security tools remotely, get summaries back. Professional infrastructure at a fraction of the cost.


What if your AI could run professional security tools without consuming your entire context window?

A full nmap scan returns thousands of lines. A nuclei vulnerability scan returns even more. Dump that raw output into an AI conversation and you've burned through your context budget before you've even started analyzing.

Intelligence Adjacent solves this with the VPS Code API pattern—remote execution with parsed summaries. This aligns with Anthropic's guidance on building effective agents: tools should return concise, actionable results rather than overwhelming context.

The Problem: Tool Output Overload

Think of it like getting a medical test. You don't want the raw lab data with every measurement, calibration value, and machine log. You want the doctor's summary: "Your cholesterol is high, here's what to do."

Most AI security tools dump the raw data. Every line of nmap output. Every finding from nuclei. Every packet from network capture. Your context fills up with noise.

VPS wrappers are the doctor's summary. Run the tool remotely. Parse the important findings. Return what matters. Store the raw data for later if you need it.

The Solution: VPS Code API Pattern

The framework uses a simple but powerful pattern:

Agent
  ↓
Python Wrapper
  ↓
SSH to VPS
  ↓
Docker Container
  ↓
Tool Execution
  ↓
Local File Storage
  ↓
Parsed Summary (returned)

Key insight: The wrapper saves raw output to a local file, then returns only a parsed summary. The agent gets enough context to make decisions. The full output is always available if needed.

Token savings: ~95% reduction (thousands of lines → concise summary)

The Infrastructure

The framework runs on affordable VPS infrastructure:

OVHcloud Security VPS: Dedicated security testing with Docker containers for specialized tools. On-demand model—containers start when needed, stop after engagement.

Hostinger Production: Production services including Ghost blog platform and n8n workflow automation.

Monthly cost: Comparable to a streaming subscription. Annual cost: 90%+ lower than equivalent capacity on AWS or DigitalOcean.

Security Tool Categories

The VPS hosts multiple specialized Docker containers:

Network Security: Port scanning, service detection, subdomain enumeration, HTTP probing, endpoint discovery

Vulnerability Assessment: Template-based scanning, web server analysis, application scanning, CMS-specific tools

Smart Contract Security: Solidity static analysis, formal verification, property-based fuzzing, blockchain interaction

Mobile Security: APK decompilation, Java decompilation, permissions analysis, dynamic instrumentation

Exploitation Framework: Module search, payload generation, reverse shell handling with background session management

Active Directory: Kerberos tools, credential validation, privilege escalation, certificate abuse

Cloud Security: Multi-cloud scanning, compliance checking, misconfiguration detection

AI/ML Security: Adversarial testing, prompt injection scanning, model vulnerability assessment

On-Demand Container Model

Containers don't run constantly—they start when an engagement begins and stop when it completes.

Why this matters:

  • Prevents disk accumulation (security tools generate substantial logs)
  • Reduces memory footprint when not in use
  • Containers ready to start in seconds when needed
  • Same tooling, lower resource consumption

The lifecycle is managed through Docker Compose with simple start/stop commands.
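
The start/stop flow is easy to script. Here's a minimal sketch, assuming an SSH alias (security-vps) and a Compose project path (/opt/security-tools) that are hypothetical stand-ins for your own setup:

import subprocess

VPS_HOST = "security-vps"            # hypothetical SSH alias from ~/.ssh/config
COMPOSE_DIR = "/opt/security-tools"  # hypothetical Compose project location

def containers(action: str, services: str = "") -> str:
    """Start or stop tool containers on the VPS via SSH + Docker Compose."""
    if action not in ("up -d", "stop"):
        raise ValueError("action must be 'up -d' or 'stop'")
    remote = f"cd {COMPOSE_DIR} && docker compose {action} {services}".strip()
    result = subprocess.run(["ssh", VPS_HOST, remote],
                            capture_output=True, text=True, check=True)
    return result.stdout

containers("up -d", "kali-pentest")  # engagement begins
containers("stop")                   # engagement ends; memory and disk freed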

The Wrapper Pattern

Every tool wrapper follows the same structure:

def nmap(target: str, options: str = "-sV -sC", engagement_dir: str | None = None):
    """Execute nmap on VPS, return parsed summary."""

    # 1. Execute on VPS via SSH + docker exec
    result = docker_exec("kali-pentest", f"nmap {options} {target}")

    # 2. Save raw output to local file
    output_file = save_to_engagement(result, engagement_dir)

    # 3. Parse key findings
    summary = parse_nmap_output(result)

    # 4. Return summary + file reference
    return {
        "summary": summary,          # ~150 tokens
        "outputFile": output_file,   # Full output path
        "message": format_summary(summary)
    }

The pattern is consistent: Execute remotely → Save locally → Parse findings → Return summary.

The agent uses the summary for decisions. If more detail is needed, it reads the full output file.
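
The helpers the wrapper above leans on (docker_exec and friends) aren't magic. Here's a minimal docker_exec sketch, assuming SSH key authentication and the same hypothetical security-vps alias:

import subprocess

VPS_HOST = "security-vps"  # hypothetical SSH alias

def docker_exec(container: str, command: str, timeout: int = 3600) -> str:
    """Run a command inside a named container on the VPS, return stdout."""
    result = subprocess.run(
        ["ssh", VPS_HOST, f"docker exec {container} {command}"],
        capture_output=True, text=True, timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(f"remote tool failed: {result.stderr.strip()}")
    return result.stdout

The generous default timeout matters: scans can run for a long time, and the SSH channel stays open for the duration.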

Evidence Organization

Output files follow a structured organization:

During engagements:

output/engagements/[client]/
├── 02-reconnaissance/
│   ├── nmap/
│   ├── subfinder/
│   └── httpx/
├── 03-vulnerability-assessment/
│   ├── nuclei/
│   └── nikto/
└── 04-exploitation/
    └── [tool-outputs]/

Ad-hoc testing:

sessions/[date]/
└── scans/
    ├── nmap/
    ├── nuclei/
    └── [tool]/

Raw output is timestamped and preserved. Nothing is lost. Everything is auditable.
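
A save_to_engagement sketch that produces the timestamped layout above might look like this (the phase and tool defaults are illustrative assumptions):

from datetime import datetime
from pathlib import Path

def save_to_engagement(raw_output: str, engagement_dir: str,
                       phase: str = "02-reconnaissance",
                       tool: str = "nmap") -> str:
    """Write raw tool output to a timestamped file in the engagement tree."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    out_dir = Path(engagement_dir) / phase / tool
    out_dir.mkdir(parents=True, exist_ok=True)  # create the tree on first use
    out_file = out_dir / f"{tool}-{stamp}.txt"
    out_file.write_text(raw_output)
    return str(out_file)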

Token Efficiency in Practice

Typical nmap scan:

  • Raw output: ~3,000 tokens
  • Parsed summary: ~150 tokens
  • Savings: 95%

Typical nuclei scan:

  • Raw output: ~5,000+ tokens
  • Parsed summary: ~200 tokens
  • Savings: 96%

What the summary includes:

  • Open ports and services detected
  • Critical/high severity findings
  • Key metrics (hosts up, services found)
  • Path to full output for deep dives

What gets stored in files:

  • Complete raw tool output
  • Timestamps and execution metadata
  • Full context for report generation
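
How does raw output become a 150-token summary? A minimal parse_nmap_output sketch, assuming nmap's default normal-format output (the regexes are illustrative, not exhaustive):

import re

def parse_nmap_output(raw: str) -> dict:
    """Pull open ports, services, and host counts from normal nmap output."""
    port_lines = re.findall(
        r"^(\d+)/(tcp|udp)\s+open\s+(\S+)(?:\s+(.*))?$", raw, re.MULTILINE)
    hosts_up = re.search(r"\((\d+) hosts? up\)", raw)
    return {
        "open_ports": [
            {"port": int(p), "proto": proto,
             "service": svc, "version": (ver or "").strip()}
            for p, proto, svc, ver in port_lines
        ],
        "hosts_up": int(hosts_up.group(1)) if hosts_up else 0,
    }

Severity filtering works the same way for nuclei output: keep critical and high findings verbatim, count the rest.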

Security Measures

Access Control:

  • SSH key authentication only (no passwords)
  • Minimal open ports (SSH only, everything else through zero-trust)
  • Fail2ban active for brute force protection
  • Root login disabled

Zero-Trust Networking:

  • Web services accessed through Twingate zero-trust access
  • No direct port exposure for sensitive interfaces
  • Container network isolation

Automated Monitoring:

  • Health checks every few minutes
  • Log rotation and monitoring
  • Security updates applied automatically

Why Not Cloud Functions?

You could run security tools in AWS Lambda or Cloud Run. Here's why a VPS works better:

Persistent state: Docker containers maintain tool configurations and databases between runs. Lambda is stateless.

Cost predictability: Fixed monthly cost regardless of usage. Cloud functions charge per invocation—heavy scanning gets expensive.

Tool compatibility: Security tools expect Linux environments with specific dependencies. A VPS gives you full control.

Long-running operations: Some scans take hours. Lambda times out at 15 minutes. A VPS runs as long as needed.

Integration with Framework

Security agents invoke VPS tools naturally:

  1. Skill loads: security-testing methodology
  2. Skill identifies: tool needed for this phase
  3. Wrapper executes: SSH → container → tool
  4. Summary returns: Agent continues with findings
  5. Full output: Available for report generation

The agent doesn't know or care about VPS infrastructure. It calls a function, gets findings back. The wrapper handles all the remote execution complexity.
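
From the agent's perspective, the whole stack collapses into one call (target and paths hypothetical):

result = nmap("203.0.113.10", engagement_dir="output/engagements/acme")

print(result["message"])                 # ~150-token summary drives the next decision
raw = open(result["outputFile"]).read()  # deep dive only when a finding warrants it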

Try It Yourself

The VPS Code API pattern isn't tied to this framework. You can:

  1. Deploy any VPS (OVHcloud, DigitalOcean, Linode, etc.)
  2. Install Docker and your security tools
  3. Create Python wrappers using the pattern above
  4. SSH from your local machine to execute

The pattern works for any remote tool execution—not just security testing.
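
As a sketch of that generality, here's the whole pattern reduced to one generic function, reusing the hypothetical docker_exec and save_to_engagement helpers sketched earlier (the parse argument is any callable that distills raw output):

def remote_tool(container: str, command: str, parse, session_dir: str) -> dict:
    """Generic wrapper: run remotely, save raw output locally, return a summary."""
    raw = docker_exec(container, command)                  # 1. execute on the VPS
    path = save_to_engagement(raw, session_dir,
                              phase="scans", tool=container)  # 2. preserve evidence
    summary = parse(raw)                                   # 3. distill key findings
    return {"summary": summary, "outputFile": path}        # 4. small payload back

Swap in any container and any parser: linters, load tests, data pipelines. The token economics are the same.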



Sources

Security Tools Referenced

  • Nmap - Network discovery and security auditing
  • Nuclei - Vulnerability scanner with templates
  • Nikto - Web server scanner