Intelligence Adjacent Framework Setup Guide

The Intelligence Adjacent (IA) framework augments human intelligence through file-based context management, a skills-based architecture, and automated tool orchestration.

This guide walks through installation from prerequisites to deployment. Most users complete core setup in 10-15 minutes. Optional components (VPS security tools, blog publishing, multi-model AI) take additional time depending on what you configure.

What you'll configure:

  • Core framework with skills and specialized agents
  • Slash commands for guided workflows
  • Optional: VPS-hosted security tools with automated deployment
  • Optional: Blog publishing integration for Ghost CMS
  • Optional: Multi-model AI access via OpenRouter (Grok, Perplexity, etc.)

How it works: You run the setup script and add credentials to a .env file. The framework handles dependency installation, environment configuration, and infrastructure deployment automatically.

Let's get started.

Part 1: Prerequisites

Before installing the framework, verify you have the required software.

Required (Core Framework)

Python 3.10 or higher

The framework requires Python 3.10+ because the core tools and wrappers use modern Python features like structural pattern matching, improved type hints, and async/await enhancements introduced in 3.10.

  • Download: python.org/downloads
  • Verify installation: python3 --version or python --version
  • Minimum version: Python 3.10
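If you'd rather check programmatically than parse `python3 --version` output, a quick stdlib check works:

```python
import sys

# The framework needs 3.10+ for structural pattern matching and newer typing features.
MIN_VERSION = (3, 10)
ok = sys.version_info[:2] >= MIN_VERSION
print("Python", ".".join(map(str, sys.version_info[:2])),
      "-", "OK" if ok else "too old, need 3.10+")
```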

Git

Git is required to clone the framework repository and manage version control for any customizations you make to skills, agents, or commands.

Package manager (choose one):

You need a Python package manager to install dependencies from requirements.txt.

  • uv (recommended for speed and better dependency resolution)
    • Install on Linux/Mac: curl -LsSf https://astral.sh/uv/install.sh | sh
    • Install on Windows: powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
    • Why uv? It resolves dependencies faster than pip and handles complex dependency trees more reliably.
  • pip (standard alternative)
    • Usually included with Python installations
    • Verify: pip --version or pip3 --version
    • If missing, install: python3 -m ensurepip --upgrade

Optional Components

Node.js or Bun (for blog publishing):

Only needed if you plan to publish blog posts to Ghost CMS using the blog-writer skill. The TypeScript publishing tools require a JavaScript runtime.

  • Bun (recommended for faster execution): bun.sh
  • Node.js (standard alternative): nodejs.org

Docker Desktop (for local security tools testing):

Only needed if you want to test security tools locally without a VPS. Docker Desktop runs containers on your machine for learning the framework or testing tools before production deployment.

VPS Account (for production security tools):

Only needed if you plan to use security testing skills in production.

  • Providers: OVHcloud, DigitalOcean, Linode, Hostinger, Vultr
  • Recommended specs: 2GB RAM minimum, 4GB recommended
  • Cost: $5-7/month for basic VPS
  • Why VPS? Clean IP reputation matters for security testing (your home IP shouldn't be associated with scanning activity), and a VPS provides isolated infrastructure separate from your main system.

SSH client:

Required for VPS deployment automation if you configure security tools.

  • Linux/Mac: Pre-installed
  • Windows: Included in Windows 10/11 (Settings → Apps → Optional Features → OpenSSH Client)

That's all for prerequisites. Most users only need Python, Git, and a package manager for core functionality.

Part 2: Quick Install

The framework provides automated setup scripts for all platforms.

Critical requirement: The framework MUST be installed in the ~/.claude directory (Claude's root directory). This is a flat architecture where all framework files—CLAUDE.md, skills, agents, servers, tools—live in one location. The framework expects all paths relative to this root. Installing elsewhere will break the system.

Understanding What the Setup Script Does

Before running installation, understand what the automated script will do:

  1. Checks prerequisites - Verifies Python, Git, Node/Bun, Docker, and SSH are installed (skips optional components gracefully if missing)
  2. Installs Python dependencies - Reads requirements.txt and installs packages using uv (or falls back to pip)
  3. Installs Node dependencies - Reads package.json and installs if Bun or Node.js detected (skips if not present)
  4. Installs Playwright browsers - Downloads Chromium for screenshot capabilities (optional feature, skips if Playwright not needed)
  5. Creates .env file - Copies .env.example to .env so you can add credentials

Total time: 2-3 minutes depending on connection speed.

The script handles errors gracefully—if an optional component is missing, it continues with core setup.
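That graceful-skip behavior boils down to a few `shutil.which` lookups. This is a simplified sketch of the idea, not the actual setup script; the tool lists here are assumptions based on the steps above:

```python
import shutil

# Mirrors the setup script's behavior: missing required tools abort setup,
# missing optional ones are simply skipped.
REQUIRED = ["git"]
OPTIONAL = ["bun", "node", "docker", "ssh"]

def check_prereqs():
    missing_required = [t for t in REQUIRED if shutil.which(t) is None]
    skipped = [t for t in OPTIONAL if shutil.which(t) is None]
    return missing_required, skipped

missing, skipped = check_prereqs()
if missing:
    print("Cannot continue, missing:", ", ".join(missing))
else:
    print("Core prerequisites satisfied; skipping optional:",
          ", ".join(skipped) or "none")
```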

Windows Installation

Option 1: Automated (Recommended)

# Clone the repository to Claude's root directory
git clone https://github.com/notchrisgroves/ia-framework.git "$env:USERPROFILE\.claude"
cd "$env:USERPROFILE\.claude"

# Run setup script
.\setup.ps1

Option 2: Manual

If you prefer to run each step manually:

# Clone repository to Claude's root directory
git clone https://github.com/notchrisgroves/ia-framework.git "$env:USERPROFILE\.claude"
cd "$env:USERPROFILE\.claude"

# Install Python dependencies
uv pip install -r requirements.txt

# Install Node dependencies (optional, skip if not using blog-writer)
bun install

# Create environment file
copy .env.example .env

Note: $env:USERPROFILE\.claude typically expands to C:\Users\YourUsername\.claude

Linux/Mac Installation

Option 1: Automated (Recommended)

# Clone the repository to Claude's root directory
git clone https://github.com/notchrisgroves/ia-framework.git ~/.claude
cd ~/.claude

# Run setup script
chmod +x setup.sh
./setup.sh

Option 2: Manual

If you prefer to run each step manually:

# Clone repository to Claude's root directory
git clone https://github.com/notchrisgroves/ia-framework.git ~/.claude
cd ~/.claude

# Install Python dependencies
uv pip install -r requirements.txt

# Install Node dependencies (optional, skip if not using blog-writer)
bun install

# Create environment file
cp .env.example .env

Why ~/.claude? The framework uses a flat architecture where CLAUDE.md (the organizational root), all skills, agents, servers, and tools live in the same directory. Claude Code loads context from this location, and the framework expects all paths relative to this root. This design simplifies context loading and eliminates the need for complex path resolution.

Setup Script Options

You can skip optional dependency installation if you don't need certain features:

# Skip Python dependency installation (if you already installed manually)
./setup.sh --skip-python

# Skip Node dependency installation (if not using blog-writer)
./setup.sh --skip-node

# Skip both (just create .env file)
./setup.sh --skip-python --skip-node

After setup completes, you'll see a success message with next steps.

Part 3: Core Configuration

The framework uses a .env file for all credentials and configuration. The setup script created this file from .env.example with placeholder values.

Understanding .env Structure

The .env file organizes credentials by integration category. Each section is optional—the framework works without any .env configuration; you just won't have access to external service integrations.

# VPS Configuration (optional - only for security tools)
VPS_HOST=YOUR_VPS_IP_HERE
VPS_USER=YOUR_VPS_USERNAME
SSH_PRIV=~/.ssh/YOUR_SSH_KEY

# Ghost Blog API (optional - only for blog publishing)
GHOST_API_URL=https://YOUR_SITE.ghost.io
GHOST_ADMIN_API_KEY=YOUR_ADMIN_KEY_HERE
GHOST_CONTENT_API_KEY=YOUR_CONTENT_KEY_HERE

# OpenRouter API (optional - for multi-model access)
OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE

# Other optional integrations...

Python is the only hard requirement because all core framework tools and wrappers are written in Python. Without configuring .env, you can still explore the framework documentation, use local file-based tools (PDF splitting, inventory generation, etc.), and understand the architecture—you just won't be able to connect to external services like VPS hosts, Ghost CMS, or OpenRouter.
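The framework's Python tools read these values with python-dotenv. Conceptually, parsing a .env file amounts to the following stdlib-only sketch (it ignores quoting and multiline edge cases that python-dotenv handles):

```python
import os
import tempfile

def parse_env(path):
    """Minimal .env parser: KEY=VALUE lines; '#' comments and blanks ignored."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Demo with a throwaway file standing in for ~/.claude/.env
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("# VPS Configuration\nVPS_HOST=203.0.113.10\nVPS_USER=debian\n")
    path = fh.name

env = parse_env(path)
print(env)
os.unlink(path)
```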

Minimal Setup (Framework Only)

For exploring the framework without external integrations:

  1. Keep .env file as-is with all placeholder values
  2. You can now:
    • Read all skills documentation in skills/*/SKILL.md
    • View slash command registry in commands/
    • Explore agent capabilities in agents/
    • Use file-based tools like PDF splitting (tools/pdf/splitter.py)
    • Generate framework inventory (tools/generate-materials-inventory.py)

What you cannot do without .env configuration:

  • Run security tools on VPS or Docker containers (requires VPS_HOST or Docker)
  • Publish blog posts to Ghost (requires GHOST_ADMIN_API_KEY)
  • Use multi-model AI features via OpenRouter (requires OPENROUTER_API_KEY)

This minimal setup is perfect for:

  • Learning the framework architecture and understanding the skills system
  • Testing local tools and exploring documentation
  • Customizing skills, agents, or commands for your specific needs
  • Evaluating whether the framework fits your workflow before committing to VPS/service setup

Full Setup (All Features)

For production use with all capabilities:

Configure the optional integrations you want. The following section (Part 4) provides detailed setup instructions for each optional component. You don't need to configure all of them—choose the integrations relevant to your use case.

Part 4: Optional Component Setup

Choose which optional features you want to enable. Each subsection provides complete setup instructions.

Security Tools (VPS Deployment)

What this enables: 46 security tools across 5 Docker containers, accessible via Python wrappers with 90-95% token reduction.

Available containers and their purposes:

  • kali-pentest: 18 tools for penetration testing (nmap, nuclei, sqlmap, nikto, wpscan, gobuster, ffuf, etc.)
  • mobile-security: 9 tools for mobile app security testing (jadx, apktool, frida, androguard, etc.)
  • web3-security: 13 tools for smart contract auditing (slither, mythril, semgrep, halmos, etc.)
  • reaper: 4 production reconnaissance tools (feroxbuster, subfinder, httpx, katana)
  • metasploit: 2 exploitation tools (msfconsole, msfvenom)

Prerequisites:

  • VPS running Ubuntu 22.04+ or Debian 12+ (any provider)
  • SSH key authentication configured on the VPS
  • Budget: $5-7/month for VPS hosting

Setup steps:

1. Get a VPS

Sign up with any VPS provider (OVHcloud, DigitalOcean, Linode, Hostinger, Vultr, etc.) and create a server with these specs:

  • Operating System: Ubuntu 22.04+ or Debian 12+
  • RAM: 2GB minimum (4GB recommended for running multiple tools simultaneously)
  • Storage: 20GB minimum (40GB recommended for large wordlists and evidence files)
  • Note the IP address after provisioning

2. Setup SSH key authentication

Generate an SSH key if you don't already have one, then copy it to your VPS:

# Generate SSH key (skip if you already have one)
ssh-keygen -t ed25519 -f ~/.ssh/vps_key

# Copy public key to VPS
# Method 1: Via ssh-copy-id (if you can password-login first)
ssh-copy-id -i ~/.ssh/vps_key.pub your-username@your-vps-ip

# Method 2: Via VPS provider console (paste contents of ~/.ssh/vps_key.pub)
cat ~/.ssh/vps_key.pub

# Test connection
ssh -i ~/.ssh/vps_key your-username@your-vps-ip

If the test connection succeeds, SSH key authentication is configured correctly.

3. Add credentials to .env

Open ~/.claude/.env and update these three variables:

VPS_HOST=YOUR_VPS_IP          # Example: 15.204.218.153
VPS_USER=YOUR_VPS_USERNAME    # Example: debian, ubuntu, or root
SSH_PRIV=~/.ssh/vps_key       # Path to your SSH private key
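Under the hood, wrappers combine these three variables into an SSH invocation. A sketch of that composition (illustrative only; the command is built here but not executed, and the placeholder values stand in for your real .env entries):

```python
import os
from pathlib import Path

# Placeholder values; in the framework these come from ~/.claude/.env
os.environ.setdefault("VPS_HOST", "203.0.113.10")
os.environ.setdefault("VPS_USER", "debian")
os.environ.setdefault("SSH_PRIV", "~/.ssh/vps_key")

def build_remote_command(container, tool_args):
    """Compose the ssh + docker exec invocation a wrapper would run."""
    key = str(Path(os.environ["SSH_PRIV"]).expanduser())
    target = f'{os.environ["VPS_USER"]}@{os.environ["VPS_HOST"]}'
    return ["ssh", "-i", key, target, "docker", "exec", container, *tool_args]

cmd = build_remote_command("kali-pentest", ["nmap", "-F", "scanme.nmap.org"])
print(" ".join(cmd))
```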

4. Deploy infrastructure automatically

You're done with manual setup. The framework handles the rest.

When you're ready to deploy security tools, ask the framework:

"Set up my security testing environment on the VPS"

The framework will automatically:

  1. SSH into your VPS using the credentials from .env
  2. Check if Docker is installed (installs it if missing)
  3. Pull required Docker images (Kali Linux base image, custom security tool containers)
  4. Deploy all 5 containers with proper configuration (network isolation, persistent volumes)
  5. Install all 46 security tools inside the containers
  6. Configure Python wrapper scripts to use SSH + docker exec for remote execution
  7. Create persistent storage volumes for evidence preservation
  8. Test connectivity to each container
  9. Report: "Ready! You have 46 security tools available."

You don't run Docker commands manually—the framework handles deployment through automated runbooks in skills/infrastructure-ops/runbooks/.

VPS Code API Architecture:

The framework uses a novel architecture for remote tool execution that reduces token consumption by 90-95% compared to traditional Model Context Protocol (MCP) servers. Understanding this architecture helps you appreciate why VPS deployment is token-efficient.

Traditional MCP approach (NOT used by this framework):

Request → MCP Server → Protocol layer overhead → Docker exec →
Parse full output → Return 8,000+ tokens

IA Framework approach (VPS Code API):

Your request → Python wrapper (local) → SSH + docker exec (VPS) →
Tool execution → Save full output to local file →
Return 150-token summary

Why this matters:

  • Token reduction: Most security tools generate verbose output (nmap scans, nuclei results, etc.). Returning full output consumes 5,000-10,000 tokens per tool execution. The VPS Code API saves full output to servers/{container}/output/{timestamp}-{tool}.txt and returns only a summary (exit code, key findings, file path). This reduces token consumption from ~8,000 to ~150 tokens per execution.
  • Evidence preservation: Full output is saved locally in timestamped files, providing a complete audit trail for security testing engagements.
  • Automatic caching: File-based output enables simple caching—if you re-run the same scan, the wrapper can check if recent output exists and skip redundant execution.
  • Zero protocol overhead: Direct SSH + docker exec has no MCP protocol layer, making debugging trivial (just SSH to the VPS and check docker logs).
  • Simple maintenance: No MCP server process to manage, restart, or troubleshoot.

For complete technical details about the VPS Code API architecture, see servers/ARCHITECTURE.md.
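The save-then-summarize flow above can be sketched in a few lines. This is a simplified illustration, not the framework's wrapper code: a local Python one-liner stands in for the remote tool so the example runs anywhere, and the file naming is an assumption based on the pattern described above:

```python
import subprocess
import sys
import tempfile
import time
from pathlib import Path

def run_and_summarize(cmd, tool_name, output_dir):
    """Run a command, save full output to a timestamped file, return a tiny summary."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    out_file = Path(output_dir) / f"{stamp}-{tool_name}.txt"
    out_file.write_text(result.stdout)
    # Only this compact dict returns to the model's context; the bulk stays on disk.
    return {
        "exit_code": result.returncode,
        "output_file": str(out_file),
        "lines": len(result.stdout.splitlines()),
    }

# Stand-in for "ssh ... docker exec kali-pentest nmap ..."
out_dir = tempfile.mkdtemp()
summary = run_and_summarize(
    [sys.executable, "-c", "print('Nmap done: 1 host up')"], "nmap", out_dir)
print(summary)
```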

Security Tools (Local Docker)

Alternative to VPS: Run security tools locally using Docker Desktop.

When to use local Docker:

  • Learning the framework before committing to VPS costs
  • Testing tools in a safe environment before production deployment
  • Budget constraints (no VPS subscription needed)
  • Situations where IP reputation doesn't matter (note: your home IP will be associated with scanning activity if you test external targets)

Prerequisites:

  • Docker Desktop installed and running
  • 10GB free disk space for container images and tool data

Setup steps:

1. Install Docker Desktop

Download Docker Desktop for your operating system from docs.docker.com/get-docker, install it, and start the Docker engine.

Verify Docker is running:

docker ps

If this command returns a list of containers (or an empty list with column headers), Docker is running correctly.

2. Leave VPS settings empty in .env

Don't configure VPS_HOST, VPS_USER, or SSH_PRIV in your .env file. The framework will detect the absence of VPS credentials and default to local Docker execution.

# VPS settings - leave commented out for local Docker
# VPS_HOST=
# VPS_USER=
# SSH_PRIV=

3. Deploy infrastructure locally

Ask the framework:

"Set up security tools locally using Docker"

The framework will automatically:

  1. Verify Docker Desktop is running
  2. Pull Kali Linux base image and security tool containers
  3. Create containers with proper configuration (bind mounts for evidence output)
  4. Install all security tools inside containers
  5. Configure Python wrappers to use docker exec for localhost execution (instead of SSH)
  6. Test tools with sample commands
  7. Report: "Ready! Tools running locally."

Blog Publishing (Ghost CMS)

What this enables: Automated blog post publishing to Ghost CMS, including markdown-to-HTML conversion, featured image upload, and content tier assignment.

Prerequisites:

  • Ghost blog (either ghost.io hosted subscription or self-hosted installation)
  • Admin API access (requires creating a custom integration in Ghost Admin)

Setup steps:

1. Create Ghost integration

Log into your Ghost Admin panel and create a custom integration for the framework:

  • Navigate to Settings → Integrations
  • Click "Add custom integration"
  • Name: "Intelligence Adjacent Framework"
  • After creation, you'll see Admin API Key and Content API Key
  • Copy both keys (you'll need them for .env configuration)

2. Add credentials to .env

Open ~/.claude/.env and update these variables:

GHOST_API_URL=https://YOUR_SITE.ghost.io
GHOST_ADMIN_API_KEY=YOUR_24CHAR_ID:64CHAR_SECRET
GHOST_CONTENT_API_KEY=YOUR_CONTENT_KEY_HERE

The Admin API Key format is id:secret (24-character ID, colon separator, 64-character secret). Make sure you copy the entire key including the colon.
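The colon matters because Ghost's Admin API authenticates with a short-lived JWT built from the two halves: the ID goes into the token header as `kid`, and the hex-decoded secret signs the token. A stdlib-only sketch of that construction, following Ghost's documented scheme (in practice you'd use a JWT library; the key below is a dummy in the id:secret shape):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def ghost_admin_token(admin_api_key: str) -> str:
    """Build the short-lived HS256 JWT that Ghost's Admin API expects."""
    key_id, secret_hex = admin_api_key.split(":")
    now = int(time.time())
    header = {"alg": "HS256", "typ": "JWT", "kid": key_id}
    payload = {"iat": now, "exp": now + 300, "aud": "/admin/"}
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(payload).encode()))
    signature = hmac.new(bytes.fromhex(secret_hex),
                         signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

# Dummy key in the id:secret format (24 hex chars, colon, 64 hex chars)
token = ghost_admin_token("0123456789abcdef01234567" + ":" + "ab" * 32)
print(token.count("."))
```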

3. Install Node/Bun dependencies

If you skipped Node dependency installation during initial setup, install them now:

cd ~/.claude/skills/blog-writer
bun install  # or npm install if using Node.js
cd ../..

4. Test connection

Verify the framework can connect to your Ghost blog:

cd ~/.claude/skills/blog-writer
bun run tests/test-ghost.ts

If successful, you'll see: ✅ Ghost API connection successful

Usage:

The blog-writer skill provides several TypeScript tools for managing Ghost content:

cd ~/.claude/skills/blog-writer

# List all posts
bun run list-posts.ts

# List pages
bun run list-pages.ts

# Publish workflow (interactive - guides you through creating a post)
bun run publish-workflow.ts

For complete publishing workflow details, including content tier assignment, featured image upload, and markdown conversion, see skills/blog-writer/SKILL.md.

Multi-Model AI Access (OpenRouter)

What this enables: Access to Grok, Perplexity, Claude variants, and 200+ other AI models through a unified API.

Used for:

  • Blog post QA review (Grok provides adversarial perspective for quality assurance)
  • Multi-model research (personal-development and osint-research skills use multiple models for comprehensive analysis)
  • Verification and fact-checking (cross-model validation for critical claims)

Prerequisites:

  • OpenRouter account (openrouter.ai)

Setup steps:

1. Get OpenRouter API key

Visit openrouter.ai, sign up or log in, navigate to the Keys section, and create a new API key. Copy the key (starts with sk-or-).

2. Add to .env

Open ~/.claude/.env and update:

OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE

Cost: Approximately $0.03/month for typical usage (10-15 blog post QA reviews with Grok). OpenRouter charges per-token based on the specific model you use. Grok and Perplexity have competitive pricing compared to direct API access.

Usage:

The framework's multi-model features are accessed through skills like qa-review, osint-research, and personal-development. For direct OpenRouter usage examples, see servers/openrouter/README.md.
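For orientation, an OpenRouter chat-completion call is a standard OpenAI-style POST to a single endpoint. This sketch builds (but does not send) one; the model slug is an illustrative placeholder, and the key is a dummy:

```python
import json
import os

os.environ.setdefault("OPENROUTER_API_KEY", "sk-or-EXAMPLE")  # dummy placeholder key

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
    "Content-Type": "application/json",
}
payload = {
    # Illustrative model slug; check openrouter.ai/models for current names.
    "model": "x-ai/grok-2",
    "messages": [{"role": "user",
                  "content": "Review this blog draft for weak arguments."}],
}
print(json.dumps(payload, indent=2))
```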

Discord Notifications (Optional)

What this enables: Automated notifications when blog posts are published, security scans complete, or other framework events occur.

Setup steps:

1. Create Discord webhook

In your Discord server:

  • Server Settings → Integrations → Webhooks
  • Create webhook for the desired notification channel
  • Copy webhook URL

2. Add to .env

DISCORD_WEBHOOK_URL=https://discord.com/api/webhooks/YOUR_ID/YOUR_TOKEN

n8n Workflow Automation (Optional)

What this enables: Advanced workflow automation and integration orchestration for complex multi-step processes.

Prerequisites:

  • n8n instance (either self-hosted or n8n.cloud subscription)

Setup steps:

1. Get n8n API key

In your n8n instance:

  • Settings → API
  • Create new API key
  • Copy the key

2. Add to .env

N8N_API=YOUR_N8N_API_KEY
N8N_INSTANCE_URL=https://YOUR_N8N_INSTANCE.com/

For workflow automation examples and integration patterns, see servers/n8n/README.md.

Blockchain RPC Endpoints (Web3 Security)

What this enables: Smart contract testing and blockchain interaction for Web3 security auditing.

Prerequisites:

  • Alchemy, Infura, or QuickNode account (Alchemy recommended for generous free tier)

Setup steps:

1. Get RPC endpoints

Sign up at alchemy.com and create apps for the networks you want to test:

  • Create app for Ethereum Mainnet
  • Create app for Polygon Mainnet
  • Create app for Arbitrum Mainnet (if needed)
  • Copy RPC URLs from each app dashboard

2. Add to .env

ETH_RPC_URL=https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY
POLYGON_RPC_URL=https://polygon-mainnet.g.alchemy.com/v2/YOUR_KEY
ARBITRUM_RPC_URL=https://arb-mainnet.g.alchemy.com/v2/YOUR_KEY
ETHERSCAN_API_KEY=YOUR_ETHERSCAN_KEY

For smart contract auditing workflows using these endpoints, see servers/web3-security/README.md.
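These endpoints speak standard Ethereum JSON-RPC. For example, fetching the latest block number is a single POST; this sketch builds the request body and decodes a sample hex response without making a network call:

```python
import json

# Standard JSON-RPC 2.0 body for eth_blockNumber; POST this to ETH_RPC_URL.
request_body = {"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}
print(json.dumps(request_body))

# Responses carry the block number as a hex quantity string, e.g.:
sample_response = {"jsonrpc": "2.0", "id": 1, "result": "0x112a880"}
block_number = int(sample_response["result"], 16)
print(block_number)
```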

Twingate Zero-Trust Network Access (Advanced)

What this enables: Secure remote access to VPS services without exposing ports publicly, using zero-trust network architecture.

When to use Twingate:

  • Running services on VPS (n8n workflow automation, Ghost CMS, databases, monitoring tools)
  • Want secure access without exposing services to the public internet
  • Need zero-trust network architecture with identity-based access controls
  • Professional security posture for production infrastructure

Why zero-trust matters: Traditional VPS security exposes services on public ports (e.g., n8n on port 5678, Ghost on port 2368) and relies on firewall rules to restrict access. Zero-trust architecture assumes no network is trustworthy—services bind to localhost only, and Twingate creates encrypted tunnels for authenticated access. This eliminates the public-facing attack surface for those services.

Prerequisites:

  • VPS with services running (Hostinger, OVHcloud, DigitalOcean, etc.)
  • Twingate account (free tier available at twingate.com)

Setup steps:

1. Create Twingate account

Visit twingate.com, sign up (free tier supports up to 5 users and unlimited resources), and create your network (e.g., "my-infrastructure"). The network name becomes your Twingate domain.

2. Deploy Twingate Connector on VPS

The Twingate Connector is a lightweight Docker container that runs on your VPS and creates encrypted tunnels back to the Twingate network. The framework can automate deployment:

"Deploy Twingate connector on my VPS"

The framework will automatically:

  • SSH into your VPS using credentials from .env
  • Install Docker if not present
  • Deploy Twingate connector container with proper configuration
  • Register connector with your Twingate network using generated tokens
  • Verify connectivity to Twingate cloud infrastructure

Manual deployment (if you prefer manual control):

First, generate connector tokens via Twingate Admin Console:

  • Settings → Connectors → Generate Tokens
  • Save the access token and refresh token

Then SSH to your VPS and run:

ssh -i ~/.ssh/vps_key user@vps-ip

# Deploy Twingate connector
docker run -d \
  --name=twingate-connector \
  --restart=unless-stopped \
  -e TWINGATE_NETWORK="your-network" \
  -e TWINGATE_ACCESS_TOKEN="your-access-token" \
  -e TWINGATE_REFRESH_TOKEN="your-refresh-token" \
  -e TWINGATE_LOG_LEVEL="info" \
  twingate/connector:latest

Verify the connector is running:

docker ps | grep twingate-connector

3. Register Resources

For each service you want to access (n8n, Ghost, SSH, databases, etc.), register it as a Twingate Resource:

In Twingate Admin Console:

  • Navigate to Resources → Add Resource
  • Name: "n8n Automation" (or whatever describes the service)
  • Address: 127.0.0.1 (localhost on VPS—services don't need public exposure)
  • Port: 5678 (or whatever port your service uses internally)
  • Protocol: TCP

Repeat for each service. This creates DNS aliases like n8n.twingate that route through encrypted tunnels.

4. Install Twingate Client

Download the Twingate Client for your operating system from twingate.com/download, install it, sign in with your account, and connect to your network.

5. Test access

After connecting via Twingate Client, test access to your registered resources:

# Test SSH via Twingate (if you registered SSH as a resource)
ssh -p 2222 your-vps.twingate

# Test n8n via Twingate (if you registered n8n)
curl http://n8n.twingate:5678

# Test any configured resource using its Twingate DNS alias

Benefits of zero-trust architecture:

  • Zero public exposure: Services bind to localhost (127.0.0.1), eliminating public attack surface. No firewall ports needed.
  • Encrypted tunnels: All traffic encrypted end-to-end using WireGuard protocol (industry-standard, faster than OpenVPN).
  • Identity-based access policies: Control which users or devices can access specific resources using Twingate policies.
  • Audit logging: Track all access attempts with detailed logs (who accessed what, when, from where).
  • Multi-device support: Access services from laptop, phone, tablet—anywhere with Twingate Client installed.

Architecture overview:

The Twingate architecture works as follows:

  1. Your laptop runs Twingate Client and connects to Twingate Network (cloud)
  2. Encrypted tunnel established between client and network using WireGuard
  3. Twingate Network routes traffic to appropriate Connector
  4. VPS runs Twingate Connector with encrypted tunnel to network
  5. Connector forwards traffic to localhost services on VPS
  6. Services never expose public ports—they bind to 127.0.0.1 only

For complete technical details about Twingate configuration, service registration, and security hardening, see skills/infrastructure-ops/runbooks/TWINGATE-ARCHITECTURE.md.

Optional .env configuration:

If using automation scripts for Twingate resource management, add this to .env:

# Twingate Zero-Trust Network Access
TWINGATE_SERVICE_KEY=YOUR_SERVICE_KEY_HERE

This enables programmatic resource creation via the Twingate API, but it's not required for basic manual setup.

Part 5: Verification

After setup, verify the framework is configured correctly.

Test Core Framework

Verify Python dependencies:

python3 -c "import anthropic, requests, dotenv; print('✅ Core dependencies installed')"

If this succeeds, core Python packages (Anthropic SDK, requests, python-dotenv) are installed correctly.

Verify environment loading:

python3 -c "from dotenv import load_dotenv; import os; load_dotenv(); print('✅ Environment loaded')"

If this succeeds, the .env file is loading properly (credentials can be read by scripts).

Check framework structure:

Verify all major directories exist:

# View skills
ls ~/.claude/skills/

# View server tools
ls ~/.claude/servers/

# View agents
ls ~/.claude/agents/

# View slash commands
ls ~/.claude/commands/

Expected output:

  • Skills directories: security-testing, code-review, blog-writer, cybersecurity-advisory, infrastructure-ops, personal-development, osint-research, qa-review, legal-compliance, and others
  • Server tool directories: kali-pentest, mobile-security, web3-security, reaper, metasploit, ghost-blog, openrouter, n8n, context7
  • Agent markdown files: director.md, security.md, writer.md, advisor.md, legal.md
  • Slash command files: pentest.md, job-analysis.md, osint.md, tech-docs.md, and others

Test Security Tools (After VPS/Docker Setup)

If you configured VPS or Docker, test a tool:

Test nmap wrapper (basic port scan):

cd ~/.claude
python3 servers/kali-pentest/nmap.py scan scanme.nmap.org

Expected behavior:

  • Tool executes on VPS container or local Docker
  • Full nmap output saved to servers/kali-pentest/output/{timestamp}-nmap.txt
  • Summary returned to console (open ports found, execution time, output file path)

Test nuclei wrapper (vulnerability template check):

python3 servers/kali-pentest/nuclei.py --help

Expected: Nuclei help text displayed, confirming the tool is accessible.

Test mobile security tools:

python3 servers/mobile-security/jadx.py --help

Expected: jadx (Java decompiler) help text displayed.

Test Blog Integration (If Configured)

If you configured Ghost credentials, test the connection:

cd ~/.claude/skills/blog-writer

# Test Ghost API connection
bun run tests/test-ghost.ts

# List posts (if connection succeeds)
bun run list-posts.ts

Expected output:

  • ✅ Ghost API connection successful from test script
  • List of existing posts from your Ghost blog (or empty list if new blog)

Test Multi-Model Access (If Configured)

If you configured OpenRouter, test the connection:

cd ~/.claude
python3 servers/openrouter/client.py --test

Expected: Successful connection to OpenRouter API, confirmation message with available credits or model list.

Part 6: Understanding the Framework

Now that installation is complete, understand the architecture and what you have available.

Directory Structure

The framework uses a flat architecture with everything in ~/.claude/:

Core files:

  • CLAUDE.md - Level 1 context (organization navigation)
  • README.md - Complete framework overview
  • INSTALLATION.md - This setup guide
  • QUICK-START.md - 5-minute getting started tutorial

Major directories:

  • skills/ - Level 2 context (complete methodology for each skill)
  • agents/ - Level 3 context (agent identity and skill orchestration)
  • servers/ - VPS Code API wrappers (Python scripts for remote tool execution)
  • commands/ - Slash commands (guided workflows)
  • tools/ - Executable utilities (PDF splitting, inventory generation, etc.)
  • docs/ - Architecture documentation

For complete directory inventory with file counts and descriptions, see docs/directory-structure.md.

Three-Level Context Architecture

The framework uses hierarchical context loading for token efficiency. Instead of loading one massive prompt with all skills and tools, it loads progressively based on what you need.

Level 1: CLAUDE.md (200-250 lines)

Organization-level navigation. This file points to skills and agents but doesn't contain their full methodology. Think of it as the table of contents.

Level 2: skills/*/SKILL.md (300 lines max per skill)

Complete methodology for each skill. These files contain tool references, workflows, checklists, and decision trees. Loaded only when the skill is needed.

Level 3: agents/*.md (150 lines max per agent)

Agent identity and context loading instructions. Agents load specific skills based on the request type.

Benefits of this architecture:

  • Progressive loading: Load only the context you need for the current task, reducing token consumption by 60-80% compared to monolithic prompts.
  • Token efficiency: No massive 50,000-token prompt loaded every time—only relevant skills are loaded.
  • Modular architecture: Add new skills without breaking existing system. Each skill is self-contained.

For complete architecture details and token efficiency measurements, see docs/hierarchical-context-loading.md.
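A toy sketch of the lazy-loading idea (the file layout below is created on the fly for illustration; the real loading is done by Claude Code's context mechanism, not a function like this):

```python
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())  # stands in for ~/.claude
(root / "skills" / "blog-writer").mkdir(parents=True)
(root / "CLAUDE.md").write_text("Level 1: navigation only\n")
(root / "skills" / "blog-writer" / "SKILL.md").write_text(
    "Level 2: full blog methodology\n")

loaded = {"CLAUDE.md": (root / "CLAUDE.md").read_text()}  # Level 1 always loads

def load_skill(name):
    """Pull in a Level 2 skill file only when a task actually needs it."""
    key = f"skills/{name}/SKILL.md"
    if key not in loaded:
        loaded[key] = (root / "skills" / name / "SKILL.md").read_text()
    return loaded[key]

load_skill("blog-writer")
print(sorted(loaded))
```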

VPS Code API Pattern

The framework's approach to remote tool execution differs from traditional Model Context Protocol (MCP) servers. Understanding this difference helps you appreciate the token efficiency and simplicity of the design.

Traditional MCP approach (NOT used by this framework):

MCP servers create a protocol layer between Claude and tools. When you run a tool, the request flows through: Claude → MCP Server → Protocol encoding → Docker exec → Parse full output → Protocol encoding → Return thousands of tokens to Claude. This approach consumes 5,000-10,000 tokens per tool execution because the full output is returned.

IA Framework approach (VPS Code API):

The framework uses simple Python wrappers that execute tools directly via SSH and docker exec, save the full output to a local file, and return only a summary to Claude. The flow is: Claude requests tool execution → Python wrapper on your machine → SSH to VPS → docker exec in container → Tool runs → Full output saved to servers/{container}/output/{timestamp}-{tool}.txt → Wrapper returns 150-token summary (exit code, key findings, output file path).

Token reduction: This approach cuts consumption from ~8,000 tokens to ~150 tokens per execution, a reduction of roughly 98%.

Additional benefits:

  • Evidence preservation: Full output saved in timestamped local files provides complete audit trail for security testing engagements.
  • Automatic caching: File-based output enables simple caching. Wrappers can check if recent output exists for identical commands and skip redundant execution.
  • Zero protocol overhead: Direct SSH + docker exec eliminates MCP protocol complexity. Debugging is straightforward: SSH to the VPS and check docker logs {container}.
  • No server management: No MCP server process to maintain, restart, or troubleshoot. Just Python scripts and standard SSH.
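
A wrapper following this pattern can be sketched as follows. The function name, summary format, and host handling here are illustrative assumptions, not the framework's actual implementation:

```python
import subprocess
import time
from pathlib import Path

# Illustrative sketch of a VPS Code API wrapper: run a tool remotely,
# save the full output to a timestamped local file, and return only a
# short summary for the model to read.
def run_remote(host: str, container: str, tool: str, args: list[str]) -> str:
    cmd = ["ssh", host, "docker", "exec", container, tool, *args]
    result = subprocess.run(cmd, capture_output=True, text=True)

    # Evidence preservation: the complete output lands on disk.
    out_dir = Path("servers") / container / "output"
    out_dir.mkdir(parents=True, exist_ok=True)
    out_file = out_dir / f"{int(time.time())}-{tool}.txt"
    out_file.write_text(result.stdout + result.stderr)

    # Only a compact summary goes back to the model.
    return f"exit={result.returncode} lines={len(result.stdout.splitlines())} output={out_file}"
```

A caching layer fits naturally here: before executing, the wrapper can check the output directory for a recent file produced by the same command and return its summary instead of re-running the tool.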

For complete technical details about wrapper implementation, output handling, and caching strategies, see servers/ARCHITECTURE.md.

Framework Discovery System

All framework components use YAML manifests for automatic discovery. There are no hardcoded registries to maintain: drop a new component in the correct location with a valid manifest, and the framework discovers it automatically.

Discoverable component types:

  1. Skills - skills/*/MANIFEST.yaml (methodology, tools, workflows)
  2. Agents - agents/*.yml (identity, skill orchestration)
  3. Server tools - servers/*/tools.yml (VPS Code API wrappers)
  4. Slash commands - commands/*.md (guided workflows)
  5. Hooks - hooks/*.ts (event automation)
  6. Tools - tools/*/manifest.yml (utilities)
  7. Workflows - Future enhancement for complex multi-step processes

Pattern: Follow template → Drop in location → System finds automatically

Templates available: skills/create-skill/templates/ contains templates for all component types.
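
The drop-in discovery pattern can be pictured as a filesystem scan for manifests. This is a hypothetical illustration of the idea, not the framework's actual loader:

```python
from pathlib import Path

# Sketch: find components by scanning for manifests at known locations.
# No registry file to edit; a valid manifest in the right place is enough.
PATTERNS = {
    "skills": "skills/*/MANIFEST.yaml",
    "agents": "agents/*.yml",
    "server_tools": "servers/*/tools.yml",
}

def discover(root: Path) -> dict[str, list[str]]:
    return {
        kind: sorted(p.relative_to(root).as_posix() for p in root.glob(pattern))
        for kind, pattern in PATTERNS.items()
    }
```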

For complete discovery system documentation and manifest schemas, see README.md Framework Discovery System section.

Specialized Agents and Skills

The framework provides specialized agents that orchestrate skills based on request type.

security agent:

Handles all security operations using these skills:

  • security-testing (penetration testing methodology)
  • code-review (secure code analysis)
  • architecture-review (threat modeling and design security)
  • cybersecurity-advisory (risk assessments and security guidance)
  • dependency-audit (supply chain security analysis)
  • secure-config (infrastructure hardening validation)
  • benchmark-generation (CIS/STIG compliance automation)
  • threat-intel (CVE research and threat intelligence gathering)

writer agent:

Handles all content creation using these skills:

  • blog-writer (blog post creation and Ghost CMS publishing)
  • technical-writing (documentation following Diátaxis framework)
  • report-generation (security assessment reports using Haiku model for efficiency)

advisor agent:

Handles personal development and research using these skills:

  • personal-development (career development, CliftonStrengths coaching, mentorship)
  • osint-research (open-source intelligence gathering with dual-source verification)
  • qa-review (quality assurance using Haiku and Grok models for peer review)

legal agent:

Handles compliance using this skill:

  • legal-compliance (legal information with mandatory citation verification—not legal advice)

director agent:

Orchestration layer that routes complex requests to appropriate specialist agents. The director analyzes user intent, determines which agent should handle the request, and delegates with proper context.

For detailed agent capabilities and skill mappings, see agent prompts in agents/ directory and skill methodology in skills/ directory.

Slash Commands

Slash commands provide consistent, guided workflows for common tasks. They follow a standardized pattern: analyze request → load skill → execute workflow → return results.
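
The standardized pattern can be pictured as a small routing table mapping each command to the skill context it loads. The mapping below is a hypothetical excerpt, not the framework's actual command registry:

```python
# Sketch of the slash-command pattern: analyze request -> load skill
# -> execute workflow -> return results. The mapping is illustrative.
COMMAND_SKILLS = {
    "/code-review": "code-review",
    "/vuln-scan": "security-testing",
    "/tech-docs": "technical-writing",
}

def route(command: str) -> str:
    """Return the skill context to load for a given slash command."""
    skill = COMMAND_SKILLS.get(command)
    if skill is None:
        raise ValueError(f"unknown command: {command}")
    return f"skills/{skill}/SKILL.md"
```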

Security Commands:

  • /pentest - Penetration testing with Director/Mentor/Demo modes
  • /vuln-scan - Automated vulnerability scanning
  • /segmentation-test - Network segmentation validation
  • /code-review - Security-focused code review
  • /arch-review - Architecture security review with threat modeling
  • /dependency-audit - Supply chain security analysis
  • /secure-config - Infrastructure hardening validation
  • /threat-intel - Threat intelligence gathering with CVE research
  • /benchmark-gen - CIS/STIG compliance automation
  • /risk-assessment - Formal risk assessment with compliance focus

Career/Personal Development Commands:

  • /job-analysis - Job posting analysis and application strategy
  • /resume-review - Resume optimization
  • /interview-prep - Interview preparation
  • Additional commands for CliftonStrengths coaching and mentorship

Research Commands:

  • /osint - Dual-source OSINT research with citations
  • /threat-intel - CVE research and MITRE ATT&CK mapping

Writing/Documentation Commands:

  • /tech-docs - Technical documentation using Diátaxis framework
  • /report-gen - Security assessment reports (PTES, OWASP, NIST standards)
  • /newsletter - Weekly digest generation and scheduling

Utility Commands:

  • /start - Interactive menu for all commands with pagination

For complete command registry and usage details, see commands/ directory.

Part 7: Next Steps

Learn the Framework

Read core documentation:

# Main framework overview
cat ~/.claude/README.md

# Quick start tutorial (5-minute walkthrough)
cat ~/.claude/QUICK-START.md

# Architecture deep dive (hierarchical context loading explained)
cat ~/.claude/docs/hierarchical-context-loading.md

# VPS deployment architecture (token efficiency details)
cat ~/.claude/servers/ARCHITECTURE.md

Explore skills:

# Security testing methodology
cat ~/.claude/skills/security-testing/SKILL.md

# Blog publishing system (Ghost CMS integration)
cat ~/.claude/skills/blog-writer/SKILL.md

# Code review process (secure code analysis)
cat ~/.claude/skills/code-review/SKILL.md

Review agent capabilities:

# Security agent (8 skills)
cat ~/.claude/agents/security.md

# Writer agent (3 skills)
cat ~/.claude/agents/writer.md

# Advisor agent (3 skills)
cat ~/.claude/agents/advisor.md

Deploy Infrastructure (Optional)

If you added VPS credentials:

Ask the framework to deploy your security tools:

"Set up my security testing environment on the VPS"

The framework will automatically install Docker, deploy containers, install tools, configure wrappers, and test connectivity. Total time: 10-15 minutes.

If using local Docker:

Ask the framework:

"Set up security tools locally using Docker"

The framework handles local deployment with the same automation, just using docker exec instead of SSH.

Try Real Workflows

Security testing (after VPS/Docker setup):

"Run vulnerability scan on example.com"
"Test this mobile app for security issues"
"Audit this smart contract for vulnerabilities"

Content creation (after Ghost setup):

"Write blog post about trending security topics"
"Create technical documentation for this feature"
"Generate penetration test report"

Career development:

"Analyze this job posting and create application strategy"
"Review my resume for this role"
"Prepare me for this interview"

Research:

"Research the latest CVE affecting this technology"
"Gather OSINT on this target (authorized scope)"

Customize Your Workflow

Add a custom slash command:

# Copy template
cp ~/.claude/commands/_TEMPLATE.md ~/.claude/commands/my-command.md

# Edit the markdown file with your workflow
# Save it—the framework automatically discovers new commands

Create a new skill:

# Use the create-skill tool
cd ~/.claude
python3 skills/create-skill/create.py my-skill-name

# Follow prompts to define skill purpose, tools, workflows
# Skill structure created automatically with MANIFEST.yaml

Add a hook (event automation):

# Copy template
cp ~/.claude/hooks/_TEMPLATE-hook.ts ~/.claude/hooks/my-hook.ts

# Implement event logic in TypeScript
# Hook loads automatically on framework startup

For all component templates, see skills/create-skill/templates/.

Common Workflows

Security audit workflow:

cd ~/.claude

# 1. Reconnaissance
python3 servers/kali-pentest/subfinder.py find example.com
python3 servers/kali-pentest/nmap.py scan example.com

# 2. Vulnerability scanning
python3 servers/kali-pentest/nuclei.py scan example.com
python3 servers/kali-pentest/nikto.py scan example.com

# 3. Web app testing
python3 servers/kali-pentest/sqlmap.py --url "http://example.com/page?id=1"

# 4. Generate report
# See skills/report-generation/SKILL.md for report workflow

Smart contract audit workflow:

cd ~/.claude

# 1. Static analysis
python3 servers/web3-security/slither.py analyze contract.sol
python3 servers/web3-security/semgrep.py analyze contract.sol

# 2. Symbolic execution
python3 servers/web3-security/mythril.py analyze contract.sol

# 3. Formal verification
python3 servers/web3-security/halmos.py verify contract.sol

# 4. Generate findings report
# See skills/code-review/SKILL.md for smart contract reporting

Blog publishing workflow:

cd ~/.claude/skills/blog-writer

# 1. Test Ghost connection
bun run tests/test-ghost.ts

# 2. Create draft (manually or with writer agent assistance)
# Edit draft markdown in your editor

# 3. Publish to Ghost
bun run publish-workflow.ts

# 4. Share on Discord (automated via workflow if webhook configured)

Part 8: Troubleshooting

Setup Script Issues

"python3: command not found"

Python 3.10+ is not installed or not in PATH.

  • Install Python from python.org/downloads
  • Verify installation: python3 --version or python --version
  • Ensure version is 3.10 or higher

"uv: command not found"

The uv package manager is not installed.

  • Install uv: See prerequisites section above
  • Alternatively, the setup script will fall back to pip automatically if uv is missing
  • You can also run setup with --skip-python and install dependencies manually using pip

"Permission denied executing setup script" (Linux/Mac)

The script doesn't have execute permissions.

chmod +x ~/.claude/setup.sh
~/.claude/setup.sh

"Permission denied executing setup script" (Windows)

PowerShell execution policy is blocking the script.

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
.\setup.ps1

VPS Connection Issues

"Permission denied (publickey)"

SSH key authentication is not configured correctly on the VPS.

  • Check SSH key path in .env: SSH_PRIV=~/.ssh/vps_key should point to your private key
  • Verify public key is on VPS: ssh-copy-id -i ~/.ssh/vps_key.pub user@ip
  • Test manually: ssh -i ~/.ssh/vps_key user@ip
  • If manual SSH succeeds, the framework should work. If manual SSH fails, fix SSH key setup first.

"Connection refused"

The VPS is not reachable or SSH port is blocked.

  • Verify VPS_HOST IP address is correct in .env
  • Check VPS firewall allows SSH on port 22
  • Verify VPS is running: ping $VPS_HOST
  • Check VPS provider's firewall rules (some providers block inbound traffic by default)

Docker Issues

"docker: command not found" (VPS)

Docker is not installed on the VPS. Don't install it manually—let the framework handle it.

Ask the framework:

"Install Docker on my VPS"

The framework runs the appropriate installation commands for your VPS operating system.

"Cannot connect to Docker daemon" (local)

Docker Desktop is not running on your machine.

  • Start Docker Desktop
  • Verify: docker ps should return container list (or empty list if no containers running)
  • If Docker Desktop won't start, check system requirements and logs

"Container not found" errors

Containers have not been deployed yet. Don't deploy manually—let the framework handle it.

Ask the framework:

"Deploy kali-pentest container"

The framework handles container creation, configuration, and tool installation automatically.

Ghost API Issues

"Invalid API key"

The Ghost Admin API key format is incorrect or the key is invalid.

  • Verify GHOST_ADMIN_API_KEY format: 24-char-id:64-char-secret (must include colon separator)
  • Check you copied the ADMIN key, not the CONTENT key (they're different)
  • Regenerate the integration in Ghost Admin if needed (Settings → Integrations → Your Integration → Regenerate)
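
A quick local check can catch a malformed key before any API call. This sketch assumes the id and secret are lowercase hexadecimal, which matches the 24/64-character shape described above:

```python
import re

# Sketch: sanity-check the Ghost Admin API key shape before use.
# Expected form: <24 hex chars>:<64 hex chars>, colon-separated.
ADMIN_KEY_RE = re.compile(r"^[0-9a-f]{24}:[0-9a-f]{64}$")

def looks_like_admin_key(key: str) -> bool:
    return bool(ADMIN_KEY_RE.fullmatch(key))
```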

"API URL not found"

The Ghost API URL format is incorrect.

  • Ensure GHOST_API_URL has NO trailing slash
  • Correct format: https://yoursite.ghost.io (not https://yoursite.ghost.io/)
  • For self-hosted Ghost, use your domain without /ghost/ path

OpenRouter Issues

"Authentication failed"

The OpenRouter API key is missing or invalid.

  • Verify OPENROUTER_API_KEY in .env is set correctly
  • Check you copied the full key (starts with sk-or-)
  • Verify your OpenRouter account has credits: openrouter.ai/credits
  • Regenerate the key if needed

Part 9: Getting Help

Documentation:

  • README.md - Complete framework overview with component registry
  • INSTALLATION.md - This setup guide
  • QUICK-START.md - 5-minute getting-started tutorial
  • skills/*/SKILL.md - Skill-specific methodology and tool documentation
  • servers/*/README.md - Server tool documentation and usage examples
  • docs/ - Architecture documentation and design decisions

GitHub:

Deep-Dive Content:

For comprehensive walkthroughs of specific topics, see the blog:

Summary

You've installed the Intelligence Adjacent framework and configured optional components based on your needs.

What you have:

  • Core framework with skills and specialized agents
  • Slash commands for guided workflows
  • VPS Code API wrappers (if configured)
  • Security tools deployed to VPS or local Docker (if configured)
  • Blog publishing system connected to Ghost CMS (if configured)
  • Multi-model AI access via OpenRouter (if configured)

Key concepts:

  • Intelligence Adjacent: The framework augments human intelligence rather than attempting to replace it. It builds capability through orchestration and scaffolding.
  • Skills = Projects: Each skill is a self-contained project with methodology, tools, and workflows. Skills are discoverable via YAML manifests—no hardcoded registries.
  • VPS Code API: Remote tool execution with a 95-98% token reduction compared to traditional MCP servers. Full output preserved in local files, summaries returned to Claude.
  • Three-level context: Progressive context loading reduces token consumption by 60-80%. Only load the skills you need for the current task.
  • Framework discovery: All components use YAML manifests for automatic discovery. Drop a new skill in skills/ with a valid manifest, and the system discovers it automatically.

Your next steps:

  1. Read README.md for complete framework overview and component registry
  2. Try a real workflow (security test, blog post, research project)
  3. Customize with your own skills and commands using templates
  4. Deploy VPS infrastructure when ready for production security testing

Remember: You handle credentials and infrastructure decisions. The framework handles automation, deployment, and orchestration.

That's the Intelligence Adjacent approach—building systems that augment human intelligence, not replace it.


Intelligence Adjacent Framework v3.4

Building capability, not dependency.