Skills System Deep Dive: How Intelligence Adjacent Organizes Knowledge

The Intelligence Adjacent framework organizes specialized skills as complete projects with hierarchical context loading. Each skill has its own context, tools, workflows, and documentation. This post breaks down how the skills system works, where files belong, and how to extend the system with new frameworks.

The Core Problem: Context Bloat

Most AI agent systems dump everything into one massive configuration file. By the time you add documentation, tool lists, methodologies, and examples, you're loading thousands of lines of context every session whether you need it or not.

That's wasteful and slow.

The framework solves this through hierarchical context loading. The system loads only what it needs, when it needs it.

Three-Layer Architecture

The framework uses a clean separation:

Layer 1: Organization (CLAUDE.md)

  • Navigation layer only
  • 250 lines max
  • Points to skills and agents
  • Global preferences

Layer 2: Skills (SKILL.md files)

  • Complete skill context
  • 300 lines max per skill
  • Workflows, tools, templates
  • Loaded on-demand

Layer 3: Agents (agent prompts)

  • Identity and context loading
  • 150 lines max
  • References skills

When you invoke an agent, the framework loads the organization layer, then pulls in only the skill context needed. For simple tasks, the system loads 200 lines of context. For complex work, it loads 200 + 300 = 500 lines. That's a 37.5% reduction compared to loading everything.
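The loading arithmetic above can be sketched as a small helper. This is illustrative only, assuming the per-layer line budgets described in this post; the framework's actual loader isn't shown here.

```python
# Sketch of hierarchical context loading. Layer names and budgets
# mirror the description above; the helper itself is hypothetical.
LAYER_BUDGETS = {
    "organization": 200,   # CLAUDE.md navigation layer
    "skill": 300,          # one SKILL.md, loaded on demand
    "agent": 150,          # agent prompt
}

def context_cost(layers):
    """Total lines of context loaded for a given set of layers."""
    return sum(LAYER_BUDGETS[name] for name in layers)

simple = context_cost(["organization"])                 # 200 lines
complex_task = context_cost(["organization", "skill"])  # 500 lines
```

The point of the sketch: cost grows only with the layers a task actually touches, never with the total size of the skills library.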

Skills Across Five Domains

The framework covers five domains:

Security Skills:

  • security-testing - Pentesting, vuln scanning, network segmentation
  • cybersecurity-advisory - Risk assessment, compliance mapping
  • code-review - Security-focused code analysis
  • secure-config - Infrastructure hardening (CIS, STIG)
  • benchmark-generation - Compliance automation scripts
  • architecture-review - Threat modeling, secure design
  • dependency-audit - Supply chain security
  • threat-intel - MITRE ATT&CK, CVE research

Writing Skills:

  • blog-writer - Powers this blog (trending news, project docs)
  • technical-writing - Guides, tutorials, documentation
  • report-generation - Security reports, assessments

Advisory Skills:

  • personal-development - Career, CliftonStrengths coaching, mentorship
  • osint-research - Multi-source research with citations
  • qa-review - Quality validation, consistency checking

Legal Skill:

  • legal-compliance - Legal review with mandatory citations

Infrastructure Skills:

  • infrastructure-ops - VPS management, Docker deployment
  • gitingest-repo - Repository analysis
  • blog-workflow - Ghost CMS automation
  • create-skill - Template-driven skill creation

Each skill lives in skills/[skill-name]/ with a complete SKILL.md file.

File Placement: The Decision Matrix

This is critical. Where files go determines whether they survive long-term.

The Permanent vs Temporary Test

Before creating any file, the system evaluates: "If this file gets deleted tomorrow, would the project be damaged?"

If yes → Permanent document → Use docs/, skills/, or project location
If no → Temporary work → Use scratchpad/
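The same test can be written as a tiny routing function. This is a simplified sketch (the function name is hypothetical, and it only models the docs/ branch of the "permanent" rule; real permanent files may also land in skills/ or a project location):

```python
# Minimal sketch of the permanent-vs-temporary routing test above.
def route_file(damages_project_if_deleted: bool, category: str = "general") -> str:
    """Return the target directory for a new file."""
    if not damages_project_if_deleted:
        return "scratchpad/"        # temporary work, may be cleaned up
    return f"docs/{category}/"      # permanent documentation survives

print(route_file(False))                      # scratchpad/
print(route_file(True, "public-migration"))   # docs/public-migration/
```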

Location Rules

System Infrastructure Analysis → docs/[category]/

  • Framework architecture decisions
  • Migration planning
  • System audits
  • Standards and best practices

Skill Methodology Updates → skills/[skill]/SKILL.md

  • Added as new section in existing SKILL.md
  • Updates to workflows or methodologies
  • No separate methodology files

Session Decisions → [project]/SESSION-STATE.md

  • Multi-session project tracking
  • Current phase and blockers
  • Resume context for next session

Engagement Findings → professional/engagements/[type]/[id]/

  • Client scope documentation
  • Testing results
  • Reports and findings
  • Note: .gitignore excludes professional/ by default - you can modify this for private repo version control

Blog Content → personal/blog/[drafts|published|docs]/

  • Drafts, published posts, planning docs
  • Never in scratchpad for blog files
  • Note: .gitignore excludes personal/ by default - you can modify this for private repo version control

Resource Location Strategy

When you drop frameworks or reference materials into the system, the framework organizes them using this decision matrix:

Framework-provided resources → skills/[skill-name]/reference/

  • Auto-loaded by skill
  • Version-controlled
  • Examples: CIS Benchmarks, OWASP guides, MITRE ATT&CK
  • These are community/open-source frameworks everyone can access

Personal learning materials → personal/resources/

  • NOT version-controlled (excluded in .gitignore by default)
  • Your own notes, courses, private study materials
  • Surgical blocking - only exclude what must stay private

Professional/licensed materials → professional/resources/

  • NOT version-controlled (excluded in .gitignore by default)
  • Licensed content you can't redistribute
  • Client-specific materials under NDA
  • Surgical blocking - only exclude what must stay private

The decision tree:

  1. Is this a community framework anyone can access? → skills/[skill]/reference/
  2. Is this my personal learning material? → personal/resources/
  3. Is this licensed or client-confidential? → professional/resources/

Examples:

  • CIS Benchmark PDF → skills/secure-config/reference/cis-benchmarks/
  • Your OSCP study notes → personal/resources/certifications/
  • Client-provided architecture diagrams → professional/resources/[client-code]/

The framework loads skills/*/reference/ automatically. Your personal and professional resources stay private but organized.
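The three-way decision tree above can be sketched as a routing helper. The function and its `kind` labels are assumptions for illustration; the framework's real placement logic isn't shown:

```python
# Sketch of the resource placement decision tree (hypothetical helper).
def place_resource(kind: str, skill: str = "", subdir: str = "") -> str:
    if kind == "community":     # open frameworks anyone can access
        return f"skills/{skill}/reference/"
    if kind == "personal":      # private study materials, not version-controlled
        return f"personal/resources/{subdir}"
    if kind == "licensed":      # NDA or licensed content, not version-controlled
        return f"professional/resources/{subdir}"
    raise ValueError(f"unknown resource kind: {kind}")

print(place_resource("community", skill="secure-config"))
# skills/secure-config/reference/
```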

Adding Frameworks to Skills: Autonomous Processing

When you drop a large PDF framework into the system, here's what happens automatically:

Step 1: Framework Receipt

You download the framework documentation and provide it to the system. For large PDFs, the framework splits them into chunks:

What the system runs:

python tools/pdf/splitter.py --input cis-benchmark.pdf --output-dir skills/secure-config/reference/cis-benchmarks/ --pages-per-file 10

Large PDFs get split into 10-page chunks. Why? Large PDFs are slow to load and use excessive tokens. Chunked files load faster and let the system pull exactly what it needs.
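The chunking arithmetic behind that command can be sketched as follows. This does not reproduce the internals of tools/pdf/splitter.py; it only computes the page ranges each 10-page chunk would cover:

```python
# Sketch of the 10-page chunking logic (illustrative, not the real splitter).
def chunk_ranges(total_pages: int, pages_per_file: int = 10):
    """Yield (start, end) page ranges, 1-indexed and inclusive."""
    for start in range(1, total_pages + 1, pages_per_file):
        yield (start, min(start + pages_per_file - 1, total_pages))

print(list(chunk_ranges(25)))
# [(1, 10), (11, 20), (21, 25)]
```

A 25-page benchmark becomes three files; the last chunk simply absorbs the remainder.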

Step 2: File Organization

The framework creates this structure automatically:

skills/secure-config/reference/
└── cis-benchmarks/
    ├── section-01-initial-setup.pdf
    ├── section-02-services.pdf
    ├── section-03-network.pdf
    └── metadata.json

The skill's SKILL.md file references these in the Tools or References section.

Step 3: Inventory Regeneration

The framework maintains an auto-generated inventory of all reference materials:

What the system runs:

python tools/generate-materials-inventory.py

The system scans all skills/*/reference/ directories and creates a searchable inventory in about 5 seconds. The inventory tracks what frameworks exist and where to find them.
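A pass like that can be sketched in a few lines of stdlib Python. The real generate-materials-inventory.py may record richer metadata; this just shows the scan-and-index idea:

```python
# Sketch of an inventory pass over skills/*/reference/ (illustrative).
import json
from pathlib import Path

def build_inventory(root: str) -> dict:
    """Map each skill name to the reference files it ships."""
    inventory = {}
    for ref_dir in sorted(Path(root).glob("skills/*/reference")):
        skill = ref_dir.parent.name
        inventory[skill] = sorted(
            str(p.relative_to(ref_dir)) for p in ref_dir.rglob("*") if p.is_file()
        )
    return inventory

# A searchable JSON artifact could then be written out, e.g.:
# Path("materials-inventory.json").write_text(json.dumps(build_inventory("."), indent=2))
```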

On-Demand Loading

The framework doesn't load entire frameworks at startup. It loads relevant sections as needed.

Example: The secure-config skill has CIS Benchmarks for multiple operating systems. When hardening Ubuntu, only the Ubuntu benchmark sections load. When hardening Windows, only Windows sections load.

This keeps context size manageable.

Template-Driven Creation

Every skill uses the same structure. The create-skill skill provides templates for all component types:

Skill Components:

  • SKILL.md (main context file)
  • MANIFEST.yaml (discovery metadata)
  • methodology.md (detailed procedures)
  • tools.md (tool setup and usage)
  • templates/ (reusable templates)
  • reference/ (frameworks and standards)

Supporting Components:

  • Agent manifests (agent identity)
  • Command manifests (slash commands)
  • Hook manifests (automation triggers)
  • Server tool manifests (API wrappers)
  • Workflow manifests (multi-step procedures)

Templates live in skills/create-skill/templates/. When you need a new skill, the system copies the template, fills in the sections, drops it in the right location. The framework discovers it automatically through YAML manifests.

YAML Manifest Discovery

The framework finds components through YAML frontmatter, not hardcoded registries.

Every SKILL.md file starts with:

---
name: security-testing
description: Professional penetration testing and vulnerability assessment
agent: security
modes:
  - penetration-testing
  - vulnerability-scanning
  - network-segmentation
tools:
  native:
    - WebSearch
    - Grep
    - Bash
  servers:
    - servers.nmap.*
    - servers.nuclei.*
---

The framework scans for --- delimited YAML, parses the metadata, and builds the registry automatically. No manual updates to central files.

This means zero-maintenance scaling. You create a new skill, add YAML frontmatter, and the system finds it.
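The discovery scan can be sketched as a frontmatter reader. A real implementation would use a YAML parser such as PyYAML; this hand-parses only flat top-level keys to show the idea, and the function name is an assumption:

```python
# Minimal frontmatter scanner sketch (flat keys only; real code would
# parse the full YAML, including nested lists like modes: and tools:).
def read_frontmatter(text: str) -> dict:
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}                   # no manifest, skip this file
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break                   # end of the frontmatter block
        if ":" in line and not line.startswith((" ", "\t")):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

skill_md = "---\nname: security-testing\ndescription: Pentesting\n---\n# Body"
print(read_frontmatter(skill_md)["name"])   # security-testing
```

Running this over every skills/*/SKILL.md is all the registry-building requires; no central file ever needs editing.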

Progressive Context Loading

Skills use a 300-line budget. That's not arbitrary - it's based on memory best practices for agent context.

When a skill needs more detail, the framework splits across files:

  • SKILL.md - High-level overview, workflows, tool lists (300 lines)
  • methodology.md - Detailed step-by-step procedures
  • tools.md - Complete tool setup and configuration
  • examples/ - Working examples and use cases

The system loads SKILL.md first, then pulls methodology or tool details as needed.

STATUS.md Tracking Protocol

The framework maintains a STATUS.md file for every skill to track progress:

## Status: In Development

**Last Updated:** 2025-11-28

## Completed Features
- [x] SKILL.md structure with 3-tier content strategy
- [x] QA review integration (OpenRouter API)
- [x] Tier classifier tool (automatic visibility assignment)

## In Progress
- [ ] Newsletter digest automation refinement
- [ ] Hero image generation workflow improvements

## Remaining Work
- Multi-model research integration
- Twitter/X integration for auto-posting

## Known Issues
| Issue | Severity | Workaround |
|-------|----------|------------|
| Ghost Pages typescript workflow needs documentation | Minor | Use existing workflow, add docs |

## Session History
- 2025-11-24: Added QA review and tier assignment automation
- 2025-11-19: Completed Ghost API integration
- 2025-11-10: Initial skill creation

The framework updates STATUS.md when:

  • Completing features from "In Progress"
  • Discovering new work for the skill
  • Skill status changes (Development → Production)
  • Resolving known issues
  • Making architecture changes
  • Sessions > 3 hours with measurable progress

Skill Categories Explained

Security Skills

Multiple skills cover offensive security operations. They share common patterns:

  • Authorization-first approach (assume proper scoping)
  • Methodology-driven (PTES, OWASP, MITRE ATT&CK)
  • Three engagement modes (Director, Mentor, Demo)
  • Comprehensive reporting standards

The security-testing skill consolidates pentesting, vuln scanning, and segmentation testing. Originally separate skills, they share 90% of their tooling and methodology.

Writing Skills

Multiple skills handle different content types:

  • blog-writer - Powers this blog (Ghost CMS integration)
  • technical-writing - Documentation (Diataxis framework)
  • report-generation - Security reports (PTES standards)

Each has distinct workflows but shares writing principles (clear, actionable, well-structured).

Advisory Skills

Multiple skills provide research and guidance:

  • personal-development - Career coaching, CliftonStrengths, mentorship
  • osint-research - Multi-source research with Grok integration
  • qa-review - Dual-model peer review (Haiku + Grok)

These skills emphasize thorough research, citation verification, and actionable recommendations.

Legal Skill

One skill with mandatory citation requirements:

  • legal-compliance - Legal information (NOT legal advice) with citation verification

This skill has strict guardrails - every claim requires a citation to authoritative legal sources.

Infrastructure Skills

Multiple skills manage the framework itself:

  • infrastructure-ops - VPS management, Docker deployment, Twingate connectivity
  • gitingest-repo - Repository analysis and documentation generation
  • blog-workflow - Ghost CMS automation for weekly digests
  • create-skill - Template-driven skill creation

These are meta-skills - they build and maintain the system that runs everything else.

Real-World Example: Security-Testing Skill

The security-testing skill shows the full pattern:

Location: skills/security-testing/

Structure:

skills/security-testing/
├── SKILL.md                    (300 lines - main context)
├── MANIFEST.yaml               (discovery metadata)
├── STATUS.md                   (progress tracking)
├── PREREQUISITES.md            (requirements and setup)
├── workflows/
│   ├── director-mode.md        (production workflow)
│   ├── mentor-mode.md          (learning workflow)
│   └── demo-mode.md            (testing workflow)
├── templates/
│   ├── scope-template.md       (engagement scoping)
│   ├── report-template.md      (findings format)
│   └── retest-template.md      (verification tracking)
└── reference/
    ├── ptes-standard/          (penetration testing execution standard)
    ├── owasp-testing-guide/    (web app testing methodology)
    └── mitre-attack/           (tactics and techniques)

Complete Autonomous Pentest Workflow

When you request "Pentest https://example.com", here's what the framework executes autonomously without you touching the terminal:

What You Provide:

  • Scope document (in-scope assets, restrictions, out-of-scope items)
  • Authorization (written permission from asset owner)

What The System Executes (Complete End-to-End):

Phase 1: Scope Analysis

  1. Scope Receipt & Parsing - The system reads your scope document, extracts in-scope assets, restrictions, and out-of-scope items
  2. API Intelligence Gathering - For HackerOne/BugCrowd programs, the system fetches program data via API (credentials loaded from .env)
  3. Scope Validation - The system verifies authorization clarity, checks for ambiguities, asks clarifying questions if needed
  4. Engagement Initialization - The system creates professional/engagements/pentest/[client]-[YYYY-MM]/ directory structure with subdirectories:
    • scope/ - Authorization documents, rules of engagement
    • reconnaissance/ - OSINT findings, enumeration results
    • scanning/ - Tool outputs, vulnerability scan results
    • exploitation/ - Proof-of-concept exploits, evidence
    • reports/ - Findings, final deliverables

Phase 2: Environment Setup

  1. VPS Environment Verification - The system SSHs to the VPS and tests connectivity to security tools in Docker containers via MCP tools:
    • Network scanners (nmap, masscan, rustscan)
    • Web scanners (nuclei, httpx, ffuf, gobuster)
    • Vulnerability scanners (sqlmap, wpscan, nikto)
    • Exploitation frameworks (Metasploit, BurpSuite)
    • Post-exploitation tools (Covenant, Sliver, Mimikatz)

Phase 3: Planning

  1. Test Plan Creation - The system generates a comprehensive test plan using the most appropriate methodology for the scope and requirements (OWASP, PTES, NIST, OSSTMM, etc.):
    • OSINT Research - Understand target's tech stack, industry, attack surface, known vulnerabilities
    • Framework References - Load relevant methodologies from skills/security-testing/reference/ (OWASP Testing Guide, PTES, NIST, OSSTMM from available frameworks)
    • Professional Resources - Reference pentesting books and licensed materials from professional/resources/
    • Methodology reference (OWASP Top 10, MITRE ATT&CK)
    • Test Plan Output - Asset inventory, test objectives, attack surface mapping, timeline, methodology selection based on scope

Phase 4: Reconnaissance

  1. Reconnaissance (Passive) - The system gathers OSINT without touching target systems:
    • WHOIS lookups for domain ownership
    • DNS enumeration (subdomains, NS records, MX records)
    • Public records (breach databases, GitHub repos, Google dorking)
    • Certificate transparency logs
    • Shodan/Censys queries
    • Social media enumeration
  2. Reconnaissance (Active) - The system actively enumerates target with permission:
    • Subdomain discovery (subfinder, amass, dnsenum)
    • Live host detection (httpx, httprobe)
    • Port scanning (nmap full TCP/UDP scan, service detection)
    • Web technology identification (Wappalyzer, WhatWeb)
    • Virtual host discovery

Phase 5: Scanning

  1. Vulnerability Scanning - The system runs automated tools against in-scope targets:
    • Nuclei templates (thousands of CVE checks)
    • SQLMap for injection testing
    • WPScan for WordPress vulnerabilities
    • Nikto for web server misconfigurations
    • Directory brute-forcing (ffuf, gobuster, dirsearch)
    • SSL/TLS testing (testssl.sh)

Phase 6: Manual Testing

  1. Manual Testing - The system follows test plan methodically, adhering strictly to in-scope rules:
    • Business logic flaws (payment bypass, workflow manipulation)
    • Authentication bypass (credential stuffing, session fixation)
    • Authorization issues (IDOR, privilege escalation)
    • Input validation (XSS, SQLi, command injection)
    • API security (broken authentication, excessive data exposure)
    • Session management (token analysis, cookie security)

Phase 7: Exploitation

  1. Exploitation - If authorized and vulnerabilities confirmed, the system attempts exploitation:
    • Develop proof-of-concept exploits
    • Verify vulnerability severity
    • Document exploitation steps with screenshots
    • Maintain stealth (if required by scope)
  2. Post-Exploitation - If authorized and successful, the system performs:
    • Privilege escalation (vertical/horizontal)
    • Lateral movement (network pivoting)
    • Data exfiltration demonstration (non-destructive)
    • Persistence establishment (if authorized)
    • Impact documentation (CVSS scoring)

Phase 8: Session Management

  1. Session State Tracking - Throughout engagement, the system maintains SESSION-STATE.md:
    • Assets tested and current status
    • Vulnerabilities discovered (severity, location, evidence)
    • Next steps for session resumption
    • Time tracking and phase completion
    • This allows multi-day engagements to resume seamlessly

Phase 9: Documentation

  1. Vulnerability Documentation - For each finding, the system documents:
    • Vulnerability title and description
    • Affected systems/endpoints
    • Steps to reproduce
    • Proof-of-concept screenshots/video
    • CVSS severity scoring
    • Impact assessment
    • Remediation recommendations
    • References (CVE, CWE, OWASP)

Phase 10: Reporting

  1. Report Generation - The system compiles a professional PTES-compliant report:
    • Executive summary (non-technical overview, business risk)
    • Scope and methodology
    • Findings summary (critical, high, medium, low)
    • Technical findings (detailed vulnerability descriptions)
    • Remediation roadmap (prioritized fixes)
    • Appendices (tool outputs, technical details)
  2. Report Delivery - The system saves complete deliverable package:
    • professional/engagements/pentest/[client]-[YYYY-MM]/reports/final-report-[date].pdf
    • professional/engagements/pentest/[client]-[YYYY-MM]/reports/executive-summary.pdf
    • professional/engagements/pentest/[client]-[YYYY-MM]/reports/technical-appendix.pdf
    • Evidence archive (screenshots, proof-of-concepts, logs)
  3. Session Cleanup - The system verifies engagement integrity:
    • No artifacts left on target systems
    • All activities documented in engagement log
    • Tools disconnected from target environment
    • Final STATUS.md update for skill improvements

Key Operational Details

Version Control & Privacy:

  • .gitignore excludes professional/ by default (customer data stays private)
  • You can modify .gitignore if version control desired for private repository
  • All engagement data organized in standardized directory structure

Multi-Session Support:

  • SESSION-STATE.md allows resuming after hours or days
  • The framework maintains complete context of what's tested, what's pending
  • You provide "resume pentest" → The system loads state, continues from last checkpoint

Three Engagement Modes:

  • Director (production) - Complete workflow executes autonomously
  • Mentor (learning) - Each step explained, teaching methodology
  • Demo (rapid testing) - Demonstrate techniques without full engagement

HackerOne/BugCrowd Integration:

  • The system fetches program data via API: scope, bounty tiers, accepted vulnerabilities
  • High-value targets prioritized based on bounty ranges
  • Findings formatted in platform-specific templates for submission

Total User Involvement:

  • You provide: Scope document + authorization
  • The system executes: Complete workflow autonomously
  • You review: Final report and approve submission

You never SSH to the VPS. You never run nmap manually. You never write the report. The framework handles complete execution from scope receipt to final deliverable.

How the framework works with the skill:

  1. User requests pentest → Director agent routes to security agent
  2. System loads CLAUDE.md (navigation layer)
  3. System loads skills/security-testing/SKILL.md (skill context)
  4. SKILL.md references methodology in workflows/director-mode.md
  5. System pulls PTES framework from reference/ptes-standard/ as needed
  6. System executes complete workflow
  7. System saves findings to professional/engagements/pentest/[client]-[YYYY-MM]/
  8. System updates STATUS.md if new features developed during engagement

Total context loaded: 200 (CLAUDE.md) + 300 (SKILL.md) + sections from methodology = efficient targeted loading.

Common Mistakes and Fixes

Mistake 1: Permanent Docs in Scratchpad

Wrong:

scratchpad/PUBLIC-REPO-GAP-ANALYSIS-2025-11-28.md

Right:

docs/public-migration/gap-analysis.md

Why: Gap analysis is permanent project planning, not temporary work. Scratchpad files may get deleted during cleanup. The framework routes permanent documentation to docs/ automatically.

Mistake 2: Skill Methodology in Root CLAUDE.md

Wrong: Adding complete pentest methodology to root CLAUDE.md

Right: Methodology lives in skills/security-testing/SKILL.md, root CLAUDE.md just points to it

Why: Root file is navigation only. Skill details belong in skill files. The framework enforces this through hierarchical loading.

Mistake 3: Hardcoded Registries

Wrong: Maintaining a central registry file that lists all skills

Right: Skills self-register through YAML manifest frontmatter

Why: YAML discovery means zero-maintenance scaling. You add a skill, the system discovers it automatically.

Mistake 4: Resource Misplacement

Wrong: Putting your personal OSCP notes in skills/security-testing/reference/

Right: Personal notes go in personal/resources/certifications/ (not version-controlled)

Why: Skills reference directory is for community frameworks everyone can access. Your private study materials stay private. The framework respects the .gitignore boundaries.

Extending the Framework: Creating a New Skill

Want to add a new skill? Here's what the framework does autonomously:

Step 1: Template Copy

When you request a new skill, the system runs:

cp -r skills/create-skill/templates/SKILL-template.md skills/your-skill/SKILL.md

Step 2: YAML Frontmatter Generation

The system generates the manifest based on your requirements:

---
name: your-skill
description: What this skill does
agent: which-agent-uses-it
tools:
  native:
    - Tool1
    - Tool2
  servers:
    - servers.tool.*
---

Step 3: Context Writing

The system fills in the sections:

  • Overview (what and why)
  • Capabilities (what it can do)
  • Methodology (how it works)
  • Tools (what it uses)
  • Templates (reusable files)

Content stays under 300 lines. If more detail needed, the system splits into methodology.md and tools.md.
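That budget check is simple enough to automate. A sketch of a check a linter could run (the helper is hypothetical; the framework's actual enforcement isn't shown):

```python
# Sketch of a 300-line budget check for SKILL.md (illustrative).
SKILL_LINE_BUDGET = 300

def over_budget(skill_md_text: str, budget: int = SKILL_LINE_BUDGET) -> bool:
    """True when the content should be split into methodology.md / tools.md."""
    return len(skill_md_text.splitlines()) > budget

print(over_budget("line\n" * 250))   # False
print(over_budget("line\n" * 301))   # True
```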

Step 4: Supporting Files Creation

The system generates the complete structure:

skills/your-skill/
├── SKILL.md
├── MANIFEST.yaml
├── STATUS.md
├── methodology.md
├── tools.md
├── templates/
└── reference/

Step 5: Discovery Testing

The framework automatically finds your skill through YAML parsing. Testing happens by invoking the agent that uses it.

Step 6: Framework Addition (If Needed)

If your skill uses reference frameworks:

  1. You provide the framework
  2. System splits large PDFs (tools/pdf/splitter.py)
  3. System places files in skills/your-skill/reference/
  4. System runs python tools/generate-materials-inventory.py
  5. System references in SKILL.md Tools section

Done. Your skill is now part of the framework.

The Vision: Solve Once, Reuse Forever

This is the core IA principle. When you solve a problem, the framework captures it as a skill. That skill becomes reusable infrastructure.

The skills library scales as needed. New skills can be added without breaking existing functionality.

Each skill is a permanent solution to a category of problems. Once the skill exists, you never solve that problem manually again. You invoke the skill, and the system executes it.

That's the Intelligence Adjacent approach - build systems that augment human intelligence through proper orchestration and scaffolding.

The skills system is the scaffolding.

What you do: Provide requirements ("pentest this web app", "write blog post about X", "analyze this job posting")

What the framework does autonomously:

  • Route to appropriate agent
  • Load only relevant skills
  • Execute complete workflows
  • Generate professional deliverables
  • Save outputs to correct locations
  • Update tracking files
  • Return results for your review

You rarely touch the terminal. You rarely run commands manually. The framework handles the execution.


Ready to explore the agent layer?

The next post covers the agent architecture and how the framework routes requests intelligently.

