No Hardcoded Counts
Every framework README says "43 tools" and becomes wrong the moment you add tool 44. Here's a controversial but practical solution: ban hardcoded counts of things that change.
This is a common problem in technical documentation. You document your feature count, add a feature, and now your documentation is stale. The maintenance trap compounds: you have to find every instance of "43" and update it to "44"—and you'll miss some.
I implemented a controversial solution: ban hardcoded counts of dynamic things—tools, skills, features—from documentation. The hypothesis: documentation that doesn't assert specific counts can't become wrong when those counts change.
This isn't about avoiding all numbers—it's about avoiding counts that will change. The approach trades precision for durability. I don't have controlled studies proving it works better—this is a practical proposal, not research. But the logic is straightforward: a statement that can't become false can't mislead.
The Magic Number Problem
Programmers have known about this issue since the 1960s. The COBOL, FORTRAN, and PL/1 manuals called them "magic numbers" — numeric literals in source code with special meaning that isn't clear from context.
As Refactoring Guru explains, magic numbers create problems:
Yet more difficulties arise when you need to change this magic number. Find and replace won't work for this: the same number may be used for different purposes in different places.
The same number 43 might appear as a tool count in one place and an array size in another. Find-and-replace is dangerous.
The code smell literature describes why this matters:
Over time, as requirements change, magic numbers become ticking time bombs. If a constant value needs adjustment, you risk introducing bugs.
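To make the find-and-replace hazard concrete, here is a minimal Python sketch; the function names and values are invented for illustration:

```python
# Before: the same literal means two unrelated things, so a blind
# find-and-replace of 43 -> 44 would also resize the buffer pool.
def allocate_buffers():
    return [None] * 43  # 43 = buffer pool size

def describe_framework():
    return "This framework ships 43 tools"  # 43 = tool count

# After: named constants separate the two meanings, so the tool count
# can change without touching the buffer pool size.
BUFFER_POOL_SIZE = 43
TOOL_COUNT = 43

def allocate_buffers():
    return [None] * BUFFER_POOL_SIZE

def describe_framework():
    return f"This framework ships {TOOL_COUNT} tools"
```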
The Documentation Parallel
This same problem exists in documentation, though the stakes differ:
| Code Problem | Documentation Problem |
|---|---|
| Magic number 43 | "43 tools available" |
| No semantic meaning | Count has no context |
| Changes require find/replace | Changes missed in multiple places |
| Causes runtime failures | Erodes reader trust incrementally |
The analogy isn't perfect—documentation staleness doesn't cause system failures like code bugs do. But the maintenance burden and find/replace fragility are similar. When you write "This framework includes 43 tools," you've created a value that will drift from reality.
Addressing the Obvious Objection
"But readers want specifics!" This is true. Specific numbers feel more credible than vague language. "43 tools" sounds more impressive than "multiple tools."
Here's the trade-off: precise but quick to go stale, versus less specific but durably accurate.
Why Not Dynamic Generation?
There's a middle ground: scripts that pull live counts from the repository at build time. This is arguably the best solution when it's available. But many documentation workflows don't support it:
- GitHub README files render statically—no build step
- Docs within source code (CLAUDE.md, CONTRIBUTING.md) have no generation pipeline
- Small projects don't justify the tooling overhead of Jekyll/Hugo/MkDocs
- Multi-repo documentation requires cross-repo count aggregation
If you have MkDocs, Sphinx, or another documentation generator with build-time hooks, dynamic counts are superior. For static markdown files—which include most README files—qualitative language is the practical fallback.
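Where a build step does exist, the count can be computed instead of written. A minimal sketch, assuming a hypothetical layout with one tool per .py file under tools/ and a `<!-- TOOL_COUNT --> ... <!-- /TOOL_COUNT -->` marker pair in README.md:

```python
#!/usr/bin/env python3
"""Inject the live tool count into README.md at build time (illustrative layout)."""
import re
from pathlib import Path

# Count one tool per Python file in the (assumed) tools/ directory.
tool_count = len(list(Path("tools").glob("*.py")))

readme = Path("README.md")
text = readme.read_text(encoding="utf-8")

# Replace whatever sits between the marker comments with the fresh count.
updated = re.sub(
    r"<!-- TOOL_COUNT -->.*?<!-- /TOOL_COUNT -->",
    f"<!-- TOOL_COUNT -->{tool_count} tools<!-- /TOOL_COUNT -->",
    text,
    flags=re.DOTALL,
)
readme.write_text(updated, encoding="utf-8")
```

Run something like this in CI before publishing and the number can never drift; without that pipeline, the qualitative fallback described below applies.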
True, motivated readers can verify counts against the repository. But most won't—and shouldn't have to. Documentation should be trustworthy without requiring validation.
If your tools drop from forty to three, "multiple" technically remains true but becomes misleading. The approach works best for counts that grow or stay stable—common in actively maintained projects. Significant shrinkage (deprecation waves, architectural changes) usually warrants documentation updates anyway because the change itself is newsworthy, not just the count.
The Stale Data Cost
Contact data research provides useful context, even if the exact figures don't transfer to documentation. Research from Landbase shows that B2B contact data decays between 22.5% and 70.3% annually. Poor data quality costs U.S. businesses $3.1 trillion annually.
These statistics describe contact data specifically—documentation staleness doesn't cost trillions. But the underlying principle scales down: information that changes in reality but not in documentation creates friction. Stale counts don't break systems; they erode trust incrementally and create small maintenance burdens that accumulate.
Here's the insidious part about AI systems and stale data. As one analysis notes:
Stale data doesn't trigger errors. The AI just confidently gives wrong answers. You only discover it during an audit or when someone complains.
The same applies to documentation. Nobody gets an error when they read "43 tools" and the actual count is 47. They get slightly wrong information—and your credibility takes a small hit each time.
Without clear processes for documentation updates, context files quickly become stale again. The decay is constant and largely invisible.
The Solution: Qualitative Language
The solution borrows from the magic number fix: replace literals with qualitative language.
In code, you replace 43 with MAX_TOOLS. In documentation, you replace "43 tools" with "multiple tools."
| Blocked Pattern | Suggested Replacement |
|---|---|
| "43 active tools" | "Multiple tools" |
| "17 specialized skills" | "Various skills" |
| "8 agents handle" | "Specialized agents handle" |
| "Over 50 features" | "Extensive features" |
In this context, imprecision may be preferable to false precision. A document saying "43 tools" when there are now 47 actively misleads. "Multiple tools" describes what exists rather than counting it.
The DRY Principle for Prose
The docs-as-code movement emphasizes:
The "Don't Repeat Yourself" principle, well-known in software development, applies equally to documentation. Redundant information is difficult to maintain and can lead to inconsistencies.
While this principle doesn't specifically mention counts, the logic extends: a count is information duplicated between reality (the actual number of tools) and documentation (the written number). When these diverge—and they will—you have inconsistency.
Qualitative language doesn't eliminate duplication entirely—"multiple" still implicitly claims there's more than one. But it reduces the surface area for drift. "Multiple tools" remains accurate whether you have 5 or 50; "43 tools" becomes wrong the moment you have 44.
Enforcement Through Pre-commit Hooks
The solution only works with enforcement. Otherwise, someone will add "43 tools" in a moment of specificity.
Pre-commit hooks provide the mechanism. As the Git documentation explains:
The pre-commit hook is run first, before you even type in a commit message. It's used to inspect the snapshot that's about to be committed... Exiting non-zero from this hook aborts the commit.
The implementation is a regex pattern:
```
\d+ (tools|skills|agents|commands|features)
```
This catches patterns like "43 tools" or "17 skills" and blocks the commit with a message suggesting qualitative replacements.
The regex will produce false positives: legitimate phrases such as "we evaluated 5 tools from other ecosystems" or "page 43 tools chapter" would trigger incorrectly. In practice, these are rare enough that a quick bypass (rewording, or committing once with `git commit --no-verify`) is acceptable. If your documentation frequently uses such phrases legitimately, you'd need an allowlist or a more sophisticated parser. For most projects, the simple regex catches the common case.
Setting Up the Hook
Pre-commit configurations should be shared in your repository so all team members use the same hooks:
```yaml
# .pre-commit-config.yaml
repos:
  - repo: local
    hooks:
      - id: no-hardcoded-counts
        name: Block hardcoded counts in docs
        entry: python hooks/check-hardcoded-counts.py
        language: python
        types: [markdown]
```
The check script scans staged markdown files for the pattern and returns non-zero if violations are found.
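A minimal version of that script, the hooks/check-hardcoded-counts.py referenced above, might look like this; treat it as a sketch rather than a hardened implementation:

```python
#!/usr/bin/env python3
"""Pre-commit hook: block hardcoded counts of dynamic things in docs.

Pre-commit passes the staged markdown file paths as command-line arguments.
"""
import re
import sys

PATTERN = re.compile(r"\b\d+ (tools|skills|agents|commands|features)\b")

def main() -> int:
    violations = []
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                if PATTERN.search(line):
                    violations.append(f"{path}:{lineno}: {line.strip()}")
    if violations:
        print("Hardcoded counts found. Use qualitative language instead,")
        print('e.g. "multiple tools", "various skills":')
        print("\n".join(violations))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```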
Documentation Linting at Scale
For larger teams, Vale provides industrial-strength documentation linting. Datadog describes their implementation:
To automate the enforcement of our style guidelines, we adopted Vale, an open source command-line linting tool into our authoring environment and CI workflow.
Vale supports custom rules in YAML format. A rule to catch hardcoded counts:
```yaml
# NoHardcodedCounts.yml
extends: existence
message: "Avoid hardcoded counts. Use qualitative language."
level: error
tokens:
  - '\d+ tools'
  - '\d+ skills'
  - '\d+ agents'
  - '\d+ commands'
```
Editorial teams can set up Vale with rule checkers based on their style guide and run automated tests to find all errors and style-guide violations left in the text.
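To put the rule into effect, it lives in a styles directory that .vale.ini points at; a minimal configuration, assuming the rule file sits at a hypothetical styles/Docs/NoHardcodedCounts.yml, might look like:

```ini
# .vale.ini
StylesPath = styles
MinAlertLevel = error

[*.md]
BasedOnStyles = Docs
```

Running `vale docs/` locally or in CI then reports every markdown file that asserts a hardcoded count.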
The AI Content Challenge
When AI generates documentation, it could confidently write counts from training data. An LLM doesn't know your current tool count—it might produce "the framework includes 38 tools" based on whatever version it saw during training.
This requires an additional layer: a Content Guardian prompt that checks AI output before publication. The prompt flags hardcoded counts for removal before they enter the system.
The principle from the README maintenance literature applies:
Having to manually update your docs any time a deployment or a big release goes out means it will get out of sync.
AI-generated content is no exception. Automated validation catches the staleness at the source.
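The exact prompt wording will vary, but the validation step itself can be ordinary code that reuses the same pattern as the pre-commit hook, applied to model output before it reaches a file. A minimal sketch, where the draft text is assumed to come from whatever generation step you already have:

```python
import re

# Same pattern the pre-commit hook uses, applied to AI output pre-publication.
COUNT_PATTERN = re.compile(r"\b\d+ (tools|skills|agents|commands|features)\b")

def guard_counts(draft: str) -> str:
    """Reject AI-generated documentation that asserts hardcoded counts."""
    matches = sorted({m.group(0) for m in COUNT_PATTERN.finditer(draft)})
    if matches:
        raise ValueError(
            "Draft contains hardcoded counts: " + ", ".join(matches)
            + ". Rewrite with qualitative language before publishing."
        )
    return draft
```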
Limitations and Trade-offs
I don't have controlled studies or A/B tests showing that readers trust "multiple tools" more than stale "43 tools." The argument is logical rather than empirical: a statement that can't become false can't erode trust by being false.
The trade-offs are real:
- Loss of precision: "Multiple tools" doesn't convey scale the way "47 tools" does
- Semantic drift: "Extensive" features might feel accurate at 50 but misleading at 500
- False positives: The regex enforcement requires occasional bypasses for legitimate uses
- Doesn't eliminate updates: Major count changes (deprecation waves) still need documentation updates
Whether this trade-off makes sense depends on your context: how often counts change, how reliably you update documentation, and whether your readers care more about precision or accuracy.
Results in Practice
In my own repository, the pre-commit hook has prevented the common "update README" maintenance task for tool counts. I can't quantify the time saved or provide before/after metrics—I didn't measure the baseline.
The concrete benefit:
- No more "find all instances of X and increment"
- No more pull requests whose only change is bumping a number as the count grows
This isn't a complete solution to documentation staleness—there are many other sources of decay. But it removes one predictable source: counts of things that change.
When Specific Numbers Matter
Some numbers should remain specific:
- Metrics with sources: "Reduced latency by 47%" (with citation)
- Versions: "Requires Python 3.10+"
- Dates: "Published December 2025"
- Configuration values: "Set timeout to 30 seconds"
The rule applies to counts of dynamic things—features, tools, components that change as the system evolves. Static values and measured outcomes with citations are fine.
Implementation Checklist
For teams considering this approach:
- Add regex pattern to pre-commit hooks - Block the pattern at commit time
- Create Content Guardian prompt - Catch violations in AI-generated content
- Document qualitative replacements - Give writers alternatives to reach for
- Configure Vale or similar - Scale to larger documentation sets
- Train the team - Explain the reasoning, not just the rule
The controversial part isn't the implementation—it's accepting that less specific language can be more accurate over time.
The Intelligence Adjacent framework is free and open source. If this helped you, consider joining as a Lurker (free) for methodology guides, or becoming a Contributor ($5/mo) for implementation deep dives and to support continued development.
Sources
Documentation Decay and Staleness
- AI Agent Documentation Maintenance Strategy
- The Stale Data Problem in AI Systems
- Data Decay Rate Statistics
- Data Decay & Degradation: The True Impact
README and Documentation Maintenance
- Automate Your Documentation - ReadMe
- Keeping your README Fresh and Engaging - Microsoft GenAIScript
- Tools for README.md Creation and Maintenance - Talk Python
Magic Numbers and Code Smells
- Magic Number (Programming) - Wikipedia
- Code Smell 02 - Constants and Magic Numbers
- Replace Magic Number with Symbolic Constant - Refactoring Guru
Pre-commit Hooks and Validation
- Pre-commit: A Framework for Managing Hooks
- How to Set Up Pre-Commit Hooks - Stefanie Molin
- Git Hooks - Git Documentation
Docs-as-Code and CI/CD
- What is Docs as Code? - Kong
- How to Improve DocOps using CI/CD - Pronovix
- Best Practices for Maintaining Documentation Accuracy - LinkedIn