AI BOM: the 2026 compliance artifact your board has not heard of yet

Introduction

Over the next 18 months, three forces collide: the EU AI Act's model transparency duties, the EU Cyber Resilience Act's vulnerability reporting start date (11 September 2026), and the operationalisation of ISO 42001 and the NIST AI RMF across enterprises. The connective tissue is an AI Bill of Materials (AI BOM): a structured inventory of models, datasets, training lineage, dependencies, and risks. If you are building, buying, or deploying AI, you will soon be asked to show one. Here is what to know and how to stand up an AI BOM in 90 days.

What is an AI BOM and how is it different from an SBOM?

An AI BOM is a machine-readable inventory that lists every dataset, model, and software component used to build and operate an AI system, including versions, sources, training and evaluation artefacts, and the relationships between them. Think of it as an SBOM plus AI-specific context: it captures model lineage and data provenance, not just code libraries.

The concept has matured quickly. OWASP runs an AI BOM project with guidance and a generator initiative under its GenAI Security workstream. Security leaders and vendors are publishing implementation guides, and researchers have proposed extending the SPDX SBOM standard to cover AI supply chains.

Why it matters: Without a transparent record of training data, fine-tuning steps, embedded third-party models, and evaluation results, you cannot credibly attest to safety, IP provenance, or bias mitigation, nor respond quickly when a dependency becomes vulnerable.

Why the AI BOM moment is 2026

1. EU AI Act model documentation now has teeth

The EU AI Office’s GPAI Code of Practice (published 10 July 2025) provides a Model Documentation Form for transparency obligations under Article 53 and sets out safety and security expectations for systemic risk models under Article 55. Obligations for new GPAI models began in August 2025, with enforcement powers beginning in August 2026 and legacy model compliance required by August 2027. An AI BOM operationalises this documentation at scale.

High-risk AI requirements become applicable on 2 August 2026.

2. Cyber Resilience Act reporting begins 11 September 2026

From 11 September 2026, manufacturers must report actively exploited vulnerabilities and severe security incidents for all products with digital elements. The ENISA single reporting platform becomes operational for this purpose. AI BOMs help tie model and dataset provenance to SBOM-style vulnerability processes, which becomes essential once these reporting timelines begin.

3. Framework alignment becomes mandatory practice

ISO 42001 provides a management system standard for AI. The NIST AI RMF and its Generative AI Profile (July 2024) provide risk-based controls for GenAI. An AI BOM becomes the evidence spine linking these frameworks to concrete assets, versions, and documentation.

What goes into an AI BOM

A robust AI BOM includes:

  • Model lineage: Base models, checkpoints, fine-tuning runs, RLHF steps, adapters, quantisation details, timestamps, and owners.
  • Datasets and provenance: Sources, licenses, collection methods, filtering steps, augmentation, and PII handling.
  • Training and evaluation: Objectives, metrics, test suites for toxicity, bias, and jailbreak resistance; red-team findings; and residual risks.
  • Runtime dependencies: Serving stack components, vector databases, plug-ins, APIs, and agent tools.
  • Security posture: Known vulnerabilities, patch status, model-specific risks such as prompt injection and data leakage, mitigations, and monitoring.
  • Usage and constraints: Intended purpose, prohibited uses, human-in-the-loop processes, and EU AI Act model documentation fields.

There is active experimentation with extending the SPDX standard for AI artefacts.
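
To make this concrete, here is a minimal sketch of what a single AI BOM record could look like. This is a hypothetical illustration, not the OWASP AI BOM or SPDX schema; every field name is an assumption chosen to mirror the list above.

```python
# Hypothetical AI BOM record for one fine-tuned model.
# Field names are illustrative, not taken from any published schema.
ai_bom_entry = {
    "component": "support-chat-model",
    "type": "machine-learning-model",
    "version": "2.3.1",
    "lineage": {
        "base_model": "vendor-llm-7b",            # upstream model and version
        "fine_tuning_runs": ["ft-2026-01-14"],    # run IDs, timestamps, owners
        "quantisation": "int8",
    },
    "datasets": [
        {
            "name": "support-tickets-2025",
            "licence": "internal",
            "pii_handling": "pseudonymised",      # provenance and privacy notes
        }
    ],
    "evaluations": {
        "toxicity_suite": "pass",
        "jailbreak_resistance": "2 residual findings, mitigations logged",
    },
    "runtime_dependencies": ["serving-stack==1.9", "vector-db==0.12"],
    "constraints": {
        "intended_purpose": "customer support triage",
        "prohibited_uses": ["legal advice"],
        "human_in_the_loop": True,
    },
}
```

A real record would also carry owners, review dates, and links to full evaluation reports; the point is that every category in the list above becomes a queryable field rather than prose in a PDF.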

How an AI BOM supports EU AI Act, CRA, and governance frameworks

  • EU AI Act: The GPAI Model Documentation Form maps naturally to the AI BOM structure. High-risk AI obligations require traceability and evidence that an AI BOM provides.
  • Cyber Resilience Act: When a dependency is exploited, an AI BOM enables fast determination of exposure and supports the mandatory 24-hour and 72-hour notifications (see the sketch after this list).
  • ISO 42001 and NIST AI RMF: Both frameworks emphasise traceability, documentation, and continuous improvement. The NIST Generative AI Profile highlights risks such as confabulation and information integrity that must be tied to evaluation evidence recorded in an AI BOM.
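
As an illustration of the CRA point, the sketch below queries a collection of AI BOM records (shaped like the earlier example) for systems exposed to an exploited dependency, so the 24-hour early warning starts from facts rather than a manual scramble. The function and record fields are hypothetical.

```python
from typing import Iterable

def affected_systems(ai_boms: Iterable[dict], vulnerable_pkg: str) -> list[str]:
    """Return components whose AI BOM lists the vulnerable dependency.

    `ai_boms` holds records shaped like the earlier illustrative entry;
    `vulnerable_pkg` is the advisory's pinned package, e.g. "vector-db==0.12".
    """
    return [
        bom["component"]
        for bom in ai_boms
        if vulnerable_pkg in bom.get("runtime_dependencies", [])
    ]
```

The same query pattern answers the AI Act traceability question in reverse: given a dataset or base model, list every system that depends on it.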

The 90-day AI BOM plan (GRC Hub Assess, Align, Assure)

Days 0 to 15: Assess

  • Scope and inventory your AI systems (a starter script is sketched after this list).
  • Select an AI BOM schema that works with your SBOM processes, ideally an SPDX-compatible approach.
  • Run a gap analysis against the GPAI Code of Practice, ISO 42001, and the NIST AI RMF.
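
A first-pass inventory does not need tooling: scanning repositories for common model artefact files and emitting stub records to enrich later is enough to start. A minimal sketch, assuming only the Python standard library and illustrative file extensions:

```python
from pathlib import Path

# Extensions that commonly indicate model artefacts -- extend for your stack.
MODEL_EXTENSIONS = {".onnx", ".safetensors", ".pt", ".gguf"}

def inventory_stubs(root: str) -> list[dict]:
    """Walk a directory tree and emit stub AI BOM records for model files."""
    return [
        {"component": path.stem, "artefact": str(path), "owner": "TBD"}
        for path in Path(root).rglob("*")
        if path.suffix in MODEL_EXTENSIONS
    ]
```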

Days 16 to 45: Align

  • Create a pilot AI BOM for priority systems. Use OWASP AI BOM resources to verify structure.
  • Integrate AI BOM population into development workflows so metadata is captured automatically (see the sketch after this list).
  • Prepare CRA reporting workflows using AI BOM data.
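
One lightweight way to capture metadata automatically is a post-training hook that records the git commit, artefact hash, and dataset versions into the AI BOM the moment they are known. The hook below is a hypothetical sketch using only the standard library; adapt the field names to whatever schema you selected.

```python
import hashlib
import json
import subprocess
from datetime import datetime, timezone

def record_training_run(bom_path: str, artefact_path: str, datasets: list[str]) -> None:
    """Append provenance for one training run to an AI BOM JSON file."""
    with open(artefact_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()  # tamper-evident artefact hash
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    with open(bom_path) as f:
        bom = json.load(f)
    bom.setdefault("lineage", {}).setdefault("training_runs", []).append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "git_commit": commit,
        "artefact_sha256": digest,
        "datasets": datasets,  # dataset names/versions used in this run
    })
    with open(bom_path, "w") as f:
        json.dump(bom, f, indent=2)
```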

Days 46 to 90: Assure

  • Establish policy requiring AI BOM creation for any system moving to pilot or production.
  • Create an audit pack combining the AI BOM with EU AI Act documentation and ISO 42001 evidence.
  • Set KPIs such as the percentage of AI systems with an up-to-date AI BOM and the time to complete an impact assessment when a vulnerability is announced (a sketch of the first KPI follows this list).
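
The freshness KPI falls straight out of the records if each AI BOM carries a review timestamp. A minimal sketch, assuming a hypothetical `last_reviewed` field holding a timezone-aware ISO-8601 string:

```python
from datetime import datetime, timezone

def pct_up_to_date(ai_boms: list[dict], max_age_days: int = 90) -> float:
    """Percentage of AI systems whose BOM was reviewed within the window."""
    now = datetime.now(timezone.utc)
    fresh = sum(
        1 for bom in ai_boms
        # `last_reviewed` must be a timezone-aware ISO-8601 timestamp.
        if (now - datetime.fromisoformat(bom["last_reviewed"])).days <= max_age_days
    )
    return 100.0 * fresh / len(ai_boms) if ai_boms else 0.0
```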

Common pitfalls

  • Treating AI BOMs as static documents rather than updating them as models and datasets change.
  • Failing to record dataset provenance, which is critical for IP, privacy, and fairness claims under the AI Act.
  • Overlooking GenAI-specific risks listed in the NIST Generative AI Profile.
  • Not connecting AI BOMs with CRA incident reporting workflows.

Related obligations: DORA and NIS2

Financial-sector entities, required to comply with DORA since 17 January 2025, can strengthen their incident reporting and third-party oversight with AI BOM evidence. NIS2 enforcement, which phases in from October 2024 into 2026 depending on member-state transposition, similarly benefits from explicit AI dependency mapping.

What good looks like by Q4 2026

  • Every material AI system has an AI BOM aligned to ISO 42001.
  • GPAI documentation is complete ahead of enforcement powers beginning in August 2026.
  • CRA workflows are rehearsed and ready for 11 September 2026.
  • Evaluation suites cover NIST Generative AI Profile risks and are documented inside AI BOMs.

FAQ

Is AI BOM a formal standard?
Not yet. OWASP is leading practical work and researchers have proposed SPDX extensions.

How does it relate to model cards?
Model cards are human-readable summaries. AI BOMs are machine-readable inventories that support security, audit, and compliance.

Do I need an AI BOM if I only deploy vendor models?
Yes. You still control configuration, guardrails, datasets, and risk mitigations. Request vendor documentation aligned to the GPAI Code and an AI BOM excerpt.
