Landscape

Where AIIR fits among AI provenance and supply-chain attestation formats.

The gap

SLSA secures your build. in-toto attests your pipeline. Sigstore signs your releases. SPDX and CycloneDX inventory your dependencies. None of them answer: was AI involved in writing this code, and can you prove it?

AIIR fills that gap. It generates content-addressed receipts that make declared AI involvement tamper-evident and independently verifiable — then plugs into the tools you already use. This page compares capabilities across these formats so you can see how they divide the work.
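Content addressing means a receipt's identifier is a digest of its own canonical bytes, so any modification to the receipt changes its ID. A minimal sketch of that idea in Python — the field names are hypothetical, and canonical JSON stands in for AIIR's actual canonical CBOR encoding, which the spec defines:

```python
import hashlib
import json

def receipt_id(receipt: dict) -> str:
    # Canonical serialization: sorted keys, no whitespace.
    # (Stand-in for AIIR's canonical CBOR encoding.)
    canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

def verify(receipt: dict, claimed_id: str) -> bool:
    # Tamper evidence: recompute the digest and compare.
    return receipt_id(receipt) == claimed_id

# Hypothetical receipt fields, for illustration only.
receipt = {"commit": "abc123", "ai_tool": "example-assistant", "lines_ai": 42}
rid = receipt_id(receipt)
assert verify(receipt, rid)

# Any edit to the receipt changes the identifier.
tampered = dict(receipt, lines_ai=0)
assert not verify(tampered, rid)
```

Because the ID is derived from the content rather than assigned, anyone holding the receipt can check it without contacting an issuer.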

Feature comparison

Capabilities compared across AIIR, in-toto, SLSA, SPDX, CycloneDX, and Sigstore:

- AI authorship detection
- Content-addressed receipts
- Zero runtime dependencies
- Binary wire format (CBOR)
- Normative specification
- Published test vectors ✓ (97)
- Cross-language conformance ✓ (Python + Rust + JS)
- Public threat model ✓ (STRIDE/DREAD, 153 controls)
- Continuous fuzzing ✓ (Hypothesis + ClusterFuzzLite)
- Mutation testing
- Browser-based verifier
- Sigstore integration
- SBOM / dependency graph
- Build provenance attestation
- EU AI Act evidence

✓ = capability present in the format or reference implementation. — = not a design goal or not present. Comparisons based on public documentation as of March 2026.

Complementary, not competing

These formats address different layers of the supply-chain trust stack. They are designed to work together:

- AI authorship (AIIR): Was AI involved in this commit? How much? What kind?
- Build provenance (in-toto, SLSA): Where was this artifact built? By whom? From what source?
- Dependency inventory (SPDX, CycloneDX): What components does this software contain?
- Signing & transparency (Sigstore): Can we verify the signer's identity without managing keys?

AIIR receipts can be wrapped as in-toto predicates (predicate type: https://aiir.io/commit-receipt/v1), and AIIR signs with Sigstore natively. An enterprise deployment might use AIIR for AI provenance, SLSA for build provenance, SPDX for SBOMs, and Sigstore for signing — each format in its own lane.

Why this matters

Emerging regulations — including the EU AI Act's transparency obligations (phasing in across 2025–2027) and audit frameworks like SOC 2 — are increasing pressure to document AI involvement in production systems. No existing supply-chain format provides this evidence natively. AIIR was designed specifically for this gap: a content-addressed, tamper-evident receipt that records what AI did, when, and how much — at the commit level.

The format's simple, low-dependency design means it can be adopted without introducing new supply-chain risk. The published test vectors and multi-language reference implementations mean any engineering team can build a conforming verifier without relying on a single vendor.
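A verifier built from the spec can be smoke-tested against published vectors with a loop like the one below. The vector layout shown (input bytes plus expected digest) is an assumption for illustration; the real layout is defined by the AIIR conformance suite:

```python
import hashlib

def check_vectors(vectors: list[dict]) -> int:
    """Recompute each vector's digest and count mismatches.

    Assumes entries of the form {"input_hex": ..., "expected_sha256": ...}
    -- a hypothetical layout, not the actual AIIR vector format.
    """
    failures = 0
    for v in vectors:
        data = bytes.fromhex(v["input_hex"])
        if hashlib.sha256(data).hexdigest() != v["expected_sha256"]:
            failures += 1
    return failures

# Demo with one correct and one deliberately broken vector.
good = {"input_hex": "00ff",
        "expected_sha256": hashlib.sha256(b"\x00\xff").hexdigest()}
bad = dict(good, expected_sha256="0" * 64)
assert check_vectors([good]) == 0
assert check_vectors([good, bad]) == 1
```

Running every published vector through an independent implementation is the cheapest way to confirm vendor-free interoperability before adopting the format.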

Evaluate for yourself

The spec, test vectors, and threat model are all public. Audit them.

Read the spec → Conformance guide →