
What "cryptographic proof of AI inference" actually means

The phrase sounds stronger than most systems can honestly support. Here is what it actually requires.

Borys Tsyrulnikov · April 2026

Many AI vendors can tell you that a request was logged, that a model endpoint responded, or that a security control was enabled. Those are useful signals. They are not the same as cryptographic proof.

Cryptographic proof of AI inference means that a verifier can examine signed evidence after the fact and check concrete claims about a specific inference run. The important part is not marketing language like "trusted," "secure," or "private." The important part is whether an independent party can verify what happened without depending entirely on the vendor's internal logs.

In practice, there are three different questions hiding inside the phrase.

Environment trust

Did the inference run inside a genuine confidential-computing environment, or just on an ordinary host that claims it was secure? This is where hardware attestation matters: the hardware produces a signed report of platform measurements that an independent verifier can check against expected values.

Workload identity

Even if the hardware is real, was the intended workload actually running? Was it the expected container image, the expected project or tenant, the expected issuer, the expected nonce or freshness window? These are workload identity questions, not just hardware questions.
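The workload-identity checks above can be sketched as a simple claim verifier. The claim names, expected values, and freshness window below are illustrative assumptions; real attestation services each define their own token schema.

```python
import time

def check_workload_claims(claims: dict, expected: dict, nonce: str,
                          max_age_s: int = 300) -> list[str]:
    """Return a list of failed checks; an empty list means all passed.

    `claims` is a decoded attestation-token payload. The field names
    here (image_digest, issuer, tenant, nonce, iat) are hypothetical.
    """
    failures = []
    if claims.get("image_digest") != expected["image_digest"]:
        failures.append("unexpected container image digest")
    if claims.get("issuer") != expected["issuer"]:
        failures.append("unexpected token issuer")
    if claims.get("tenant") != expected["tenant"]:
        failures.append("unexpected project/tenant")
    if claims.get("nonce") != nonce:
        failures.append("nonce mismatch (possible replay)")
    if time.time() - claims.get("iat", 0) > max_age_s:
        failures.append("token outside freshness window")
    return failures
```

The point of returning all failures, rather than stopping at the first, is that each failed check corresponds to a different narrow claim the verifier could not establish.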

Model identity

Which model actually ran? This is where many systems become vague. Hardware attestation does not automatically prove which model weights were loaded into memory. If you want cryptographic proof of model identity, you need a separate binding story. That usually means hashing model artifacts in the trusted environment and carrying those hashes into a signed receipt.
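The binding step for model identity reduces to something concrete: hash every model artifact inside the trusted environment and carry the digests into the receipt. A minimal sketch, assuming the model is a directory of files (layout and naming are illustrative):

```python
import hashlib
from pathlib import Path

def hash_model_artifacts(model_dir: str) -> dict[str, str]:
    """SHA-256 every file under model_dir, keyed by relative path.

    Run inside the trusted environment, these digests can be embedded
    in a signed receipt, binding the receipt to specific model weights.
    """
    digests = {}
    for path in sorted(Path(model_dir).rglob("*")):
        if path.is_file():
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Stream in 1 MiB chunks so large weight files
                # are not loaded into memory at once.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            digests[str(path.relative_to(model_dir))] = h.hexdigest()
    return digests
```

Any change to the weights, tokenizer, or config files changes the digests, so a verifier comparing the receipt against published artifact hashes can detect a swapped model.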

Concentric rectangles representing three trust layers
Three layers of trust: environment attestation, workload identity, and model integrity are separate claims.

This is why "cryptographic proof of AI inference" should not be treated as one single claim. It is a bundle of narrower claims:

- The inference ran inside a genuine confidential-computing environment (environment attestation).
- The intended workload, and only that workload, was running (workload identity).
- The expected model artifacts were loaded (model integrity).

That still does not prove everything.

A receipt does not prove that a system is perfect. It does not eliminate all side-channel risk. It does not prove that data was irrecoverably deleted. It does not prove that the model behaved correctly in a semantic sense. It proves specific technical facts, under specific trust assumptions, with specific cryptographic evidence.

That distinction matters for regulated teams. Compliance and risk functions usually do not need magic. They need a defensible evidence story. If an organization is going to use cloud AI for sensitive workflows, the real question is not "is it absolutely trustless?" The real question is whether the system can produce evidence that survives audit, vendor review, and security review.

That is the practical value of AI receipts. They turn "we say this happened" into "here is what we can prove happened."
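The mechanics of that shift can be sketched in a few lines. The receipt format below is illustrative, and HMAC stands in for the asymmetric signature a real attestation service would produce; the point is only that the claims and the signature travel together, so a verifier can recheck them independently.

```python
import hashlib
import hmac
import json

def issue_receipt(claims: dict, key: bytes) -> dict:
    """Sign a set of inference claims (e.g. model digests, workload
    identity, nonce). HMAC is a stand-in for a real signature scheme;
    the field layout is hypothetical, not any vendor's format."""
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify_receipt(receipt: dict, key: bytes) -> bool:
    """Recompute the signature over the claims and compare it
    in constant time against the one carried in the receipt."""
    body = json.dumps(receipt["claims"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])
```

Serializing the claims canonically (here, `sort_keys=True`) matters: issuer and verifier must hash byte-identical input, or valid receipts will fail verification.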

The bar should be simple: if a claim is important, it should be verifiable. If it is not verifiable, it should be described as an assumption, not as proof.