
Why GPU attestation does not prove model identity

GPU attestation is valuable. It is also easy to overstate. Here is the trust-boundary distinction between hardware state and model identity.

Borys Tsyrulnikov · April 2026

A common mistake in confidential AI discussions is to assume that if a GPU or TEE is attested, then the model identity is automatically proven. That is not how the trust boundary works.

Attestation proves properties about the execution environment. Depending on the platform, it can prove things like firmware state, security mode, measurement values, or the identity of the workload that booted into a confidential execution flow. That is environment evidence.
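
To make the distinction concrete, here is a minimal sketch of what verifying environment evidence looks like. The field names and values are illustrative, not any vendor's attestation API; assume the evidence has already passed signature checks upstream.

```python
import hashlib

# Expected platform properties, per policy. Field names are hypothetical.
EXPECTED = {
    "secure_boot": True,
    "debug_mode": False,
    "firmware_measurement": hashlib.sha256(b"firmware-v1.2").hexdigest(),
}

def check_environment(evidence: dict) -> list[str]:
    """Return the environment properties that do not match policy."""
    return [k for k, v in EXPECTED.items() if evidence.get(k) != v]

evidence = {
    "secure_boot": True,
    "debug_mode": False,
    "firmware_measurement": hashlib.sha256(b"firmware-v1.2").hexdigest(),
}
print(check_environment(evidence))  # [] -> environment claims hold
```

Note what this check never touches: the model weights, tokenizer, or config. A clean result here is environment evidence only.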

Model identity is different

Model identity asks: what weights, tokenizer, config, or adapters were actually used for the inference run? That question lives at the application layer unless the hardware explicitly measures those artifacts and exposes that measurement in a verifiable way.

In practice, current confidential computing stacks often stop short of that. They attest the platform and sometimes the workload image, but they do not directly attest the model artifacts loaded at inference time.

That matters because "the environment is trusted" and "the model is identified" are not interchangeable claims.

What honest attestation looks like

If a system only has hardware attestation, the strongest honest statement is usually something like this: the inference ran in a genuine confidential environment with expected platform properties. That is useful, but it is not the same as proving which model weights were present.

To prove model identity, a system needs additional binding. One approach is to hash the model weights inside the trusted execution environment after decryption and before load. A stronger variant is to bind not just the weight file, but additional model artifacts through a signed manifest. That helps cover tokenizer and configuration drift as well.
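
The binding step above can be sketched in a few lines. This is a simplified illustration, not a production design: the artifact bytes are placeholders, and the HMAC stands in for a real signature made with a key held inside the TEE and tied to the attestation chain.

```python
import hashlib
import hmac
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Placeholder artifact bytes. In a real system these are read inside the
# TEE after decryption, immediately before the model is loaded.
artifacts = {
    "weights": b"...weight tensor bytes...",
    "tokenizer": b"...tokenizer.json bytes...",
    "config": b"...config.json bytes...",
}

# The manifest binds every artifact, not just the weight file, so
# tokenizer or configuration drift changes the manifest as well.
manifest = {name: sha256_hex(blob) for name, blob in artifacts.items()}
manifest_bytes = json.dumps(manifest, sort_keys=True).encode()

# Stand-in signature: HMAC with an illustrative key. A real deployment
# would sign with an asymmetric key bound to the attestation evidence.
TEE_KEY = b"illustrative-key-held-in-tee"
signature = hmac.new(TEE_KEY, manifest_bytes, hashlib.sha256).hexdigest()
```

A verifier that holds the expected manifest can then check both halves independently: the attestation proves the environment, and the signed manifest proves which artifacts were loaded into it.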

Hardware attestation covers the environment. Model identity requires a separate binding step.
Attestation is still critical. It just does not solve every layer by itself.

What a careful reviewer should ask

Suppose a vendor says, "our GPU is attested, so the model in this inference is proven." A careful reviewer should ask: what exactly does the attestation measure? Does that measurement cover the model weights, tokenizer, and configuration, or only the platform and workload image? And how is the measurement bound to the specific inference response a client receives?

If those answers are vague, the system may still have strong environment assurance, but weak model assurance.

The right framing

The right way to explain this is simple: attestation proves where the inference ran; model identity proves what ran. The first does not imply the second.

That is not a weakness in attestation. It is a trust-boundary fact.

For buyers, this means the useful question is not "do you have attestation?" The useful question is "what exactly does your attestation prove, and how do you bind model identity on top of it?"

That is where systems diverge. Some stop at infrastructure trust. Better systems turn infrastructure trust into verifiable inference evidence.