A common mistake in confidential AI discussions is to assume that if a GPU or TEE is attested, then the model identity is automatically proven. That is not how the trust boundary works.
Attestation proves properties about the execution environment. Depending on the platform, it can prove things like firmware state, security mode, measurement values, or the identity of the workload that booted into a confidential execution flow. That is environment evidence.
Model identity is different
Model identity asks: what weights, tokenizer, config, or adapters were actually used for the inference run? That question lives at the application layer unless the hardware explicitly measures those artifacts and exposes that measurement in a verifiable way.
In practice, current confidential computing stacks often stop short of that. They attest the platform and sometimes the workload image, but they do not directly attest the model artifacts loaded at inference time.
That matters because "the environment is trusted" and "the model is identified" are not interchangeable claims.
What honest attestation looks like
If a system only has hardware attestation, the strongest honest claim is something like this: the inference ran in a genuine confidential environment with the expected platform properties. That is useful, but it does not prove which model weights were present.
To prove model identity, a system needs additional binding. One approach is to hash the model weights inside the trusted execution environment after decryption and before load. A stronger variant is to bind not just the weight file, but additional model artifacts through a signed manifest. That helps cover tokenizer and configuration drift as well.
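The binding step can be sketched in a few lines. This is a minimal illustration, not any particular vendor's implementation: the artifact names and in-memory byte strings are hypothetical stand-ins for real files, and a production system would sign the manifest digest inside the TEE or fold it into the attestation evidence.

```python
import hashlib
import json

def digest(data: bytes) -> str:
    # SHA-256 over the artifact bytes, computed inside the TEE after
    # decryption and before the model is loaded into memory.
    return hashlib.sha256(data).hexdigest()

# Hypothetical in-memory artifacts standing in for real files on disk.
artifacts = {
    "weights": b"...model weight bytes...",
    "tokenizer": b"...tokenizer bytes...",
    "config": b'{"hidden_size": 4096}',
}

# The manifest binds every artifact digest into one object. Covering the
# manifest (rather than only the weight file) is what catches tokenizer
# and configuration drift.
manifest = {name: digest(data) for name, data in artifacts.items()}
manifest_digest = digest(json.dumps(manifest, sort_keys=True).encode())
```

Because the manifest is serialized with sorted keys, the same artifacts always produce the same `manifest_digest`, which is what makes it usable as a stable identity claim.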
What a careful reviewer should ask
Suppose a vendor says, "our GPU is attested, so the model in this inference is proven." A careful reviewer should ask:
- Which artifacts are actually measured?
- Where are those measurements exposed?
- Are the model weights included?
- Are tokenizer and config included?
- Is the evidence in the signed receipt, or only in internal logs?
If those answers are vague, the system may still have strong environment assurance, but weak model assurance.
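The checklist above can be made mechanical. The sketch below assumes a hypothetical receipt format with `measured_artifacts` and `signature` fields; real receipt schemas will differ, but the gap analysis is the point.

```python
def check_receipt(receipt: dict, expected_digests: dict) -> list:
    """Compare what a receipt actually proves against what a careful
    reviewer requires. Field names here are illustrative."""
    gaps = []
    measured = receipt.get("measured_artifacts", {})
    for name, want in expected_digests.items():
        got = measured.get(name)
        if got is None:
            gaps.append(f"{name}: not measured at all")
        elif got != want:
            gaps.append(f"{name}: digest mismatch")
    if not receipt.get("signature"):
        gaps.append("evidence not in a signed receipt")
    return gaps

# A receipt that measured the weights but not tokenizer or config,
# and carries no signature (evidence only in internal logs):
receipt = {"measured_artifacts": {"weights": "abc123"}}
expected = {"weights": "abc123", "tokenizer": "def456", "config": "789aaa"}
gaps = check_receipt(receipt, expected)
# gaps lists the missing tokenizer, missing config, and missing signature
```

A system can pass the weights check and still fail this review, which is exactly the "strong environment assurance, weak model assurance" outcome described above.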
The right framing
The right way to explain this is simple:
- hardware attestation proves environment state
- workload identity proves what software stack was launched
- model identity must be bound separately
That is not a weakness in attestation. It is a trust-boundary fact.
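The three layers can be kept honest by simply modeling them as separate claims. This is a schematic data structure, not a real attestation format; the field names are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InferenceEvidence:
    # Layer 1: hardware attestation -- environment state.
    platform_measurement: str
    # Layer 2: workload identity -- what software stack was launched.
    workload_image_digest: str
    # Layer 3: model identity -- bound separately, and absent in
    # systems that stop at infrastructure trust.
    model_manifest_digest: Optional[str] = None

    def model_identity_proven(self) -> bool:
        return self.model_manifest_digest is not None
```

Representing the layers as distinct fields makes the claim "attested, therefore model proven" impossible to state by accident: evidence with only the first two fields populated answers the environment and workload questions and nothing more.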
For buyers, this means the useful question is not "do you have attestation?" but "what exactly does your attestation prove, and how do you bind model identity on top of it?"
That is where systems diverge. Some stop at infrastructure trust. Better systems turn infrastructure trust into verifiable inference evidence.