TL;DR: “Keys never leave the silicon” is a trust claim, not a cryptographic proof. A provable hardware wallet uses TEE remote attestation to make it verifiable. No consumer product does this today, but the primitives exist.
In Web3, a hardware wallet is a dedicated device that stores private keys in isolation from the host machine. Manufacturers describe this isolation with the claim that keys “never leave the device.” This post examines whether that claim can be made verifiable through cryptographic proof, what the required construction looks like, and how close current systems come to realizing it.
Problem Statement
A private key in Web3 is the exclusive proof of ownership. Whoever holds the key controls the associated assets permanently and irrevocably. Hardware wallets such as Ledger and Trezor generate and store private keys in dedicated hardware, isolated from the connected computer. The security argument rests on the assumption that the private key cannot be extracted from the device.
This assumption, however, is a trust claim, not a verifiable guarantee. No third party can currently verify, through a cryptographic proof, that a given public key corresponds to a private key that was generated on hardware, exists only in hardware, and is unexportable. Users must trust the hardware manufacturer’s firmware, the supply chain, and the update mechanism.
The core question is: how can one prove to a third party that a specific private key was generated or written inside hardware and cannot be extracted without physically destroying the device?
Required Properties
A provable hardware wallet must satisfy three properties simultaneously.
Property 1: Hardware-bound key generation. The private key must be generated inside the secure hardware. A key generated outside and then imported into hardware, for example via a seed phrase, carries no guarantee about how many copies were made before or after import. Generation must originate inside the trusted boundary.
Property 2: Verifiable non-exportability. A third party must be able to verify, through a cryptographic proof, that the private key cannot exit the hardware. This proof must be inspectable by anyone who holds the corresponding public key, without requiring access to the device itself.
Property 3: Auditable execution. The code that generates and manages the key must be inspectable. Without code auditability, the proof of hardware execution says nothing about whether the code itself exports the key through a covert channel. An attestation over secret or proprietary code is not sufficient.
These three properties together constitute provability. Current hardware wallets satisfy the first property in practice but not the second or the third.
Blueprint Solution with TEEs
The solution relies on remote attestation, a mechanism provided by modern Trusted Execution Environments (TEEs) such as Intel SGX, AMD SEV-SNP, and AWS Nitro Enclaves. Remote attestation allows a piece of hardware to sign a statement of the form: “code with measurement hash H is currently executing inside a genuine, unmodified secure enclave on this hardware.” The signature is rooted in a key provisioned by the manufacturer at fabrication time and is verifiable against a public certificate chain.
A provable hardware wallet uses this mechanism in three steps.
First, an open-source program runs inside the TEE and generates a key pair. The private key never exits the enclave boundary. This boundary is enforced at the CPU level: memory belonging to an enclave is encrypted and inaccessible to the host operating system, hypervisor, or any code running outside the enclave.
Second, at key generation time, the enclave produces a remote attestation report. The report contains the measurement hash of the executing code, the freshly generated public key, and a signature over both produced using the hardware’s attestation key. This report is the proof that this specific code generated this specific public key inside a genuine enclave.
Third, any party that receives the public key and the attestation report can verify the proof using the manufacturer’s root certificate. If the enclave code is open source and its measurement hash is published, the verifier can audit that the code does not export the private key. The verification requires no trust in the device manufacturer beyond trust in the hardware’s attestation key.
This construction turns the claim “this key is only on hardware” from an assertion into a verifiable statement.
Discussion
Trust in the silicon manufacturer. Remote attestation is rooted in a hardware key provisioned by the CPU manufacturer (Intel, AMD, Amazon) at fabrication. The security guarantee is contingent on trusting this manufacturer. A compromised root key, a backdoored attestation service, or a fabrication-time implant would break the guarantee. This is a fundamental limit shared with all HSM-based security.
Side-channel vulnerabilities. SGX enclaves have been broken by multiple microarchitectural attacks (Foreshadow in 2018, Plundervolt in 2019, SGAxe in 2020) that extract secrets without detection by the enclave code. AMD SEV-SNP and Intel TDX address some of these vectors through stronger isolation, but their research record is shorter. A provable hardware wallet must be paired with hardware that has a credible and continuously maintained side-channel resistance profile.
Open-source code is necessary. The attestation proves that code with hash H ran inside the enclave. The verifier can confirm behavior only if H is published and the code is auditable. Proprietary enclave code provides authenticity (this code ran here) but not correctness (this code does not export the key). Open source is a prerequisite for the security argument to hold.
Comparison with HSMs. Hardware Security Modules certified under FIPS 140-2/3 Level 3 or 4 provide non-exportability guarantees enforced through physical tamper resistance evaluated by an accredited laboratory. They do not in general produce a remote attestation statement that a given public key was generated inside a specific codebase and is non-exportable. Certified HSMs rely on the evaluation process for trust; provable hardware wallets rely on cryptographic proof exportable to arbitrary verifiers.
Does It Exist Today?
Consumer hardware wallets do not implement remote attestation. Ledger devices use secure elements (ST33, SE050) and Trezor uses a microcontroller with firmware verification. These mechanisms raise the bar for physical key extraction but do not produce attestation statements a remote party can verify independently.
Ledger. Ledger devices store private keys in a certified secure element (ST33 or SE050), satisfying Property 1. The firmware is partially proprietary: the BOLOS operating system and secure element code are not fully auditable, so Property 3 does not hold. Ledger does not implement remote attestation, so Property 2 does not hold either. The Ledger Recover feature (firmware 2.2.1, May 2023), which splits and transmits the seed off the device, demonstrates that non-exportability depends on Ledger’s firmware policy rather than on a verifiable cryptographic guarantee.
Trezor. Trezor devices (Model One and Model T) use a general-purpose microcontroller (STM32) and publish their full firmware as open source, satisfying Property 3. They do not use a hardware-isolated secure element, so the private key resides in standard flash memory. Physical access is sufficient to extract keys through fault injection or direct memory access, as demonstrated by Kraken Security Labs in 2020. Property 1 holds in practice, but Property 2 does not: there is no remote attestation and no silicon-level isolation. Trezor trades hardware isolation for software transparency, the inverse of Ledger’s tradeoff. Neither device satisfies all three properties.
TEE-based key management services are the closest existing realization of the concept. Turnkey uses AWS Nitro Enclaves with remote attestation: users can verify through a published attestation chain that their keys are managed by a specific, versioned codebase running inside a genuine Nitro Enclave. Lit Protocol uses TEE-based nodes and publishes attestation data allowing independent verification. These systems target programmable, server-side key management rather than the user-held model of a hardware wallet, but the underlying cryptographic construction implements the proposed scheme.
No consumer-grade, self-custodial product currently combines all three required properties: key generation inside a TEE, remote attestation exported to the user, and open-source auditable enclave code. The technical components exist; the integrated product does not.
Conclusion
A provable hardware wallet is a system that generates a private key inside a Trusted Execution Environment and produces a remote attestation report allowing any third party to verify that the key was generated by auditable code inside genuine hardware. The claim “this key is only on hardware” becomes a verifiable cryptographic statement rather than a trust assumption about the manufacturer.
The required primitives are available today: remote attestation in Intel SGX, AMD SEV-SNP, and AWS Nitro Enclaves; open-source enclave frameworks; and the IETF RATS architecture (RFC 9334) for attestation verification. TEE-based key services such as Turnkey and Lit Protocol implement this model for server-side custody. The gap is in consumer self-custody products, where the hardware wallet market has not adopted remote attestation. Closing this gap would replace trust-in-manufacturer with trust-in-hardware-and-verifiable-code, a meaningful and achievable improvement for users who require strong, auditable guarantees about key custody.
Martin Monperrus
March 2026