Welcome to the 7th iteration of the Confidential Computing devroom! In this welcome session, we will give a very brief introduction to confidential computing and the devroom, and we will give an honorable mention to all the folks who contributed to this devroom, whether they are presenting or not.
Hardware extensions for confidential computing establish a strict trust boundary between a virtual machine and the host hypervisor. From the guest’s perspective, any interaction crossing this boundary must be treated as untrusted and potentially malicious. This places significant hardening demands on guest operating systems, especially around firmware interfaces, device drivers, and boot components.
This talk explores how COCONUT-SVSM can act as a trusted proxy between the hypervisor and the Linux guest, restoring trust in key firmware and memory-integrity interfaces. By offloading sensitive interactions to the SVSM, we can simplify guest OS hardening and provide a more secure boot process for confidential VMs.
Currently, confidential guests running under the QEMU hypervisor on SEV-SNP, SEV-ES, and TDX are not at par with non-confidential guests in terms of restartability. Once a confidential guest's initial state is locked in and its private memory pages are encrypted, its state is finalized and cannot be changed. This means that, in order to restart a confidential guest, a new confidential guest context must be created in KVM and the private memory pages re-encrypted with a different key. Today, upon restart the old QEMU process terminates, and the only way to achieve a reset on these systems is to instantiate a new guest with a new QEMU process.
Resettable confidential guests are important for reasons beyond bringing them on par with non-confidential guests. For example, they are a key requirement for implementing the F-UKI idea [1][2]. This talk will describe some of the challenges we have faced and our experiences implementing SEV-SNP and TDX guest reset in QEMU. A demo reflecting the current state of this work will be shown, and a link to the demo video will be shared. This will be a mostly QEMU-centric presentation, so we will also describe some fundamental concepts of confidential guest implementation in QEMU.
The work-in-progress patches on which the demo is based are available here [3]. These patches have been posted to the qemu-devel mailing list for review and inclusion into QEMU [4].
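To make the constraint concrete, here is a minimal Python sketch of the lifecycle described above. All class and function names are hypothetical illustrations, not QEMU or KVM APIs: the point is only that a finalized context cannot be modified, so a reset necessarily means a fresh context and a fresh memory key.

```python
# Minimal model of the confidential-guest lifecycle described above.
# All names are hypothetical illustrations, not QEMU/KVM APIs.
import secrets

class ConfidentialGuestContext:
    def __init__(self):
        self.memory_key = secrets.token_hex(16)  # per-context encryption key
        self.finalized = False

    def finalize_launch(self):
        # After launch measurement, the initial state is locked in.
        self.finalized = True

    def modify_initial_state(self):
        if self.finalized:
            raise RuntimeError("state is finalized; cannot be changed")

def reset_guest(old_ctx):
    # A reset must discard the old context entirely: a fresh context is
    # created and private pages are re-encrypted under a new key.
    new_ctx = ConfidentialGuestContext()
    assert new_ctx.memory_key != old_ctx.memory_key
    return new_ctx

ctx = ConfidentialGuestContext()
ctx.finalize_launch()
ctx = reset_guest(ctx)
```

Today QEMU achieves the equivalent of `reset_guest` only by tearing down the whole process; the patches aim to do it within a running QEMU.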
In this talk, I will first introduce Intellectual Property Encapsulation, the confidential computing feature of Texas Instruments MSP430 microcontrollers, and multiple vulnerabilities we have found in it. Then, I will propose two methods of mitigating these vulnerabilities: first, a software-only solution that can be deployed on existing devices; second, a standard-compliant reimplementation of the hardware on an open-source CPU with more advanced security features and an extensive testing framework.
Attacks and software mitigation: https://github.com/martonbognar/ipe-exposure
Open-source hardware design and security testing: https://github.com/martonbognar/openipe
Confidential computing is rapidly evolving with Intel TDX, AMD SEV-SNP, and Arm CCA. However, unlike TDX and SEV-SNP, Arm CCA lacks publicly available hardware, making performance evaluation difficult. While Arm's hardware simulation provides functional correctness, it lacks cycle accuracy, forcing researchers to build best-effort performance prototypes by transplanting their CCA-bound implementations onto non-CCA Arm boards and estimating CCA overheads in software. This leads to duplicated efforts, inconsistent comparisons, and high barriers to entry.
In this talk, I will present OpenCCA, our open research framework that enables CCA-bound code execution on commodity Arm hardware. OpenCCA systematically adapts the software stack—from bootloader to hypervisor—to emulate CCA operations for performance evaluation while preserving functional correctness. Our approach allows researchers to lift-and-shift implementations from Arm’s simulation to real hardware, providing a framework for performance analysis, even without publicly available Arm CPUs with CCA.
I will discuss the key challenges in OpenCCA's design, implementation, and evaluation, demonstrating its effectiveness through life-cycle measurements and case studies inspired by prior CCA research. OpenCCA runs on an affordable Armv8.2 Rockchip RK3588 board ($250), making it a practical and accessible platform for Arm CCA research.
https://github.com/opencca
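The emulate-for-performance idea can be sketched in a few lines of Python. This is a hypothetical illustration, not OpenCCA's actual mechanism (which lives in the bootloader-to-hypervisor stack): a CCA operation is serviced functionally by a stub on non-CCA hardware, while an estimated cost is recorded so performance can still be analyzed. The function names and cycle costs are invented for the example.

```python
# Sketch of emulating a CCA operation on non-CCA hardware: the call is
# serviced functionally by a stub, while an estimated cost is accumulated
# for performance evaluation. Names and cycle costs are hypothetical.
ESTIMATED_CYCLES = {"RMI_REALM_CREATE": 50_000, "RMI_DATA_CREATE": 4_000}

class EmulatedCca:
    def __init__(self):
        self.cycle_counter = 0
        self.realms = {}

    def rmi_call(self, fid, *args):
        # Account for the estimated hardware cost of the real operation.
        self.cycle_counter += ESTIMATED_CYCLES.get(fid, 1_000)
        if fid == "RMI_REALM_CREATE":
            rid = len(self.realms)
            self.realms[rid] = {"pages": []}
            return rid
        if fid == "RMI_DATA_CREATE":
            rid, page = args
            self.realms[rid]["pages"].append(page)
            return 0
        raise NotImplementedError(fid)

cca = EmulatedCca()
rid = cca.rmi_call("RMI_REALM_CREATE")
cca.rmi_call("RMI_DATA_CREATE", rid, 0x1000)
```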
Confidential Computing poses a unique challenge of Attestation Verification. The reason is, Attester in Confidential Computing is infact a collection of Attesters, what we call as Composite Attester. One Attester is a Workload which runs in a CC Environment, while the other Attester is the actual platform on which the Workload is executed. The two Attesters have separate Supply Chains (one been the Workload Owner deploying the Workload) while the Platform is a different Supplier, say Intel TDX or Arm CC. Another deployment could be a Workload been trained on a GPU (via means of Integrated TEE) attached to a CPU, to create an end-to-end secure environment. How can one trust such a Workload, along with the CPU which is feeding the training data to it?? To trust a Composite Attester, through remote attestation one needs multiple Remote Attestation Verifiers, for example one coming from CPU Vendor the other from a GPU Vendor. How do the Verifiers coordinate? Are there topological patterns of coordination that can be standardized.
The presentation will cover work done in IETF standards and in the open-source Project Veraison: 1. Composite Attesters; 2. remote attestation through multiple Verifiers; 3. open-source work in Project Veraison showing how composition of Attesters can be constructed in a standardized manner; 4. open-source work in Project Veraison showing how multiple Verifiers can coordinate to produce a combined attestation verdict for a Composite Attester.
Please see the following links: https://datatracker.ietf.org/doc/draft-richardson-rats-composite-attesters/
https://datatracker.ietf.org/doc/draft-deshpande-rats-multi-verifier/
Composition of Attesters using Concise Message Wrappers:
Golang Implementation:
https://github.com/veraison/cmw
Rust Implementation:
https://github.com/veraison/rust-cmw
Attestation results required for constructing compositional semantics:
Golang Implementation:
https://github.com/veraison/ear
Rust Implementation:
https://github.com/veraison/rust-ear
Verification of Composite Attesters (Arm CCA):
https://github.com/veraison/services
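As a rough illustration of producing a combined verdict for a Composite Attester, the sketch below takes the weakest per-component verdict across multiple Verifiers (say, a CPU Verifier and a GPU Verifier). This is a hypothetical sketch, not the Veraison or EAR API; the status names and ordering are illustrative only.

```python
# Hypothetical sketch: combine per-component verdicts from multiple
# Verifiers into one result for a Composite Attester. Status names and
# ordering are illustrative, not the EAR specification.
STATUS_ORDER = {"affirming": 2, "warning": 1, "contraindicated": 0}

def combine_verdicts(results):
    # The composite is only as trustworthy as its weakest component.
    return min(results.values(), key=lambda s: STATUS_ORDER[s])

results = {"cpu.tdx": "affirming", "gpu.tee": "warning"}
combined = combine_verdicts(results)  # -> "warning"
```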
We have released sample code for remote attestation on cloud confidential computing services, and I will report the lessons learned from it. https://github.com/iisec-suzaki/cloud-ra-sample The samples cover multiple types of Trusted Execution Environments (TEEs): (1) confidential VMs, including AMD SEV-SNP on Azure, AWS, and GCP, and Intel TDX on Azure and GCP; (2) TEE enclaves using Intel SGX on Azure; and (3) hypervisor-based enclaves using AWS Nitro Enclaves. As verifiers, the samples use both open-source attestation tools and commercial services such as Microsoft Azure Attestation (MAA). This talk aims to share these observations to support developers and researchers working with heterogeneous TEE environments and to help avoid common pitfalls when implementing remote attestation on cloud platforms.
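Across these heterogeneous TEEs, the verification logic follows the same shape. The simplified Python sketch below shows that common pattern with hypothetical field names; real evidence formats and signature checking differ per TEE (SNP attestation report, TD quote, SGX quote, Nitro attestation document) and are elided here.

```python
import hashlib, hmac

# Simplified sketch of the common verification pattern behind the samples:
# check freshness (nonce) and compare the reported measurement against a
# reference value. Field names are hypothetical; per-TEE evidence parsing
# and signature verification are deliberately omitted.
def verify_evidence(evidence, expected_nonce, reference_measurement):
    if not hmac.compare_digest(evidence["nonce"], expected_nonce):
        return False  # stale, replayed, or relayed evidence
    if not hmac.compare_digest(evidence["measurement"], reference_measurement):
        return False  # unexpected workload/firmware measurement
    return True

ref = hashlib.sha384(b"guest image").hexdigest()
evidence = {"nonce": "abc123", "measurement": ref}
assert verify_evidence(evidence, "abc123", ref)
```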
A decade after Intel SGX’s public release, a rich ecosystem of shielding runtimes has emerged, but research on API and ABI sanitization attacks shows that their growing complexity introduces new vulnerabilities. What is still missing is a truly minimal and portable way to develop enclaves.
In this talk, we will introduce our recent work on "bare-sgx", a lightweight, fully customizable framework for building SGX enclaves directly on bare-metal Linux using only C and assembly. The initial code was forked from the Linux kernel's selftests framework, and its development was explicitly encouraged by prominent kernel developers. By interfacing directly with the upstream SGX driver, bare-sgx removes the complexity and overhead of existing SGX SDKs and library OSes. The result is extremely small enclaves, often just a few pages, tailored to a specific purpose and excluding all other unnecessary code and features. Therefore, bare-sgx provides a truly minimal trusted computing base while avoiding fragile dependencies that could hinder portability or long-term reproducibility.
Although still young, bare-sgx aims to provide a long-term stable foundation for minimal-trust enclave development, reproducible research artifacts, and rapid prototyping of SGX attacks and defenses.
Attested TLS is a fundamental building block of confidential computing. We have defended our position (cf. expat BoF) to standardize the attested TLS protocols for confidential computing in the IETF, and a new Working Group named Secure Evidence and Attestation Transport (SEAT) has been formed to exclusively tackle this specific problem. We would like to present the candidate draft for standardization and gather feedback from the community, so that it can be accommodated in the standard. We also demonstrate that the alternative candidate, draft-fossati-tls-attestation, is vulnerable to diversion and relay attacks.
We propose a specification that defines a method for two parties in a communication interaction to exchange Evidence and Attestation Results using exported authenticators, as defined in RFC9261. Additionally, we introduce the cmw_attestation extension, which allows attestation credentials to be included directly in the Certificate message sent during the Exported Authenticator-based post-handshake authentication. The approach supports both the passport and background check models from the RATS architecture while ensuring that attestation remains bound to the underlying communication channel.
The work-in-progress implementation uses veraison/rust-cmw, an implementation of the RATS Conceptual Message Wrapper. It includes a test that demonstrates using it with QUIC (as transport) and Intel TDX (as the confidential computing platform): tests/quic_tdx.rs.
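The channel-binding property that defeats relay attacks can be sketched as follows. This is a hypothetical illustration: in the real protocol the binding is derived via RFC 9261 exported authenticators and the evidence is signed by TEE hardware, but the essential idea is that the value the Attester commits to depends on a per-connection secret, so evidence lifted from one connection fails verification on another.

```python
import hashlib, hmac

# Sketch of binding attestation evidence to the TLS channel: the nonce the
# Attester commits to is derived from a per-connection exporter secret, so
# evidence relayed from a different connection fails verification.
def evidence_nonce(exporter_secret, verifier_nonce):
    return hmac.new(exporter_secret, verifier_nonce, hashlib.sha256).hexdigest()

# Verifier side: recompute the binding from its own connection's exporter.
def check_binding(evidence, exporter_secret, verifier_nonce):
    expected = evidence_nonce(exporter_secret, verifier_nonce)
    return hmac.compare_digest(evidence["nonce"], expected)

conn_a = b"exporter-secret-of-connection-A"
conn_b = b"exporter-secret-of-connection-B"
ev = {"nonce": evidence_nonce(conn_a, b"n1")}
assert check_binding(ev, conn_a, b"n1")      # genuine channel accepted
assert not check_binding(ev, conn_b, b"n1")  # relayed evidence rejected
```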
This talk presents a practical approach to building a high‑assurance core infrastructure for home and small business environments, using modern open firmware on commodity server hardware.
As AI workloads move from cloud to on‑premise, the need for trustworthy and attestable hardware platforms for running models and handling sensitive data becomes critical. But what does "trustworthy" actually mean at the hardware/firmware level, and can we realistically achieve it with today’s platforms?
We will walk through how to build a system based on a modern AMD server board combined with open‑source firmware (coreboot[1] and OpenSIL[2]) to gain more control and transparency across the boot chain. We will discuss:
The goal is to show how open firmware can complement security and confidential computing features to create a platform you can actually inspect, reason about, and attest from top to bottom, rather than treating the hardware and firmware as opaque, trusted black boxes.
[1] https://www.coreboot.org/ [2] https://github.com/openSIL/openSIL