systemd supports a number of integration features that give VMMs access to the inner state of VM guests for provisioning, synchronization and interaction. Many of them are little known, even though they are very useful. In this talk I'd like to shed some light on these integration points, such as SMBIOS type 11 based system credential provisioning, state propagation and readiness notification via AF_VSOCK, SSH access via AF_VSOCK, and more.
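As a flavour of what the readiness-notification integration looks like from inside a guest, here is a minimal Rust sketch (our own illustration, not material from the talk) of the sd_notify datagram protocol spoken over AF_VSOCK via the libc crate. The port number is an arbitrary placeholder; in real deployments systemd discovers the target endpoint itself, for example via a credential such as vmm.notify_socket with a value like vsock:2:<port>, and stream/seqpacket vsock variants also exist.

```rust
// Minimal sketch of guest-side readiness notification over AF_VSOCK,
// i.e. the sd_notify payload sent to the host/VMM instead of a local
// UNIX socket. Assumptions: Linux guest, the `libc` crate, and a VMM
// listening on vsock port 1234 (an arbitrary placeholder).
use std::io;
use std::mem;
use std::os::fd::{AsRawFd, FromRawFd, OwnedFd};

fn notify_ready(port: u32) -> io::Result<()> {
    // AF_VSOCK datagram socket; CID 2 addresses the host side.
    let raw = unsafe { libc::socket(libc::AF_VSOCK, libc::SOCK_DGRAM, 0) };
    if raw < 0 {
        return Err(io::Error::last_os_error());
    }
    let fd = unsafe { OwnedFd::from_raw_fd(raw) };

    let mut addr: libc::sockaddr_vm = unsafe { mem::zeroed() };
    addr.svm_family = libc::AF_VSOCK as libc::sa_family_t;
    addr.svm_cid = libc::VMADDR_CID_HOST;
    addr.svm_port = port;

    let msg = b"READY=1"; // the same payload sd_notify(3) would send
    let rc = unsafe {
        libc::sendto(
            fd.as_raw_fd(),
            msg.as_ptr().cast(),
            msg.len(),
            0,
            (&addr as *const libc::sockaddr_vm).cast(),
            mem::size_of::<libc::sockaddr_vm>() as libc::socklen_t,
        )
    };
    if rc < 0 {
        return Err(io::Error::last_os_error());
    }
    Ok(())
}

fn main() -> io::Result<()> {
    notify_ready(1234) // placeholder port, see above
}
```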
Modern confidential computing technologies like AMD SEV-SNP and Intel TDX provide a reliable way to isolate guest workloads and data in use from the virtualization or cloud infrastructure. Protecting data at rest, however, is not something you get ‘by default’. The task is particularly challenging for traditional operating systems, where users expect a full read/write experience.
The good news is that Linux already offers a number of great technologies that can be combined to achieve this goal: dm-verity and dm-integrity, LUKS, discoverable disk images and others. Doing it all right, however, is left as an “exercise for the reader”. In particular, the proposed solution must allow for meaningful remote attestation at any point in the lifetime of the guest.
The talk will focus on recent developments in upstream projects like systemd and dracut that are aimed at making full disk encryption consumable by confidential computing guests running in the cloud.
It has been several years since the last rust-vmm update at FOSDEM, but the community has continued to grow. Our goal remains the same: to provide reusable Rust crates that make it easier and faster to build virtualization solutions.
This talk will present the main progress and achievements from the past few years. We will review how rust-vmm crates integrate into a variety of projects such as Firecracker, Cloud Hypervisor, Dragonball, and libkrun. We will discuss the ongoing efforts to consolidate all crates into a single monorepo and how we expect this to simplify development and releases. The talk will also cover recent work supporting new architectures like RISC-V and additional operating systems. Finally, we will review the support for virtio and vhost-user devices that can be used by any VMM.
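To give a flavour of what "reusable crates" means in practice, here is a minimal sketch (our own illustration, not the speakers' code) built on the vm-memory crate with its backend-mmap feature, one of the building blocks the VMMs above share for modelling guest RAM; the address and size are arbitrary.

```rust
// Minimal sketch using rust-vmm's `vm-memory` crate.
// Cargo.toml: vm-memory = { version = "*", features = ["backend-mmap"] }
use vm_memory::{Bytes, GuestAddress, GuestMemoryMmap};

fn main() {
    // One 64 KiB region of guest-physical memory starting at GPA 0,
    // backed by an anonymous mmap on the host.
    let mem: GuestMemoryMmap<()> =
        GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 64 * 1024)])
            .expect("failed to map guest memory");

    // Device models read and write guest RAM through the `Bytes` trait.
    mem.write_obj(0xdead_beef_u32, GuestAddress(0x100)).unwrap();
    let val: u32 = mem.read_obj(GuestAddress(0x100)).unwrap();
    assert_eq!(val, 0xdead_beef);
}
```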
QEMU 10.2 will introduce MSHV as a new accelerator option for Linux hosts.
MSHV is a kernel driver maintained by Microsoft's Linux System Group that aims to expose Hyper-V capabilities to users in various virtualization topologies: on bare metal, in nested virtualization and, most recently, via a new model called "Direct Virtualization".
Direct Virtualization will allow owners of an L1 VM to commit parts of their assigned resources (CPU, RAM, peripherals) to virtual L2 guests that are technically L1 siblings. Users can take advantage of the hypervisor's isolation boundaries without the performance and functional limitations of a nested guest. Untrusted code can be sandboxed with near-native performance and access to GPUs or NVMe controllers.
Adding support for MSHV acceleration to QEMU aims to broaden the reach of this technology to a Linux audience. The talk will cover the current state of the implementation, challenges that remain and future plans for both MSHV and QEMU.
VIRTIO is the open standard for virtual I/O, supported by a wide range of hypervisors and operating systems. Typically, device emulation is performed directly inside the Virtual Machine Monitor (VMM), like QEMU. However, modern virtualization stacks support multiple implementation models: keeping the device in the VMM, moving it to the kernel (vhost), offloading it to an external user-space process (vhost-user), or offloading it directly to the hardware (vDPA).
Each approach comes with specific trade-offs. Emulating in the VMM is straightforward but can be a bottleneck. In-kernel emulation offers high performance but increases the attack surface of the host system. External processes provide excellent isolation and flexibility, but introduce complexity. Finally, vDPA (vhost Data Path Acceleration) enables wire-speed performance with standard VIRTIO drivers, but introduces hardware dependencies.
So, how do we decide which approach is best for a specific use case?
In this talk, we will explore all four methods for emulating VIRTIO devices. We will analyze the architectural differences, discuss the pros and cons regarding performance and security, and provide guidance on how to choose the right architecture for your use case.
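To make the common ground concrete: whichever of the four models you pick, the guest driver and the device implementation meet at the same guest-visible virtqueue layout defined by the VIRTIO specification. Below is a minimal Rust sketch (our own illustration) of the split-queue descriptor format that every backend, from VMM-internal emulation to vDPA hardware, has to parse.

```rust
// Minimal sketch of the VIRTIO 1.x split-virtqueue descriptor (16 bytes,
// per the VIRTIO spec). All four implementation models consume this same
// structure; only the process, kernel or hardware parsing it differs.

/// Buffer continues via the `next` field.
const VIRTQ_DESC_F_NEXT: u16 = 1;
/// Buffer is device write-only (otherwise device read-only).
const VIRTQ_DESC_F_WRITE: u16 = 2;

#[repr(C)]
#[derive(Clone, Copy, Debug)]
struct VirtqDesc {
    addr: u64,  // guest-physical address of the buffer
    len: u32,   // buffer length in bytes
    flags: u16, // VIRTQ_DESC_F_* flags
    next: u16,  // index of the chained descriptor, if F_NEXT is set
}

fn main() {
    // Layout check: the spec fixes the descriptor at 16 bytes.
    assert_eq!(core::mem::size_of::<VirtqDesc>(), 16);

    let desc = VirtqDesc { addr: 0x1000, len: 4096, flags: VIRTQ_DESC_F_WRITE, next: 0 };
    println!("{desc:?} chained: {}", (desc.flags & VIRTQ_DESC_F_NEXT) != 0);
}
```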
This talk shows how a Raspberry Pi can run a complete open-source cloud using OpenNebula. With MiniONE handling the installation and KVM doing the virtualization, a Raspberry Pi becomes a small but fully functional cloud node capable of running VMs, containers, lightweight Kubernetes clusters and edge services. The goal is simple: demonstrate that homelab users can build a full cloud stack with compute, networking, storage and orchestration on affordable hardware using only open-source tools. A short demo will show a VM launching on a Pi-based OpenNebula cloud, highlighting how the platform scales down to tiny devices while keeping the same clean and unified experience found on larger deployments.
KubeVirt virtualized applications are often distributed across multiple clusters, for reasons like disaster recovery, scaling, or hybrid cloud. To provide seamless Layer 2 connectivity and mobility for them, we integrated OpenPERouter, an open-source project that provides EVPN-based VXLAN overlays and solves the critical need for distributed L2 networking.
OpenPERouter's declarative APIs and dynamic BGP-EVPN control plane enable L2 networks to stretch transparently between clusters, maintaining VM MAC/IP consistency during migrations and disaster recovery. This architecture facilitates deterministic cross-cluster live migrations, better supports legacy workloads needing broadcast/multicast, and enables migrating workloads into the KubeVirt cluster while preserving original networking using open components. Routing domains are also supported for traffic segregation and to provide direct routed ingress to VMs, eliminating the need for Kubernetes services to expose ports.
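For readers unfamiliar with the encapsulation involved: a VXLAN overlay carries each tenant's Ethernet frames inside UDP, keyed by a 24-bit VNI that keeps one stretched L2 segment separate from another. A minimal Rust sketch of that 8-byte header (per RFC 7348), purely as our own illustration and not OpenPERouter code:

```rust
// Minimal sketch of the VXLAN header (RFC 7348): 8 bytes prepended to the
// original Ethernet frame inside a UDP packet. The 24-bit VNI identifies
// the stretched L2 segment across the overlay.
const VXLAN_FLAG_VNI_VALID: u8 = 0x08; // "I" bit: VNI field is valid

fn vxlan_header(vni: u32) -> [u8; 8] {
    assert!(vni < (1 << 24), "VNI is a 24-bit value");
    let mut hdr = [0u8; 8];
    hdr[0] = VXLAN_FLAG_VNI_VALID; // flags byte; remaining bits reserved
    hdr[4..7].copy_from_slice(&vni.to_be_bytes()[1..]); // VNI, 3 bytes, network order
    // byte 7 is reserved and stays zero
    hdr
}

fn main() {
    let hdr = vxlan_header(100);
    println!("VXLAN header for VNI 100: {hdr:02x?}");
}
```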
Attendees will gain the practical knowledge to design and implement resilient, operationally safe, EVPN-based overlays with OpenPERouter, receiving actionable design patterns and configuration examples.
"GPU clouds" for AI application are the hot topic at the moment, but often these either end up being just big traditional HPC-style cluster deployments instead of actual cloud infrastructure or are built in secrecy by hyperscalers.
In this talk, we'll explore what makes a "GPU cloud" an actual cloud, how its requirements differ from traditional cloud infrastructure, and most importantly, how you can build your own using open-source technology - all the way from hardware selection (do you really need to buy the six-figure boxes?) through firmware (OpenBMC), networking (SONiC, VPP), storage (Ceph, SPDK), orchestration (K8s, but not the way you think), OS deployment (mkosi, UEFI HTTP netboot), virtualization (QEMU, vhost-user) and performance tuning (NUMA, RDMA), to various managed services (load balancing, API gateways, Slurm, etc.).
In addition to the purely technical side, we'll also go into some of the non-technical challenges of actually running your own infrastructure and how to decide whether this is something that's actually worth doing yourself.
What if your container image were a few megabytes instead of hundreds of megabytes? WebAssembly (WASM) offers a radically lighter approach to running workloads on Kubernetes — right alongside your existing containers. In this talk, we'll dive deep into how WASM modules using the WebAssembly System Interface (WASI) integrate into Kubernetes through containerd shims like runwasi. Using a Rust example, we'll demonstrate the dramatic reduction in image size and startup time compared to traditional containers. We'll explore the current state of WebAssembly in the cloud-native ecosystem: what's production-ready today, and where you should wait before adopting. Beyond the basics, we'll look at real-world Cloud Native Computing Foundation (CNCF) projects already running WASM in production and discuss the two areas where WebAssembly shines: plugin architectures that benefit from small, secure, sandboxed extensibility, and event-driven systems that can quickly scale from zero. Whether you're optimizing for resource efficiency or exploring new isolation patterns, this session provides insights into WebAssembly on Kubernetes and serves as a great starting point.
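As a taste of the kind of Rust-to-WASI example the talk refers to (our own minimal sketch, not the speaker's actual demo): an ordinary std program compiled for the wasm32-wasip1 target (wasm32-wasi on older toolchains) yields a module that is typically well under a megabyte, which is where the image-size comparison comes from.

```rust
// Minimal sketch of a Rust program targeting WASI. Build with, e.g.:
//   rustup target add wasm32-wasip1
//   cargo build --release --target wasm32-wasip1
// then run the resulting .wasm under a WASI runtime, or package it for a
// runwasi-backed Kubernetes runtime class. Illustration only.
fn main() {
    // WASI provides stdio, args/env, clocks and pre-opened directories,
    // so plain std code like this runs unchanged.
    let who = std::env::args().nth(1).unwrap_or_else(|| "Kubernetes".to_string());
    println!("Hello, {who}, from a WASI module!");
}
```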
With KubeVirt, Virtual Machines become first-class citizens in Kubernetes, allowing VMs and containers to be managed through a unified control plane. As organizations evolve their infrastructure, many early adopters face the challenge of upgrading systems without disrupting essential services. To support this, KubeVirt provides powerful mobility capabilities for both compute and storage.
KubeVirt enables non-shared storage live migration through QEMU's block migration feature, which is orchestrated by libvirt and KubeVirt components. The core process involves copying the VM's disk data and memory state to the destination node while the VMI remains running.
Cross-Cluster Live Migration (CCLM) extends this capability across Kubernetes clusters, allowing a running VM to be moved seamlessly between clusters. This enhances flexibility and resilience in multi-cluster environments and is especially useful for load balancing, maintenance operations, and infrastructure consolidation – all without interrupting critical workloads. CCLM requires L2 network connectivity between clusters and compatible CPU architectures.
Storage Live Migration (SLM) allows you to move the VM’s disk data from one storage backend to another while the VM remains running. This is particularly valuable when rebalancing storage usage, retiring legacy systems, or adopting new storage classes – all without disrupting the applications inside the VM. SLM requires at least two compatible nodes.
Both CCLM and SLM work with any storage backend, including those using the ReadWriteOnce access mode.
After the session, you’ll be ready to migrate your running VMs – across clusters and across storage – with confidence, like a seasoned scheduler placing pods exactly where they need to be.
Lima (Linux Machines) is a command line tool to launch a local Linux virtual machine, with the primary focus on running containers on a laptop.
While Lima was originally made to promote containers (particularly containerd) to Mac users, it has proven useful for a variety of other use cases as well. One of the more cutting-edge use cases is running an AI coding agent inside a VM, in order to isolate the agent from direct access to host files and commands. This setup ensures that even if an AI agent is deceived by malicious instructions found on the Internet (e.g., fake package installations), any potential damage is confined to the VM, or limited to the files explicitly mounted from the host.
This talk introduces the updates in Lima v2.0 (November 2025) that facilitate using Lima with AI:
- Plugin infrastructure
- GPU acceleration
- MCP server
- CLI improvements
Web site: https://lima-vm.io
GitHub: https://github.com/lima-vm/lima
In this session, we will present a new extension to Prowler, the widely adopted open-source cloud security auditing tool, adding native support for the OpenNebula cloud management platform.
Our contribution delivers a modular, non-intrusive, and scalable auditing framework that integrates essential services and a growing catalogue of security checks aligned with established reference standards. This extension enables operators to detect misconfigurations and vulnerabilities more effectively, strengthening the overall security posture of OpenNebula deployments.
We will walk through the design and implementation of the tool, share validation results from real test scenarios, and outline how this effort helps democratize cloud security within the open-source ecosystem. Finally, we will discuss opportunities for community-driven collaboration to expand and evolve this new security auditing capability.
KubeVirt allows running VMs and containers on Kubernetes, but traditional Kubernetes networking - which uses NAT (Network Address Translation) to expose workloads outside the cluster - can still lead to complex, opaque, and brittle setups that prevent direct integration and reachability.
This presentation introduces a BGP-based solution to simplify KubeVirt networking. Kubernetes nodes dynamically exchange routes with the provider network, exposing workloads via their actual IPs, eliminating NAT and manual configurations.
This BGP approach simplifies network design, speeds up troubleshooting, and ensures consistent connectivity for virtualized workloads.
Attendees will learn practical, standard networking principles to simplify real-world Kubernetes environments and gain immediate, actionable insights to improve platform connectivity.
Platform engineering teams tackle complex, multi-domain challenges, balancing governance with the need to iterate quickly and enable developers. In this session we’ll detail how SUSE IT uses Kubewarden as a policy controller across both RKE2 and SUSE Virtualization environments. We’ll show how enforcing organizational policies with Kubewarden automatically integrates compliance and operational excellence into the core of the platform. We’ll discuss practical examples, e.g., how to restrict usage of resources, GPUs or VLANs to specific customers while providing the platform to a wider audience.
https://www.kubewarden.io/
https://www.rancher.com/
https://docs.rke2.io/
In this talk, we will present the current state of remote VM access in KubeVirt [0] and the challenges associated with it. We will discuss in-guest approaches such as running an RDP server on Windows or Linux, as well as host-side mechanisms like QEMU’s built-in VNC server exposed through KubeVirt’s virt-api. Finally, we will introduce a new proposal that leverages QEMU’s display-over-D-Bus interface [1], a direction that could enable additional vendors to build their own remote-display integrations.
[0] https://kubevirt.io/
[1] https://www.qemu.org/docs/master/interop/dbus-display.html
Serving large video diffusion models to multiple concurrent users sounds challenging, until you partition a GPU correctly.
This talk is a deep technical exploration of running large-scale video generation inference on modern GPUs across Hopper and Blackwell with Multi-Instance GPU (MIG) isolation.
We'll explore:
Whether you're building a multi-tenant inference platform, optimizing GPU utilization for your team, or exploring how to serve video diffusion models cost-effectively, this talk provides practical configurations for your AI workloads.
Virtualization has transformed low-level debugging, system analysis, and malware research. By placing a thin hypervisor beneath the OS, developers gain a vantage point the OS cannot access. This blue-pill approach enables fine-grained control over CPU state, memory, interrupts, and hardware events without relying on OS components, supporting transparent breakpoints, VM-exit triggers, memory shadowing, and instruction tracing with minimal interference.
We present HyperDbg, an open-source hypervisor-based debugger. Leveraging these characteristics, and unlike kernel debuggers that depend on drivers, APIs, or software breakpoints, HyperDbg operates entirely below the OS, combining virtualization-based introspection with interactive debugging. It inspects memory and CPU execution and traps events without OS cooperation, bypassing anti-debugging and anti-analysis techniques.
Using modern virtualization extensions like Mode Based Execution Control (MBEC) on top of Second Level Address Translation (SLAT), HyperDbg enforces breakpoints and traps through hardware transitions, independent of OS APIs or exceptions. This allows stealthy, artifact-free binary analysis, providing a powerful platform for reverse engineering and research. In its first iteration, HyperDbg introduced a hypervisor-powered kernel debugger. With the recent release of v0.15, HyperDbg enables cross-boundary debugging from kernel-mode into user-mode. In this talk, we will focus in particular on how we implemented cross-boundary debugging and how it enables users to intercept user-mode process execution using virtualization techniques.
Resources:
- HyperDbg repository: https://github.com/HyperDbg/HyperDbg/
- Documentation: https://docs.hyperdbg.org/
- Kernel-mode debugger design: https://research.hyperdbg.org/debugger/kernel-debugger-design/
- Research paper: https://dl.acm.org/doi/abs/10.1145/3548606.3560649