
API Remoting for llama.cpp: Near-Native GPU Speed in macOS Containers

Saturday, 31 January 2026, 11:00–11:20 (CET)

Running modern Large Language Model (LLM) workloads on macOS presents a unique challenge: reconciling powerful local hardware with a mature, Linux-first AI tooling and container ecosystem.

The Problem: Bridging the OS and Acceleration Gap

While containerization offers macOS developers access to Linux-centric tools like RamaLama and the Podman Desktop AI Lab, introducing a virtualization layer immediately compromises GPU acceleration. Direct device passthrough is infeasible, as it would monopolize the display and host system resources. Consequently, achieving high-performance LLM inference requires a sophisticated bridging mechanism.

The Solution: Para-Virtualization via GGML API Remoting

We present an API-remoting, para-virtualized execution path for llama.cpp. This novel design is built directly on the GGML API, allowing compute-intensive operations to be selectively offloaded for execution on the macOS host. Crucially, this execution path natively leverages the GGML-Metal backend, achieving full-speed host performance. The strategy preserves a fully containerized Linux developer experience inside the virtual machine while gaining hardware-level acceleration outside of it.
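To make the remoting pattern concrete, here is a minimal, self-contained C++ sketch of the core idea: the guest serializes an operation descriptor plus its tensor data, and the host decodes the message and dispatches it to its native backend. All names here (RemoteOp, encode_op, decode_and_run) are illustrative assumptions, not the real bridge; the actual implementation operates at the GGML backend-API level and transports buffers between the Linux VM and the macOS host.

```cpp
// Illustrative sketch of API remoting; RemoteOp, encode_op and
// decode_and_run are hypothetical names, not part of GGML.
#include <cstdint>
#include <cstring>
#include <cstdio>
#include <vector>

// Wire header for one offloaded operation (illustrative wire format).
struct RemoteOp {
    uint32_t op_id;    // which operation to run on the host, e.g. matmul
    uint32_t n_bytes;  // size of the tensor payload that follows
};

// Guest side: instead of computing locally, pack the op and its tensor
// data into a message destined for the host (here, just a byte vector).
std::vector<uint8_t> encode_op(uint32_t op_id, const float *data, size_t n) {
    RemoteOp hdr{op_id, (uint32_t)(n * sizeof(float))};
    std::vector<uint8_t> buf(sizeof(hdr) + hdr.n_bytes);
    memcpy(buf.data(), &hdr, sizeof(hdr));
    memcpy(buf.data() + sizeof(hdr), data, hdr.n_bytes);
    return buf;
}

// Host side: unpack the message and hand the tensors to the native,
// Metal-accelerated backend (stubbed here with a printf).
void decode_and_run(const std::vector<uint8_t> &buf) {
    RemoteOp hdr;
    memcpy(&hdr, buf.data(), sizeof(hdr));
    const float *data = (const float *)(buf.data() + sizeof(hdr));
    printf("host: executing op %u on %u bytes of tensor data\n",
           hdr.op_id, hdr.n_bytes);
    (void)data; // real code would pass this to the GGML-Metal backend
}

int main() {
    float x[4] = {1, 2, 3, 4};
    auto msg = encode_op(/*op_id=*/1, x, 4); // guest encodes the request
    decode_and_run(msg);                     // host executes it natively
}
```

The design choice this illustrates is that only coarse-grained compute requests cross the VM boundary, so the guest keeps the full Linux container workflow while the heavy lifting runs on host silicon.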

Performance and Implications

We will share concrete performance results demonstrating near-native inference speed compared to running llama.cpp directly on the host. The talk will examine the architectural trade-offs of this split-execution model, providing insight into:

* The implementation details of the para-virtualized GGML API bridge.
* The latency overhead of API remoting versus the throughput gain from host GPU execution (a back-of-envelope model follows this list).
* How this open-source approach changes the outlook for containerized and accelerated AI tooling on non-Linux platforms, specifically addressing the needs of macOS users in the open-source AI community.
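As a rough illustration of the latency/throughput trade-off (the figures below are assumed placeholders, not the measured results presented in the talk), a fixed per-token remoting cost only modestly reduces effective throughput when host execution dominates per-token time:

```cpp
// Back-of-envelope model with assumed numbers; the real measurements
// are what the talk reports.
#include <cstdio>

int main() {
    const double host_tok_per_s    = 60.0;   // assumed native Metal decode speed
    const double remote_overhead_s = 0.0005; // assumed per-token remoting cost (0.5 ms)

    double t_host   = 1.0 / host_tok_per_s;       // time per token natively
    double t_remote = t_host + remote_overhead_s; // time per token via remoting
    printf("native: %.1f tok/s, remoted: %.1f tok/s (%.1f%% of native)\n",
           1.0 / t_host, 1.0 / t_remote, 100.0 * t_host / t_remote);
}
```

With these assumed figures, a 0.5 ms per-token remoting cost keeps throughput above 97% of native, which is the intuition behind the "near-native" claim.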
