Virtually Attend FOSDEM 2026

Music Production Track

2026-02-01T13:15:00+01:00

What does it take nowadays to get the most out of your Linux system so that it can be used as a music production powerhouse? This talk will explore the possibilities and offer some guidelines for squeezing out as much headroom as your system has for all those resource-hungry plugins. Along the way some myths might get debunked and some helpful tools will be introduced.

During the talk I will walk through how to set up your system so it can do low-latency real-time audio. By low latency I mean round-trip latencies below 10 ms. I will show which tools can help your system perform better for music production. Such tools include rtcqs and Millisecond for finding and fixing possible bottlenecks, jack_iodelay or Ardour for measuring round-trip latencies, and xruncounter for DSP load stress tests.
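To put the 10 ms figure in perspective, the theoretical round-trip latency of a period-based audio setup follows directly from the buffer size, period count and sample rate. A back-of-the-envelope sketch (a common approximation only; jack_iodelay measures the real value, which also includes converter and driver delays):

```python
def round_trip_latency_ms(frames_per_period: int, periods: int, sample_rate: int) -> float:
    """Rough theoretical round-trip latency for a JACK-style setup:
    one full buffer (periods * frames) on the way in plus one on the
    way out. Hardware and driver overhead add to this in practice."""
    one_way = frames_per_period * periods / sample_rate
    return 2 * one_way * 1000.0

# 128 frames, 2 periods at 48 kHz -> about 10.7 ms round trip (borderline)
print(round(round_trip_latency_ms(128, 2, 48000), 1))
# 64 frames, 2 periods at 48 kHz -> about 5.3 ms, comfortably under 10 ms
print(round(round_trip_latency_ms(64, 2, 48000), 1))
```

This is why halving the buffer size is the usual first lever for latency, at the cost of higher xrun risk under DSP load.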

I will also look back briefly to 15 years ago, when I gave a similar talk at the Linux Audio Conference in Maynooth: what has changed since then, and what has improved? I will also glance a bit at the future, as the Linux Audio Conference will be held in Maynooth again this year and chances are I will dive deeper into this matter during that conference.

After the talk you will hopefully have a better grasp of the key factors for getting a better-performing machine with as few of those dreaded xruns as possible!

2026-02-01T13:40:00+01:00

This talk demonstrates how to build a wireless MIDI controller using Elixir, ESP32 microcontrollers, and AtomVM, proving that functional programming can run efficiently on resource-constrained embedded devices.

We'll explore how the BEAM VM's lightweight processes and message-passing model naturally fit embedded systems programming, particularly for real-time applications like MIDI. The session covers practical implementation details: WiFi connectivity, UDP networking, MIDI message generation, and interfacing with physical controls like knobs and faders on ESP32-C3 hardware with just 400 KB of RAM.
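As an illustration of how little is involved in networked MIDI at the wire level: a Note On message is three bytes, and sending it is one UDP datagram. This is a hypothetical Python sketch, not the project's Elixir code, and the host and port are placeholders:

```python
import socket

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI Note On message: status byte 0x90 | channel,
    then note number and velocity (each 0-127)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# Send middle C (note 60) at full velocity on channel 0 to a hypothetical
# receiver; loopback address and port 5004 are placeholders.
msg = note_on(0, 60, 127)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 5004))
```

On the receiving end a host only needs to parse those three bytes back out, which is part of why MIDI maps so naturally onto small message-passing systems.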

Attendees will learn about AtomVM's subset of the BEAM VM designed for microcontrollers and the potential for building distributed music applications. We'll discuss how networked MIDI enables new possibilities for multi-device music systems and collaborative performance setups built on BEAM's distributed computing capabilities.

The project is fully open source and demonstrates a compelling use case for Elixir beyond traditional web services, showing how the language's concurrency model excels in IoT and real-time embedded systems.

Links:
- GitHub: https://github.com/nanassound/midimesh_esp32
- Video demo: https://www.youtube.com/shorts/djaUUPquI_E

2026-02-01T14:05:00+01:00

Over the past years we developed Cardinal, an open-source Eurorack simulation audio plugin based on VCV Rack. It integrates over 1300 modules, is available under the GPL-3.0-or-later license, and comes in various plugin formats (LV2/VST2/VST3/CLAP/AU) and configurations (synth/fx/main).

In this talk we explain the reasons for starting the project and how we think it improves on the original Rack when running as an audio plugin. We will also showcase some tips and tricks for integrating with the plugin host, as well as some advanced use cases like running it on embedded hardware.

2026-02-01T14:30:00+01:00

MBROLA and eSpeak NG are two speech synthesizers that can be used as MIDI instruments. MBROLA has often been used for singing synthesis because it allows you to control timing and pitch via its text interface. It became free software in 2018. Before 2018 I was already listening to a lot of VOCALOID and UTAU music, and I began researching how to implement my own singing speech synthesizer by reading "An Introduction to Text-to-Speech Synthesis" by Thierry Dutoit (author of MBROLA) and many other papers related to VOCALOID and UTAU. With a deep understanding of how the MBROLA algorithm works, I began to implement my own independent singing voice synthesizer, with eSpeak as an optional frontend.
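For context on the text interface mentioned above: MBROLA's .pho input gives one phoneme per line with a duration in milliseconds followed by (position %, pitch Hz) pairs, which is what makes singing control possible. A minimal sketch of generating such lines from MIDI note numbers (the helper names are my own, not part of MBROLA):

```python
def midi_to_hz(note: int) -> float:
    """Convert a MIDI note number to frequency (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def pho_line(phoneme: str, duration_ms: int, midi_note: int) -> str:
    """One MBROLA-style .pho line: phoneme, duration in ms, then
    (position %, pitch Hz) pairs holding a flat pitch across the note."""
    hz = round(midi_to_hz(midi_note))
    return f"{phoneme} {duration_ms} 0 {hz} 100 {hz}"

# Sing "la" on A4 (MIDI 69, 440 Hz): a short consonant, then a held vowel.
print(pho_line("l", 100, 69))
print(pho_line("a", 400, 69))
```

Because pitch points are given as percentages of the phoneme's duration, vibrato or portamento can be expressed by emitting more pairs per line instead of a flat start/end pitch.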

2026-02-01T14:55:00+01:00

A couple of years ago I gave a presentation called "Become a rockstar using FOSS!": it was a clickbait-y title, since I'm (obviously) not a rockstar at all, but it was a nice opportunity to introduce people to the music production ecosystem on Linux, which is huge and yet not that well known to most. At the time, I mostly talked about the typical workflow for creating and recording music with either real or virtual instruments, but with a focus more on rock/pop music, in order to keep things simpler.

In this presentation I'll take a different point of view: how you can have a full symphonic orchestra on your laptop, write music for it, and then have it performed in ways that are hopefully realistic enough to be used within the context of your compositions (unless you know 80 people who can play your music for you, that is!). I'll present my typical workflow and the different pieces of software that made it possible for me to write purely classical music (symphonic poems), but also orchestral arrangements for songs in different genres (e.g., folk, progressive rock or metal) that I published as a hobby in my free time over the years.

Again, a clickbait title because I'm not really an orchestra composer... but FOSS definitely helped make me feel like one, and it can help you too!

2026-02-01T15:20:00+01:00

How to produce music with Linux/FLOSS professionally

Real penguins do not need apples to make music...

A case study on how an entirely Linux/FLOSS-based production chain can be a viable alternative to the proprietary/paid one(s). I will concentrate on the production of a pop song, from the draft to the full-fledged, platform-ready master.

Many topics will be briefly discussed: hardware, tools, practices, objectives, comparisons, and interoperability; all you need to know to get the job done professionally.


2026-02-01T15:40:00+01:00

JavaScript is a great language for its ease and low barrier to entry, fast turnaround workflows, and quick experiments. It's generally not so great for real-time tasks, such as music playback or working with live musicians.

And yet, that’s what this library does.

In this talk we look at how the midi-live-performer library can act as a real-time MIDI looper, echo unit, and auto-accompaniment system. There's a slight detour to show midi-info, which provides user-friendly names for MIDI messages, both in real time and not. Then we explain how it works, where the weaknesses in timing lie, and how it formed the basis for a solo recording of the multi-instrumentalist work "In C".
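To make the looper idea concrete, the core of any MIDI looper is recording timestamped events and replaying them modulo the loop length. Here is a toy model of that timing logic in Python; it is an illustration only, not the midi-live-performer API:

```python
class MidiLooper:
    """Record (timestamp, message) pairs and replay them on a fixed loop.

    A toy model of looper timing: events are stored relative to the loop
    start, and due_events() returns whatever falls inside the elapsed
    window each time the caller polls."""

    def __init__(self, loop_length: float):
        self.loop_length = loop_length  # loop duration in seconds
        self.events: list[tuple[float, bytes]] = []

    def record(self, t: float, msg: bytes) -> None:
        self.events.append((t % self.loop_length, msg))

    def due_events(self, window_start: float, window_end: float) -> list[bytes]:
        """Messages whose loop position falls in [window_start, window_end)."""
        s = window_start % self.loop_length
        e = window_end % self.loop_length
        if s <= e:
            return [m for (p, m) in self.events if s <= p < e]
        # the polling window wraps around the loop boundary
        return [m for (p, m) in self.events if p >= s or p < e]

looper = MidiLooper(loop_length=2.0)
looper.record(0.5, b"\x90\x3c\x64")  # note on, 0.5 s into the loop
print(looper.due_events(2.4, 2.6))   # the recorded note replays on pass two
```

The weak point, as in any event-loop host, is that replay accuracy depends on how often and how punctually due_events() gets polled, which is exactly where JavaScript timers make things interesting.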

2026-02-01T16:00:00+01:00

Over the past few years, I've been prototyping PAW, a DAW based on ideas from live coding and bidirectional programming. As with live coding, in PAW you write code to describe a piece of music incrementally. As part of this, you also build a GUI for direct manipulation of that same code, providing similar affordances to traditional DAWs.

PAW stems from my observations that regular DAWs tend to be limited in what they let users do, due to fundamental limitations with traditional GUIs. I believe that mixing in ideas from live coding, and programming at large, can help savvy users shed those limitations, while retaining familiar GUI affordances and usability.

The software is open source, but not yet quite usable. My goal with this talk is to share ideas with other people in the field of music production software.

2026-02-01T16:20:00+01:00

Music ensembles are moving from sheet music to tablets with PDFs. Many apps exist, but all focus on individual musicians, not on bands. Rehorse is a web app with offline support that can be self-hosted by a band. The band librarian makes the music available to the band members. They can annotate the sheet music and practice along with recordings using convenient section repeats. The app lists rehearsal and concert playlists and has (optional) access management so that members do not download all parts, but e.g. only those for their section.

Recordings and sheet music are stored offline so the sheet music is available even at performances where no network is available.

Rehorse has been under development and in use in several bands for years. This talk invites new users and potential contributors. It goes into the workflows that musicians expect and the high standards that they are used to from other apps.

https://codeberg.org/vandenoever/rehorse

2026-02-01T16:45:00+01:00

Faircamp is a static site generator for audio producers - a system that creates fast, maintenance-free and indestructible websites for music, podcasts and other types of audio on the web. In this talk I will introduce you to Faircamp - where it comes from, what it offers, and where the project is heading in 2026 and beyond.

At the end of this talk I would also like to introduce you to other projects and communities that are working hard to bring back dignity, agency, control and a chance for a livelihood to independent musicians, podcasters, labels and audio producers all over the world.

Links:
- Website
- Article: «Faircamp lets you put sound, podcasts, and music on your own site» (CDM)
- Blog post: «2026 will be the year of the Faircamp desktop application»