We need to talk about war. And we need to talk about companies building bots that propose to rewrite our source code. And about the people behind both, and how we preserve what is great about FOSS while avoiding disruption. How do geopolitical conflicts on the one hand and the risk of bot-generated (adversarial) code on the other influence the global community working together on Free and Open Source software?
The immense corpus of free and open source software created by a global community of researchers and engineers, developers, architects and designers is an impressive achievement of human collaboration at an unprecedented scale. An even bigger circle of users, translators, writers, creatives, civil society advocates, public servants and private sector stakeholders has helped to further develop and spread this technological Gesamtkunstwerk far and wide - with the help of the internet and the web. With individual freedoms and user empowerment at their center, these jointly created digital public goods have removed many economic and societal barriers for a large part of the world's population. Users are not just allowed to benefit from technology: each and every user can in principle actively help shape it. On top of the FOSS ecosystem, our global economy has been propelled to unprecedented levels.
Much of this incredible growth was achieved within a (relatively) calm geopolitical situation, in the wake of the cold war, which ended in the very same year that saw the genesis of the World Wide Web at CERN in Switzerland. Economists, philosophers and other observers at the time spoke of the 'end of history' and expected no more big conflicts at the superpower level. We could now globalise the economy and all work together. The flood of innovation taking place all around us promised a bright future for all, with room for altruism and collaboration. In retrospect it was certainly an ideal situation for an optimistic and constructive global movement like the FOSS community to take the helm.
But apart from the fact that this narrative was already flawed under the surface (with some actors, like the USA, having a double agenda, as the Snowden and Shadow Brokers revelations exposed), history didn't end. To some ironic extent we are now becoming victims of our own success. In recent years we've seen geopolitical stability punctured by war efforts leveraging low-cost technology that includes heaps of FOSS. Social media powered by FOSS infrastructure promote disinformation and have successfully stirred up large-scale polarisation. Within some of the largest and most populous countries on the planet, authoritarian regimes have successfully used technology to entrench oppression in a new race towards totalitarianism. While Europe, for instance, has tried to regulate 'dual use' technology, "any use" technology (which our libre licenses guarantee) has escaped our attention. Even in countries with stable non-authoritarian regimes there is a visible technology-assisted relapse towards anti-democratic movements. On the back of a tech stack which consists of FOSS with a thin crust of proprietary special sauce, unprecedented private capital (sometimes referred to as 'hypercapitalism') is interfering with global politics at an alarming rate. Apart from the direct democratic imbalance, the resulting oligarchy is giving rise to overt nepotism, corruption and a new global protectorate for predatory business models and unethical extractive behaviour. Expecting peace in cyberspace any time soon is probably naive, and free and open source technology stands to make up a significant part of the battleground.
At the same time we are facing other challenges, such as climate change and an imminent scarcity of non-renewable resources. We have more people living on the surface of the planet than ever before, and they are consuming more raw materials and more energy than ever. This won't go on indefinitely. And right at that point we see an army of next generation Trojan horses galloping through the gates of our global commons villages, accelerating our use of both. Generative pre-trained transformers (also known as Large Language Models) kindly offer to take cumbersome and boring coding work off our hands. They can liberate us from responsibility and allow us to do other things or move even faster.
But is it really wise to accept this apparent gift, or should we be a little more suspicious? Just as it has proven far too easy for AI to poison the web with fake content, our software supply chain is vulnerable to manipulation. The attack surface is immense, and the inherent complexity of software makes manipulation easier to achieve and harder to detect before it is too late. While many talented and committed people have spent years reverse engineering binary blobs to avoid the associated risks, those blobs were at least isolated and clearly marked. AI is the ultimate black box, and it introduces significantly more uncertainty: it rewrites the truth from the inside.
AI in its current form has no actual sense of truth or ethics. As in Russian roulette, once in a while the models completely bork up, creating phantom code and real risk - and that is the best case scenario, without assuming malicious intent and manipulation from the outside. In an adversarial scenario (and this adversity can come from traditional nation state actors with non-aligned interests, but also from corporations or even determined private individuals - as Cambridge Analytica illustrated so vividly), manipulation only requires subtle changes. Given the frantic scale at which any available learning content is ingested from the internet these days, one can expect targeted adversarial training - planting subtle triggers in specific code - to go unnoticed.
As a community we have spent billions of hours of careful coding and software engineering to make free and open source technology as trustworthy as it is today. Geopolitical conflict is an incentive to hollow out that trust. AI is an additional leap of faith, and if you look at the forces driving its adoption and their interests, are we really sure those black boxes are safe to invite into our trusted code base? It is clear that the end game of AI coding is not a healthy FOSS ecosystem, but its total displacement. The threats of machine-crafted and human-crafted malicious code in wartime FOSS are equally realistic. Perhaps we can find a middle ground, where we combine AI and human skill - and add enough checks and balances, plus a variety of assurances through compartmentalisation, formal and symbolic proofs and other traditional means of quality assurance.
This talk is an open exploration of some of the challenges the FOSS community will have in the years ahead, working towards a hopeful notion of maximal defendable FOSS.
In 2025, the Git project turned 20 years old, and in those 20 years it has taken the world of version control by storm: nowadays, almost every developer uses Git. But that doesn't mean that Git is perfect and "done", or even close to it. It still has many warts: a rough user experience, arbitrary limitations, performance issues and poor support for large binary files are just some of the things users commonly complain about.
In this talk you'll learn what is happening in the Git project to address these issues. The talk will cover both recent additions to Git that make your life easier, as well as ongoing development that is expected to land in the not-too-distant future.
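One concrete example of work in this vein (whether the talk covers this particular feature is my assumption) is partial clone, a real Git capability that tackles the large-repository pain point by fetching blob contents lazily. A self-contained sketch against a throwaway local repository:

```shell
# Sketch: a "blobless" partial clone. The repo and file names are invented
# for illustration; --filter=blob:none itself is a real Git option.
set -e
work=$(mktemp -d); cd "$work"

# Create a small "server" repository containing one committed file.
git init -q source
git -C source config user.email demo@example.com
git -C source config user.name demo
echo 'pretend this is a huge binary asset' > source/asset.bin
git -C source add asset.bin
git -C source commit -q -m "add large asset"
git -C source config uploadpack.allowFilter true  # let clients request filters

# Clone the history without blob contents; blobs are fetched on demand later.
git clone -q --filter=blob:none "file://$work/source" partial
git -C partial log --oneline
```

The clone carries full commit and tree history, but file contents are only downloaded when a checkout or diff actually needs them, which is exactly the kind of quality-of-life improvement the talk promises for huge repositories.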
Mercurial is a Distributed Version Control System created in 2005.
The project has been constantly active since then, fostering modern tooling, introducing new ideas, spawning multiple recent tools from its community, keeping itself competitive, and sustaining funding for its development. Nowadays, however, most people we encounter remember Mercurial for losing the popularity battle to its sibling Git in the 2010s and think the project is dead.
This talk confronts this paradox. How did Mercurial get itself in such a situation? What can everyone learn from it? What does this mean for the future of version control?
Using our first-hand knowledge of Mercurial's history, we look at a selection of events, contributor profiles, and technical and community aspects, to see how they've affected the project's course.
We will focus on topics that we have been asked about most frequently, such as:

* How has Mercurial weathered the Git storm?
* What impact has Mercurial had on your life, unbeknownst to you?
* How has the involvement of behemoth companies reshaped the project?
* What brings people to Mercurial in 2025?
Finally, we leverage the knowledge extracted from our past, to assess the present state of version control, try to predict its future, and highlight how community-based open-source remains as relevant as ever.
Git is a tool most programmers rely on, whether for work or personal projects. It’s far more than just a method for syncing local and remote changes. Git embodies a way of thinking that serves as the foundation for development workflows and steers project evolution.
At its core, Git has essential concepts such as commits, change history, branching, rebasing, and merging. While Git offers many features, these are its heart. Misusing them can lead to significant opportunity costs, while striking the right balance simplifies development at all levels and benefits the project’s community (if it has any).
In this talk I will share my own experience of how applying these core concepts in real projects significantly accelerates development, especially in mission-critical systems. I’ll cover the following topics, with real examples from my workplaces, both open and closed source:
All of that combined into a framework that I call "Atomic Flow".
Many teams enforce strict Git practices based on these key principles, and for good reason. I’ve worked on projects that fully harnessed Git’s potential from the outset, as well as those that initially overlooked its strengths but later embraced them. My goal is to help more teams achieve greater efficiency by adopting these best practices, provided their project depends heavily on uncompromising code quality and easy maintenance. This is what "Atomic Flow" is about.
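To make the idea concrete, here is a minimal, self-contained sketch of what "atomic" commits look like in practice. The repository, file names and commit messages are invented for illustration and are not taken from the talk:

```shell
# Create a throwaway repository to demonstrate splitting unrelated edits
# into separate, self-contained ("atomic") commits.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo 'parser v1' > parser.c
git add parser.c
git commit -q -m "feat: add config parser"

# Two unrelated edits happen to land in the working tree together...
echo 'parser v2' > parser.c
echo 'project readme' > README.md

# ...but each is staged and committed as its own logical change:
git add parser.c  && git commit -q -m "fix: handle empty config files"
git add README.md && git commit -q -m "docs: add README"

git log --oneline   # three commits, each reviewable and revertable on its own
```

In day-to-day work the splitting is usually done interactively with `git add -p`, and history is tidied with `git rebase -i` before sharing; the snippet above keeps everything non-interactive so it can run as a script.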
Does your project get pull requests that you dread reviewing? Have you ever submitted a pull request that got ignored by project maintainers?
Putting together a pull request that presents proposed changes in a clear, well-organized way is nearly impossible for newer contributors to do on their own. Maintainers must take the lead in providing specific guidelines for pull requests for their project.
This talk will give maintainers a toolkit for teaching contributors how to produce PRs they’ll love to review. It’s derived from our experience onboarding hundreds of contributors to the Zulip open-source team chat project (https://github.com/zulip). I’ll cover:
Key takeaways for current and future project maintainers:
Key takeaways for contributors:
The state of the internet, c. 1990:
The state of the internet, c. 2025:
These three significant changes drastically alter the threat model for OSS communities. In the beginning, someone had to have both knowledge and resources to harm or otherwise compromise a community of developers. Now, anyone with a grudge can build a bot army using seamless integrations and generous freemium tiers for AI/LLMs. Likewise, when open source was small, the set of people who would be motivated to harm or otherwise disrupt those communities was limited. Now there is massive social and economic benefit in harming and disrupting. This means that risks and threats still come from the motivated and well-resourced, but now also from those with little of either.
We need to come together to build new organizational threat models that account for these new risks to our communities. With care and attention to detail, we can introduce responsible friction that will protect our communication infrastructure, the lifeblood of what allows open source to grow.
There will also be a workshop accompanying this presentation, with the goal of creating an ongoing working group dedicated to helping OSS foundations of all sizes protect their communities.
In this talk, we'll explore the hotly debated terminology and meaning around "sovereign AI". We'll look at what the major AI vendors say, what open source communities are producing, and how EU stakeholders, politicians and activists are navigating the debate. At the end, we'll address significant open questions and calls to action on how to better create and support open-source, private and secure AI systems.
The regular FOSDEM lightning talk track isn't chaotic enough, so this year we're doubling down on Lightning Lightning Talks (now with added lightning!).
Thought of a last minute topic you want to share? Got your interesting talk rejected? Has something exciting happened in the last few weeks you want to talk about?
This is the first of two sessions for participants to speak about subjects which are interesting, amusing, or just something the FOSDEM audience would appreciate.
Selected speakers line up and present in one continuous automated stream, with an SLO of 99% talk uptime.
Presenters who attempt to speak for longer than 256 seconds risk being swept off the stage by our Lightning Lightning Talk Cleanup Crew.
Featuring:
I stumbled upon a forum post by Jean, who was tasked in 2010 with recovering text documents from a friend's broken Windows 95 computer. The friend had used Microsoft Bob, and the Microsoft Bob letter writer, exclusively.
I was fascinated by this story, and then spent months reverse-engineering the Microsoft Bob APIs, ultimately producing the first third-party Microsoft Bob application. Along the way I learned that Microsoft Bob isn't only a laughable flop (the word "only" is doing a lot of work here): it also had strong ideas about what computing should be.
We will take a tour of Bob and its history, and of what I discovered about its technical underpinnings while reverse engineering it.
Hopefully I can convince you that these stories are still relevant today, and can inform how we think about the software we use. But most importantly: the software we recommend to our (non-techy) friends and family.
Introducing Hard-working, Easy going Software Everyone Will Use
raylib began as a simple and easy-to-use graphics library to teach graphics programming. Over 12 years it has become one of the most popular open-source C graphics libraries in the software world, supporting more than 20 platforms and operating systems, with bindings for over 60 programming languages. Remarkably, raylib is still actively developed and maintained by a single person: its original author.
In this talk, we will explore this 12-year adventure directly with its creator. We will look at how the library has evolved, how a tools ecosystem and community have formed around it, what challenges and decisions shaped raylib's direction, and how the project has influenced its creator's life. This is the story of a passionate open-source adventure.
How much do you know about Free Software, Open Source, Developers, and European meetings?
We interrupt your regularly scheduled programming to bring you a lively TV-style quiz show featuring several rounds of questions deemed too geeky for TV - about kernels, software projects, and digital culture. (Some will offer mass audience participation!)
As a fun game it will separate the Red Hats from the Red Stars, the Linus Torvaldses from "some random person in Nebraska", and is a fantastic way to enjoy our community even more.
Open Source powers nearly everything in our digital lives, from web servers to smartphones. In many ways, we could say Open Source has "won". But can we really celebrate that victory when so many maintainers are burning out, while users and companies continue to depend on their unpaid labor? The current sustainability models, from corporate sponsorships to paid support, often fall short, leaving creators overwhelmed and users with unrealistic expectations. In this talk, we’ll take a critical look at how Open Source has been funded (or not funded), why many existing models are failing, and what new paths we might explore to ensure the long-term health of the ecosystem. We’ll dissect funding approaches like donations, sponsorships, and open core, and ask some uncomfortable questions: Why are we still relying on volunteers to power global infrastructure? Is it time for an Open Source tax? Would paying volunteers actually motivate or demotivate them? This is not just a talk about money. It’s a call to radically rethink what sustainability really means for Open Source, and how we can build a future that doesn’t run on burnout.
In today's world everyone just gets open source ... at least you don't have to explain what it is any more. However, the way a corporation runs is based on transactional needs rather than deep philosophical beliefs, so open source and your place within a corporation (and often your value to it) depend on your ability to translate between these two positions. This talk aims to equip modern open source developers with the ability to navigate this translation effectively. And although the transactional nature means trust is fleeting, constantly adjusting to those transactional needs can build fleeting trust into longer-term reliance.
Although Linux isn't the first open source project, it is the first to successfully co-opt corporations into its development model. In the beginning Linux was a wholly volunteer operation that leaked into corporate data centres via subversive means. When the leak became a flood, those in charge of the data centres reluctantly blessed Linux deployment to avoid being swept away. This meant that all the ancillary producers (drivers for hardware, databases, industrial applications, etc.) had to come to Linux on its own terms (which we, the Linux community, hadn't actually thought about at the time). It also meant that relationships that began completely antagonistically usually ended up harmonious (yesterday's enemy almost always became today's friend). The result was a years-long, somewhat collaborative project to develop rules of engagement between open source projects and corporations.
This talk will cover three things that came out of these rules of engagement:
Agency: a corporation deals with open source through its developers at the code face. They are empowered to make decisions on its behalf far beyond anything a proprietary developer ever was, and this empowerment changes the internal as well as external dynamics of employer-employee interaction.
Mutual Development: As an open source contributor you become responsible for deciding what's best for the project (and persuading your employer to agree).
Strategic misalignment: although corporations understand that they have to do open source, internally there's often uneven penetration of how to do it. Thus a significant part of a good open source employee's time is spent on internal alignment, making sure that internal lack of comprehension doesn't get in the way of sound execution.
We'll give examples of how to leverage these rules, an understanding of which will allow you to build a shifting transactional trust between you and your employer.
“Who pays your bills?” was the first question my now-CTO asked me, when we didn’t even know each other. Years later, we co-founded OPENGIS.ch, a 40-person company that thrives on geospatial open-source software and gives back by contributing heavily to the projects we build upon.
In this talk, I’ll share our journey building QField, an open-source mobile app with more than 2 million downloads, and creating a sustainable business model around it and QGIS. I’ll explain how the QGIS.org community works, and show why open source is not just a philosophy, but a real business opportunity.
We’ll explore sustainability, community, and business, the three pillars that allow open-source software to flourish and its contributors to make a living from it.
Every successful open source project starts small, but not every small project gets the chance to succeed. Here’s the paradox: funders prefer to back proven impact, yet impact requires early support. If new initiatives can’t access resources until they already “look successful,” we risk starving the very ecosystem that keeps open source innovative and diverse.
This talk explores this chicken-and-egg dilemma and proposes ways to flip the script. What would it look like if we invested not only in impact already proven, but also in potential?
Drawing from my experience building Pre-Seeds: Research 101, I’ll share insights into the struggles early-stage projects face (e.g., limited visibility, lack of credibility, and difficulty accessing networks). I'll also highlight the kinds of support that make a difference, sharing lessons on how these projects can be supported beyond grants. From spotlighting them on community stages and amplifying their voice online, to connecting them with mentors and fellowships — these “non-monetary investments” can bridge the gap until traditional funding becomes viable.
Attendees will leave with concrete ideas for how they — as individuals, organisations, or communities — can sustain the pipeline of emerging projects, ensuring the long-term resilience of open source.
If we want an open, thriving, and continuously renewing FOSS ecosystem, we need to get better at nurturing the eggs, not just celebrating the chickens.
How do we find and nurture the next generation of open source contributors? Unlike commercial companies, open source projects don’t have legions of recruiters to bring people into the fold—and yet our projects need a steady stream of new contributors. Or should open source projects assume that new contributors (and future committers) will continue to “self-select” onto the project?
The PostgreSQL open source project turns 30 in 2026 (Happy Birthday!). It has evolved from a small project that some referred to as “just a toy”—to today, where Postgres is thriving with an active community and a vast ecosystem of extensions and tooling. The project has clearly done some things right. Postgres is hugely popular, with a healthy upstream open source community plus a host of companies and products built around Postgres itself.
And Postgres is owned by no one company; instead, a multitude of competing interests align as people from different countries and continents roll up their sleeves to get the work done. But what happens when the current generation of Postgres committers step back or retire—where will the next generation of Postgres contributors come from?
Postgres isn’t special in needing new contributors. It just happens to be a 30-year-old project whose successes, experiments, and failures might apply to other communities too. In this talk, we’ll look at how contributors find their way into Postgres: what worked, what didn’t, and where we’re still struggling. And having the conversation at FOSDEM will help us think together about a challenge common to all of us—how successful open source projects need to evolve as they get older.
Introduction – Why FOSS compliance matters today: legal exposure, rising regulatory demands under the Cyber Resilience Act (CRA), and growing supply chain accountability.
Legal Framework – Overview of license obligations, liability risks, and the intersection of open source compliance with regulatory requirements (CRA, AI Act, product safety law).
Risk-Based Approach – How organizations can tailor the depth and scope of compliance to project risk, software use, and supply chain complexity.
Practice and Tools – SBOMs, scanning tools, policy frameworks, and OpenChain implementation: what actually works to make compliance efficient and auditable.
CRA Integration – How FOSS compliance measures support CRA obligations, especially regarding documentation, security updates, and traceability.
Conclusion and Outlook – From obligation to opportunity: compliance as a mark of quality and a driver of market trust.
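Since SBOMs are central to the "Practice and Tools" section above, a minimal component entry in the CycloneDX format (one of the widely used SBOM formats) might look roughly like this; the component name, version and license are invented for illustration:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "libexample",
      "version": "1.2.3",
      "licenses": [ { "license": { "id": "MIT" } } ],
      "purl": "pkg:generic/libexample@1.2.3"
    }
  ]
}
```

Machine-readable entries like this are what make license obligations and CRA-style traceability auditable at scale, since scanners and policy tools can generate and consume them automatically.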
In 2024 we gave a talk called "The Regulators are Coming". This year, the regulators are here!
This will be a talk to update the broader community on how the implementation of the CRA is advancing, and how we have made efforts to include the open source community so far. There will be a short update from the European Commission, the European Standardisation Organisations (CEN/CENELEC and ETSI) and a representative from a Market Surveillance Authority (BSI Germany).
Questions? https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act
Want to participate? https://www.stan4cra.eu/
Open source represents 70% to 90% of modern software codebases and is today seen by many players as crucial global public infrastructure. Given its ubiquity, it is increasingly part of geopolitical discussions and national-security agendas. This presentation will analyze the risks and governance challenges at the intersection of open source and global politics, with a focus on the recent European discourse on digital sovereignty and supply-chain security.
The core dilemma is that open source's power lies in the mutualization of risk (collective maintenance and faster vulnerability detection), but this is being undermined by fragmentation along national and corporate lines. We will explore:
The Weaponization of Open Source: How jurisdictional control over key platforms (like GitHub and PyPI, largely hosted by US entities) translates into geopolitical tools (the "Panopticon" and "Chokepoint" effects), as seen in the 2019 GitHub sanctions.
Lack of Investment: The crisis of critical components being maintained by small, under-resourced teams, creating an ecosystem that powers the global economy but lacks the resources to secure itself (e.g., the Log4j incident, XZ, and others).
The Fragmentation Trend: The response from nations like China, which are building domestic repositories (Gitee, OpenAtom Foundation) as part of a plan for technological self-sufficiency. This fragmentation reduces interoperability and shared visibility, making open source weaker and less resilient.
The presentation will conclude by openly discussing a shared call to action for the FOSS community: how can we forge a stronger shared responsibility between developers, policymakers, and industry to mitigate these losses and keep open source secure, interoperable, and globally accessible?
Over the past twenty years, I've written about esolangs as a hacker folk art for the blog esoteric.codes, bringing the voice of many esolangers together, to find crossover in their approach to computation as a medium. Meanwhile, I've produced esolangs of my own, recently brought together by MIT Press in Forty-Four Esolangs.
This talk brings together both projects, to show the potential of this hacker folk art to go far beyond the listicles of puzzle languages and joke languages with which it is often associated. These languages ask programmers to write code as a series of photographs, or by two programmers typing in tandem, or using global variables that are global across the world. It presents esolangs as challenges to conventional ideas about code: everything from "the cognitive gap between the text and performance of code should be as small as possible" to "languages should lead to runnable programs" or even "code should be written with intent."
This talk emphasizes esolangs as a community form built on dialogue between esolangers and the esoprogrammers who explore their ideas and find the limits of their languages. I hope to inspire more programmers to recognize esolangs as our own space of play and embrace it as an experimental medium. In this moment, when AI tools help reinforce a particular, corporatized vision of how code should look, esolangs offer resistance to this monostyle and against the de-skilling of programming as art.
The regular FOSDEM lightning talk track isn't chaotic enough, so this year we're doubling down on Lightning Lightning Talks (now with added lightning!).
Thought of a last minute topic you want to share? Got your interesting talk rejected? Has something exciting happened in the last few weeks you want to talk about?
Submissions are open, see the news post for details: fosdem.org/2026/news/2026-01-26-lightning-talks
This is the second of two sessions for participants to speak about subjects which are interesting, amusing, or just something the FOSDEM audience would appreciate.
Selected speakers line up and present in one continuous automated stream, with an SLO of 99% talk uptime.
Presenters who attempt to speak for longer than 256 seconds risk being swept off the stage by our Lightning Lightning Talk Cleanup Crew.
The curl project has been bombarded by large volumes of low-quality AI-slop security reports, and Daniel shows examples. Sloppy humans cause denial-of-service attacks by overloading maintainers with quickly produced, almost-real-looking rubbish.
At the same time, upcoming new AI-powered tools find flaws and mistakes in existing code in ways no previous code analyzers have been able to. Daniel names names and shows examples of findings, some of which even feel almost human. Next-level bug-hunting for sure.
AI now simultaneously brings us the worst and the best.