The world of SBOMs and software transparency artefacts - In-Toto attestations, VEX updates and much more - all mention digital signatures. But not with what and how we should validate these. One thing is for sure - we don't want to use the existing WebPKI. There are some interesting initiatives, like SigStore, but they do not solve all issues. It's time that we work on solving this problem and define a solution for digital signatures that is distributed, secure and trustworthy. This is a call for help!
harvest-now-decrypt-later becoming more relevant. The widely deployed classical cryptographic algorithms such as RSA and ECC face a real risk of being broken by quantum attacks, most notably through Shor’s algorithm. This looming threat makes the transition to Post-Quantum Cryptography (PQC) urgent, not as a future project, but as a present-day migration challenge. ML-KEM (key-exchange), ML-DSA and SLH-DSA (digital signatures) in modern cryptographic infrastructures.To make this transition concrete, we will demonstrate a TLS connection with hybrid key-exchange and post-quantum signature, showing how post-quantum and classical algorithms can operate together.
https://openssl-foundation.org/post/2025-04-29-ml-kem/index.html
Most container images in production are still unsigned, and even when signatures exist, they often provide no clear guarantee about where the artifact came from or what threat the signature is supposed to protect against. Supply-chain attacks exploit this gap and become an increasingly important issue when publishing or importing open-source software.
This talk presents security capabilities in Docker and Moby BuildKit that address these issues. BuildKit executes all build steps in isolated, immutable sandboxes strictly defined by the build definition, and produces SLSA attestations with complete snapshots of the build’s source material.
Additionally, Docker will provide a trusted BuildKit instance running inside GitHub Actions infrastructure. Artifacts produced there include signed attestations tied to a well-defined security boundary. The talk explains what guarantees this environment provides and how this differs from traditional approaches.
The session also covers how to update container-based pipelines to always validate all BuildKit inputs (images, Git sources, HTTP sources) using Rego policies and BuildKit attestations. These checks apply both to artifacts coming from the new trusted builder instance and to any other verifiable artifacts.
These improvements are designed to strengthen container security and raise the baseline for how open-source projects should sign, attest, and verify artifacts.
It is widely considered good practice to sign commits. But leveraging those signatures is hard. Sequoia git is a system to authenticate changes to a VCS repository. A project embeds a signing policy in their git repository, which says who is allowed to add commits, make releases, and modify the policy. sq-git log can then authenticate a range of commits using the embedded policy. Sequoia git distinguishes itself from projects like sigstore in that all of the information required to authenticate commits is available locally, and no third-party authorities are required. In this talk, I'll present sequoia git's design, explain how it enforces a policy, and how to use it in your project.
Endpoints are where most security incidents begin. Compromises often start with phishing, software vulnerabilities, or misconfigurations on individual laptops and servers. Modern security teams need rich endpoint telemetry for detection, investigation, and response. Commercial products often act as black boxes that limit flexibility, collect data in proprietary ways, and create vendor lock-in.
This talk presents a practical blueprint for building a scalable endpoint telemetry and security pipeline using open technologies. At the foundation is osquery, a Linux Foundation project that turns every endpoint into a high-fidelity sensor. On top of this, we build four layers: a control layer for managing endpoints, an ingestion, streaming, and storage layer for moving and retaining data, a detection and intelligence layer for applying rules and enrichment, and a correlation, visualization, and hunting layer for analysis and response.
We will walk through architectural patterns, real-world lessons, and tradeoffs. Attendees will learn how to assemble their own endpoint telemetry stack from collection to correlation without relying on closed products.
HyperDbg is a modern, open-source hypervisor-based debugger supporting both user- and kernel-mode debugging. Operating at the hypervisor level, it bypasses OS debugging APIs and offers stealthy hooks, unlimited simulated debug registers, fine-grained memory monitoring, I/O debugging, and full execution control, enabling analysts to observe malware with far greater reliability than traditional debuggers.
When it comes to debugger stealthiness and sandboxing, environment artifacts can reveal the presence of analysis tools - particularly under nested virtualization. To address this issue, we present HyperEvade, a transparency layer for HyperDbg. HyperEvade intercepts hypervisor-revealing instructions, normalizes timing sources, conceals virtualization-specific identifiers, and emulates native hardware behavior, reducing the observable footprint of the hypervisor.
While perfect transparency remains a future endeavour, HyperEvade significantly raises the bar for stealthy malware analysis. By suppressing common detection vectors, it enables more realistic malware execution and reduces evasion, making HyperDbg a more dependable tool for observing evasive or self-protective malware. This talk covers HyperDbg’s architecture and features, HyperEvade’s design, and practical evaluation results.
Resources: - HyperDbg repository: https://github.com/HyperDbg/HyperDbg/ - Documentation: https://docs.hyperdbg.org/ - Kernel-mode debugger design: https://research.hyperdbg.org/debugger/kernel-debugger-design/ - Research paper: https://dl.acm.org/doi/abs/10.1145/3548606.3560649
This is a live tutorial of hacking against keyboards of all forms. Attacking the keyboard is the ultimate strategy to hijack a session before it is encrypted, capturing plaintext at the source and (often) in much simpler ways than those required to attack network protocols.
In this session we explore available attack vectors against traditional keyboards, starting with plain old keyloggers. We then advance to “Van Eck Phreaking” style attacks against individual keystroke emanations as well as RF wireless connections, and we finally graduate to the new hotness: acoustic attacks by eavesdropping on the sound of you typing!
Use your newfound knowledge for good, with great power comes great responsibility!
A subset of signal leak attacks focusing on keyboards. This talk is compiled with open sources, no classified material will be discussed.
OAuth tokens are the new crown jewels. Once issued, they bypass MFA and give API-level access that is hard to monitor. The opaque nature of their use and the difficulty in monitoring their activity create a dangerous blind spot for security teams, making them a primary target for attackers. This presentation will delve into the lifecycle of OAuth tokens, explore real-world attack vectors, and provide actionable strategies for protecting these high-value assets. We will also review the tactics, techniques, and procedures (TTPs) of notorious gangs like ShinyHunters and Scattered Spider, as demonstrated in the 2025 Salesforce attacks.
Bots generate roughly half of all Internet traffic. Some are clearly malicious (password crackers, vulnerability scanners, application-level/L7 DDoS), and others are merely unwanted (web scrappers, carting, appointment etc) bots. Traditional challenges (CAPTCHAs, JavaScript checks) degrade user experience, and some vendors are deprecating them. An alternative is traffic and behavior analytics, which is much more sophisticated, but can be far more effective.
Complicating matters, there are cloud services not only helping to bypass challenges, but also mimic browsers and human behavior. It's tough to build a solid protection system withstand such proxy services.
In this talk, we present WebShield, a small open-source Python daemon that analyzes Tempesta FW, an open-source web accelerator, access logs and dynamically classifies and blocks bad bots.
You'll learn: * Which bots are easy to detect (e.g., L7 DDoS, password crackers) and which are harder (e.g., scrapers, carting/checkout abuse). * Why your secret weapon is your users’ access patterns and traffic statistics—and how to use them. * How to efficiently deliver web-server access logs to an analytics database (e.g., ClickHouse). * Traffic fingerprints (JA3, JA4, p0f): how they’re computed and their applicability for machine learning * Tempesta Fingerprints: lightweight fingerprints designed for automatic web clients clustering. * How to correlate multiple traffic characteristics and catch lazy bot developers. * Baseline models for access-log analytics and how to validate them. * How to block large botnets without blocking half the Internet. * Scoring, behavioral analysis, and other advanced techniques are not yet implemented
Backdoors in software are real. We’ve seen injections creep into open-source projects more than once. Remember the infamous xz backdoor? That was just the headline act. Before that, we have seen the PHP backdoor (2021), vsFTPd (CVE-2011-2523), and ProFTPD (CVE-2010-20103). And it doesn’t stop at open-source projects: network daemons baked into router firmware have been caught red-handed too—think Belkin F9K1102, D-Link DIR-100, and Tenda W302R. Spoiler alert: this is likely just the tip of the iceberg. Why is this so scary? Because a single backdoor in a popular open-source project or router model is basically an all-you-can-eat buffet for attackers—millions of systems served on a silver platter.
Finding and neutralizing backdoors means digging deep into large codebases and binary firmware. Sounds heroic, right? In practice, even for a seasoned analyst armed with reverse-engineering tools (and maybe a good Belgian beer), it’s a royal pain. So painful that, honestly, almost nobody does it. Some brave souls tried building specialized reverse tools—Firmalice, HumIDIFy, Stringer, Weasel—but those projects have been gathering dust for years. And when we tested Stringer (which hunts for hard-coded strings that might trigger backdoors), the results were… let’s say “meh”: tons of noise, so many missed hits.
This is where ROSA (https://github.com/binsec/rosa) comes in. Our mission? Make backdoor detection practical enough that people actually want to do it—no Belgian beer required (but appreciated!). Our secret weapon: fuzzing. Standard fuzzers like AFL++ (https://github.com/AFLplusplus/AFLplusplus) bombard programs with massive input sets to make them crash. It’s brute force, but it works wonders for memory-safety bugs. Backdoors, though, play a different game: they don’t crash—they hide behind secret triggers and valid behaviors. So we built a mechanism that teaches fuzzers to spot the difference between “normal” and “backdoored” behavior. We integrated it into AFL++, and guess what? It nailed 7 real-world backdoors and 10 synthetic ones in our tests.
In this talk, we’d like to show you how ROSA works, demo it live, and share ideas for making it even better. If you’re into fuzzing, reverse engineering, or just love geeking out over security, you’re in for a treat.
Landlock is a Linux Security Module that empowers unprivileged processes to securely restrict their own access rights (e.g., filesystem, network). While Landlock provides powerful kernel primitives, using it typically requires modifying application code.
Island makes Landlock practical for everyday workflows by acting as a high-level wrapper and policy manager. Developed alongside the kernel feature and its Rust libraries, it bridges the gap between raw security mechanisms and user activity through: - Zero-code integration: Runs existing binaries without modification. - Declarative policies: Uses TOML profiles instead of code-based rules. - Context-aware activation: Automatically applies security profiles based on your current working directory. - Full environment isolation: Manages isolated workspaces (XDG directories, TMPDIR) in addition to access control.
In this talk, we will provide a brief overview of the related kernel mechanisms before diving into Island. We'll explain the main differences with other mechanisms and tools, and we'll explain Island's design and how it works, with a demo.
The Capslock project was started within Google to provide a capability analysis toolkit for Go packages, and has since been open sourced and is being extended to support other languages.
In this talk, we'll walk through using the experimental cargo-capslock tool developed through a grant from Alpha-Omega to analyse the capabilities of Rust services. We'll then use the result of that analysis to create seccomp profiles that can be applied using container orchestration systems (such as Kubernetes) to restrict services and ensure that updates are unable to silently open new attack vectors, and discuss how this technique can be applied to services written in other languages as well.
Open-weight LLMs (like LLaMA, Mistral, and DeepSeek-R1) have triggered a "Cambrian explosion" of innovation, but they have also democratized offensive cyber capabilities. Recent evaluations, such as MITRE’s OCCULT framework, show that publicly available models can now achieve >90% success rates on offensive cyber knowledge tests, enabling targeted phishing, malware polymorphism, and vulnerability discovery at scale.
For the Open Source community, this presents an existential crisis. Traditional security models (API gating, monitoring, rate limiting) rely on centralized control, which vanishes the moment weights are published. Furthermore, emerging regulations like the EU AI Act risk imposing impossible compliance burdens on open model developers for downstream misuse they cannot control, such as post-market monitoring.
In this talk, Alfonso De Gregorio (Pwnshow) will deconstruct the "Mitigation Gap"—the technical reality that once a model is downloaded, safety filters can be trivially fine-tuned away. Drawing on his direct consultation work with the European Commission, he will explain how we can navigate this minefield. We will discuss:
1/ The Threat Reality: A look at tools like Xanthorox AI and DeepSeek-R1 to understand the actual offensive capabilities of current open weights, and the state of the art in offensive AI.
2/ The Policy Trap: Why "strict" interpretations of the EU AI Act could stifle open innovation, and the fight to shift liability to the modifier and deployer rather than the open-source developer.
3/ The Way Forward: Technical solutions for "Responsible Release" (Model Cards, capability evaluations) and the necessity of AI-enabled defenses to counterbalance the offensive drop in barrier-to-entry.
This session is for security practitioners and open-source advocates who want to ensure the future of AI remains open, while pragmatically addressing the security chaos it unleashes.
Achieving improved security in the open source ecosystem is more than a theoretical goal but a plausible reality as shown by the track record of nonprofit Open Source Technology Improvement Fund, Inc. Following a best practice of independent code review with a process specifically tailored to open source projects and communities, OSTIF has worked on over 100 security audits of projects ranging from git, cURL, kubernetes, php, sigstore, and has audit reports and numerous vulnerability fixings to demonstrate effectiveness.
Everyone's excited (sarcasm) that AI coding tools make developers more productive. Security teams are excited too - they've never had this much job security.
LLMs and AI-assisted coding tools are writing billions of lines of code, so teams can ship 10x faster. They're also inheriting vulnerabilities 10x faster.
We need to detect AI-generated code and trace it back to its FOSS origins. The challenge: exact matching doesn't work for AI-generated code since each generation may have small variations given the same input prompt.
AI-Generated Code Search (https://github.com/aboutcode-org/ai-gen-code-search) introduces a new approach using locality-sensitive hashing and content-defined chunking for approximate matching that actually works with AI output variations. This FOSS project delivers reusable open source libraries, public APIs, and open datasets that make AI code detection accessible to everyone, not just enterprises with massive budgets.
In this talk, we'll explain how we fingerprint code fragments for fuzzy matching, build efficient indexes that don't balloon to terabytes, and trace AI-generated snippets back to their training data sources. We'll demo real examples of inherited vulnerabilities, show how it integrates with existing FOSS tools for SBOM and supply chain analysis, and explain how this directly supports CRA compliance for tracking code origin.
Bottom line: if AI-generated code is in your dependencies (and it probably is), you need visibility into what it's derived from and what risks it carries. This project gives you the FOSS tools and data to find out.
Your AI model is a new attack surface! Unlike traditional applications where threats are well-documented, ML systems face unique vulnerabilities: adversarial inputs crafted to fool classifiers, data poisoning during training, prompt injection in LLM applications, model extraction through API probing, and membership inference attacks that leak training data. Most security teams monitor network traffic and system logs. Few monitor the AI layer itself. This talk shows how to build security-focused observability for production ML systems using open source tools.
I'll demonstrate during the track 3 Threat detection patterns: 1. Adversarial input detection 2. Model behavior monitoring 3. LLM-specific security monitoring
everything.... with a fully open source. stack Prometheus for metrics (custom security-focused exporters) Loki for structured logging with retention policies Grafana for security dashboards and alerting OpenTelemetry for distributed tracing
Attendees will leave with the following materials: Threat model framework for production ML systems Prometheus alerting rules for common AI attack patterns Log analysis queries for security investigation Architecture for integrating AI monitoring with existing SOC workflows
As cyber threats grow in sophistication, the “trust but verify” model is no longer enough. Organizations are rapidly shifting toward Zero Trust Architecture (ZTA) — a security paradigm where no user or device is inherently trusted, inside or outside the network.
Zero Trust Architecture (ZTA) is no longer a buzzword—it’s a necessity. With traditional perimeter-based security models failing to address modern threats like lateral movement and insider attacks, organizations are increasingly adopting ZTA’s "never trust, always verify" philosophy.
This architecture is built on several pillars: - Identity-centric protection defining identity as the new perimeter. - Dynamic micro segmentation and contextual access controls to isolate resources. - Continuous monitoring and behavioural analytics to detect sophisticated lateral movements and insider threats. Modern ZTA implementations employ AI and automation for adaptive threat detection and response, dramatically reducing breach costs and attack surfaces for distributed enterprises. Adoption of Zero Trust is rapidly increasing, with industry research indicating that over 70% of organizations are integrating ZTA in their cybersecurity frameworks and at least 70% of new remote access deployments will rely on these principles by the end of 2025. Despite its robust security benefits, ZTA demands substantial investment in identity management, policy enforcement, and ongoing operational monitoring.
But how do we move from theoretical principles to practical implementation?
This talk explores the why and how of ZTA adoption for mid-level engineers and security practitioners. We’ll break down core ZTA components—identity-centric access, micro segmentation, and continuous monitoring—using real-world examples .
Attendees will leave with: • A clear roadmap for phased ZTA adoption, starting with high-value assets. • Strategies to balance security and user experience (e.g., just-in-time access). • Lessons from industry leaders like IBM on overcoming common pitfalls.
Whether you’re in DevOps, cloud security, or IT governance, this session will equip you to champion ZTA in your organization