Virtually Attend FOSDEM 2026

AI Security Monitoring: Detecting Threats Against Production ML Systems

2026-01-31T18:00:00+01:00 for 00:25

Your AI model is a new attack surface! Unlike traditional applications where threats are well-documented, ML systems face unique vulnerabilities: adversarial inputs crafted to fool classifiers, data poisoning during training, prompt injection in LLM applications, model extraction through API probing, and membership inference attacks that leak training data. Most security teams monitor network traffic and system logs. Few monitor the AI layer itself. This talk shows how to build security-focused observability for production ML systems using open source tools.

I'll demonstrate during the track 3 Threat detection patterns: 1. Adversarial input detection 2. Model behavior monitoring 3. LLM-specific security monitoring

everything.... with a fully open source. stack Prometheus for metrics (custom security-focused exporters) Loki for structured logging with retention policies Grafana for security dashboards and alerting OpenTelemetry for distributed tracing

Attendees will leave with the following materials: Threat model framework for production ML systems Prometheus alerting rules for common AI attack patterns Log analysis queries for security investigation Architecture for integrating AI monitoring with existing SOC workflows

View on FOSDEM site