Events in room UB5.132

Sat

 "Please sign your artefacts. WITH WHAT?" ( 2026 )

Saturday at 10:30, 25 minutes, UB5.132, Security, Olle E. Johansson, slides, video

The world of SBOMs and software transparency artefacts - In-Toto attestations, VEX updates and much more - all mention digital signatures, but rarely say what we should sign with or how we should validate these signatures. One thing is for sure - we don't want to use the existing WebPKI. There are some interesting initiatives, like SigStore, but they do not solve all issues. It's time that we work on this problem and define a solution for digital signatures that is distributed, secure and trustworthy. This is a call for help!

 "Demystifying Post-Quantum Cryptography: The Hybrid Approach" ( 2026 )

Saturday at 11:00, 25 minutes, UB5.132, Security, Rutvik Kshirsagar, Shreyas Mahangade, Clemens Lang, slides, video

  • Quantum computing is evolving rapidly, and harvest-now-decrypt-later threats are becoming more relevant. Widely deployed classical cryptographic algorithms such as RSA and ECC face a real risk of being broken by quantum attacks, most notably through Shor’s algorithm. This looming threat makes the transition to Post-Quantum Cryptography (PQC) urgent, not as a future project, but as a present-day migration challenge.
  • You may question whether the transition to PQC is even necessary at the moment. It is true that large-scale quantum computers are years away, but that hardly matters, because many governments, telecom operators and defense entities worldwide are now requiring a transition.
  • In this talk, we will focus on the practical hybrid transition from classical to quantum-resistant cryptography. We will explore NIST’s PQC standardization efforts through the newly selected algorithms, particularly ML-KEM (key exchange), ML-DSA and SLH-DSA (digital signatures), in modern cryptographic infrastructures.
  • The transition from classical crypto to a hybrid model enables organizations to begin adopting PQC today without breaking interoperability or relying on fully quantum-resistant stacks before they’re ready.
  • To make this transition concrete, we will demonstrate a TLS connection with a hybrid key exchange and a post-quantum signature, showing how post-quantum and classical algorithms can operate together.

 "Streamlining Signed Artifacts in Container Ecosystems" ( 2026 )

Saturday at 11:30, 25 minutes, UB5.132, Security, Tonis Tiigi, slides, video

Most container images in production are still unsigned, and even when signatures exist, they often provide no clear guarantee about where the artifact came from or what threat the signature is supposed to protect against. Supply-chain attacks exploit this gap, which makes it an increasingly important issue when publishing or importing open-source software.

This talk presents security capabilities in Docker and Moby BuildKit that address these issues. BuildKit executes all build steps in isolated, immutable sandboxes strictly defined by the build definition, and produces SLSA attestations with complete snapshots of the build’s source material.

Additionally, Docker will provide a trusted BuildKit instance running inside GitHub Actions infrastructure. Artifacts produced there include signed attestations tied to a well-defined security boundary. The talk explains what guarantees this environment provides and how this differs from traditional approaches.

The session also covers how to update container-based pipelines to always validate all BuildKit inputs (images, Git sources, HTTP sources) using Rego policies and BuildKit attestations. These checks apply both to artifacts coming from the new trusted builder instance and to any other verifiable artifacts.

These improvements are designed to strengthen container security and raise the baseline for how open-source projects should sign, attest, and verify artifacts.

 "Sequoia git: Making Signed Commits Matter" ( 2026 )

Saturday at 12:00, 25 minutes, UB5.132, Security, Neal H. Walfield, slides, video

It is widely considered good practice to sign commits. But leveraging those signatures is hard. Sequoia git is a system to authenticate changes to a VCS repository. A project embeds a signing policy in their git repository, which says who is allowed to add commits, make releases, and modify the policy. sq-git log can then authenticate a range of commits using the embedded policy. Sequoia git distinguishes itself from projects like sigstore in that all of the information required to authenticate commits is available locally, and no third-party authorities are required. In this talk, I'll present sequoia git's design, explain how it enforces a policy, and how to use it in your project.

 "An Endpoint Telemetry Blueprint for Security Teams" ( 2026 )

Saturday at 12:30, 25 minutes, UB5.132, Security, Victor Lyuboslavsky, video

Endpoints are where most security incidents begin. Compromises often start with phishing, software vulnerabilities, or simple misconfigurations on individual laptops and servers. Modern security teams rely on endpoint telemetry for detection, investigation, and response. But for many engineers, this part of the stack remains opaque and difficult to reason about.

This talk presents a practical, open-source blueprint for building an endpoint telemetry pipeline that engineers can actually understand and evolve. We start with osquery, a Linux Foundation project that exposes endpoint state as structured, queryable data. On top of that, we build a layered system with clear responsibilities. This includes a control layer for intent and coordination, a data layer responsible for ingestion, buffering, streaming, and storage, a detection and intelligence layer with inspectable logic, and a correlation and response layer designed for humans in the loop.

Rather than pitching a product, this talk focuses on boundaries, contracts, and tradeoffs. We walk through real-world design decisions and common failure modes. We also explore why ownership of telemetry matters more than any single tool. Attendees will leave with a mental model they can adapt, a stack they can run locally, and the confidence to build endpoint security systems that are transparent, flexible, and defensible without relying on closed platforms.

 "Invisible Hypervisors: Stealthy Malware Analysis with HyperDbg" ( 2026 )

Saturday at 13:00, 25 minutes, UB5.132, Security, Björn Ruytenberg, Sina Karvandi, slides, video

HyperDbg is a modern, open-source hypervisor-based debugger supporting both user- and kernel-mode debugging. Operating at the hypervisor level, it bypasses OS debugging APIs and offers stealthy hooks, unlimited simulated debug registers, fine-grained memory monitoring, I/O debugging, and full execution control, enabling analysts to observe malware with far greater reliability than traditional debuggers.

When it comes to debugger stealthiness and sandboxing, environment artifacts can reveal the presence of analysis tools - particularly under nested virtualization. To address this issue, we present HyperEvade, a transparency layer for HyperDbg. HyperEvade intercepts hypervisor-revealing instructions, normalizes timing sources, conceals virtualization-specific identifiers, and emulates native hardware behavior, reducing the observable footprint of the hypervisor.

While perfect transparency remains a future endeavour, HyperEvade significantly raises the bar for stealthy malware analysis. By suppressing common detection vectors, it enables more realistic malware execution and reduces evasion, making HyperDbg a more dependable tool for observing evasive or self-protective malware. This talk covers HyperDbg’s architecture and features, HyperEvade’s design, and practical evaluation results.

Resources:

  • HyperDbg repository: https://github.com/HyperDbg/HyperDbg/

  • Documentation: https://docs.hyperdbg.org/

  • Kernel-mode debugger design: https://research.hyperdbg.org/debugger/kernel-debugger-design/

  • Research paper: https://dl.acm.org/doi/abs/10.1145/3548606.3560649

 "All Your Keyboards Are Belong To Us!" ( 2026 )

Saturday at 13:30, 25 minutes, UB5.132, Security, Federico Lucifredi, video

This is a live tutorial of hacking against keyboards of all forms. Attacking the keyboard is the ultimate strategy to hijack a session before it is encrypted, capturing plaintext at the source and (often) in much simpler ways than those required to attack network protocols.

In this session we explore available attack vectors against traditional keyboards, starting with plain old keyloggers. We then advance to “Van Eck Phreaking” style attacks against individual keystroke emanations as well as RF wireless connections, and we finally graduate to the new hotness: acoustic attacks by eavesdropping on the sound of you typing!

Use your newfound knowledge for good: with great power comes great responsibility!

A subset of signal leak attacks focusing on keyboards. This talk is compiled from open sources; no classified material will be discussed.

 "The invisible key: Securing the new attack vector of OAuth tokens" ( 2026 )

Saturday at 14:00, 25 minutes, UB5.132, Security, Gianluca Varisco, video

OAuth tokens are the new crown jewels. Once issued, they bypass MFA and give API-level access that is hard to monitor. The opaque nature of their use and the difficulty in monitoring their activity create a dangerous blind spot for security teams, making them a primary target for attackers. This presentation will delve into the lifecycle of OAuth tokens, explore real-world attack vectors, and provide actionable strategies for protecting these high-value assets. We will also review the tactics, techniques, and procedures (TTPs) of notorious gangs like ShinyHunters and Scattered Spider, as demonstrated in the 2025 Salesforce attacks.

 "Dynamic Bot Blocking with Web-Server Access-Log Analytics" ( 2026 )

Saturday at 14:30, 25 minutes, UB5.132, Security, Alexander Krizhanovsky, slides, video

Bots generate roughly half of all Internet traffic. Some are clearly malicious (password crackers, vulnerability scanners, application-level/L7 DDoS), and others are merely unwanted bots (web scrapers, carting, appointment booking, etc.). Traditional challenges (CAPTCHAs, JavaScript checks) degrade user experience, and some vendors are deprecating them. An alternative is traffic and behavior analytics, which is much more sophisticated, but can be far more effective.

Complicating matters, there are cloud services that not only help to bypass challenges, but also mimic browsers and human behavior. It's tough to build a solid protection system that withstands such proxy services.

In this talk, we present WebShield, a small open-source Python daemon that analyzes access logs from Tempesta FW, an open-source web accelerator, and dynamically classifies and blocks bad bots.

You'll learn:

  • Which bots are easy to detect (e.g., L7 DDoS, password crackers) and which are harder (e.g., scrapers, carting/checkout abuse).
  • Why your secret weapon is your users’ access patterns and traffic statistics, and how to use them.
  • How to efficiently deliver web-server access logs to an analytics database (e.g., ClickHouse).
  • Traffic fingerprints (JA3, JA4, p0f): how they’re computed and their applicability for machine learning.
  • Tempesta Fingerprints: lightweight fingerprints designed for automatic clustering of web clients.
  • How to correlate multiple traffic characteristics and catch lazy bot developers.
  • Baseline models for access-log analytics and how to validate them.
  • How to block large botnets without blocking half the Internet.
  • Scoring, behavioral analysis, and other advanced techniques that are not yet implemented.

 "Finding backdoors with fuzzing" ( 2026 )

Saturday at 15:00, 25 minutes, UB5.132, Security, Michaël Marcozzi, Dimitri Kokkonis, Stefano Zacchiroli, video

Backdoors in software are real. We’ve seen injections creep into open-source projects more than once. Remember the infamous xz backdoor? That was just the headline act. Before that, we have seen the PHP backdoor (2021), vsFTPd (CVE-2011-2523), and ProFTPD (CVE-2010-20103). And it doesn’t stop at open-source projects: network daemons baked into router firmware have been caught red-handed too—think Belkin F9K1102, D-Link DIR-100, and Tenda W302R. Spoiler alert: this is likely just the tip of the iceberg. Why is this so scary? Because a single backdoor in a popular open-source project or router model is basically an all-you-can-eat buffet for attackers—millions of systems served on a silver platter.

Finding and neutralizing backdoors means digging deep into large codebases and binary firmware. Sounds heroic, right? In practice, even for a seasoned analyst armed with reverse-engineering tools (and maybe a good Belgian beer), it’s a royal pain. So painful that, honestly, almost nobody does it. Some brave souls tried building specialized reverse tools—Firmalice, HumIDIFy, Stringer, Weasel—but those projects have been gathering dust for years. And when we tested Stringer (which hunts for hard-coded strings that might trigger backdoors), the results were… let’s say “meh”: tons of noise, so many missed hits.

This is where ROSA (https://github.com/binsec/rosa) comes in. Our mission? Make backdoor detection practical enough that people actually want to do it—no Belgian beer required (but appreciated!). Our secret weapon: fuzzing. Standard fuzzers like AFL++ (https://github.com/AFLplusplus/AFLplusplus) bombard programs with massive input sets to make them crash. It’s brute force, but it works wonders for memory-safety bugs. Backdoors, though, play a different game: they don’t crash—they hide behind secret triggers and valid behaviors. So we built a mechanism that teaches fuzzers to spot the difference between “normal” and “backdoored” behavior. We integrated it into AFL++, and guess what? It nailed 7 real-world backdoors and 10 synthetic ones in our tests.

In this talk, we’d like to show you how ROSA works, demo it live, and share ideas for making it even better. If you’re into fuzzing, reverse engineering, or just love geeking out over security, you’re in for a treat.

 "Island: Sandboxing tool powered by Landlock" ( 2026 )

Saturday at 15:30, 25 minutes, UB5.132, Security, Mickaël Salaün, slides, video

Landlock is a Linux Security Module that empowers unprivileged processes to securely restrict their own access rights (e.g., filesystem, network). While Landlock provides powerful kernel primitives, using it typically requires modifying application code.

Island makes Landlock practical for everyday workflows by acting as a high-level wrapper and policy manager. Developed alongside the kernel feature and its Rust libraries, it bridges the gap between raw security mechanisms and user activity through:

  • Zero-code integration: Runs existing binaries without modification.
  • Declarative policies: Uses TOML profiles instead of code-based rules.
  • Context-aware activation: Automatically applies security profiles based on your current working directory.
  • Full environment isolation: Manages isolated workspaces (XDG directories, TMPDIR) in addition to access control.

In this talk, we will provide a brief overview of the related kernel mechanisms before diving into Island. We'll explain the main differences with other mechanisms and tools, and we'll explain Island's design and how it works, with a demo.

 "Using Capslock analysis to develop seccomp filters for Rust (and other) services" ( 2026 )

Saturday at 16:00, 25 minutes, UB5.132, Security, Adam Harvey, video

The Capslock project was started within Google to provide a capability analysis toolkit for Go packages, and has since been open sourced and is being extended to support other languages.

In this talk, we'll walk through using the experimental cargo-capslock tool developed through a grant from Alpha-Omega to analyse the capabilities of Rust services. We'll then use the result of that analysis to create seccomp profiles that can be applied using container orchestration systems (such as Kubernetes) to restrict services and ensure that updates are unable to silently open new attack vectors, and discuss how this technique can be applied to services written in other languages as well.
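To make the end product concrete, here is a minimal seccomp profile in the JSON format consumed by container runtimes and Kubernetes (the syscall list is purely illustrative, not the output of any capability analysis):

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "close", "openat", "mmap", "futex", "epoll_pwait", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

In Kubernetes, such a profile can be referenced from a Pod's securityContext as a Localhost seccomp profile, so any syscall outside the allowlist fails with an error rather than executing.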

 "The Open-Weight Dilemma: Mitigating AI Cyber Risks Without Killing Open Source" ( 2026 )

Saturday at 16:30, 25 minutes, UB5.132, Security, Alfonso De Gregorio, video

Open-weight LLMs (like LLaMA, Mistral, and DeepSeek-R1) have triggered a "Cambrian explosion" of innovation, but they have also democratized offensive cyber capabilities. Recent evaluations, such as MITRE’s OCCULT framework, show that publicly available models can now achieve >90% success rates on offensive cyber knowledge tests, enabling targeted phishing, malware polymorphism, and vulnerability discovery at scale.

For the Open Source community, this presents an existential crisis. Traditional security models (API gating, monitoring, rate limiting) rely on centralized control, which vanishes the moment weights are published. Furthermore, emerging regulations like the EU AI Act risk imposing impossible compliance burdens on open model developers for downstream misuse they cannot control, such as post-market monitoring.

In this talk, Alfonso De Gregorio (Pwnshow) will deconstruct the "Mitigation Gap"—the technical reality that once a model is downloaded, safety filters can be trivially fine-tuned away. Drawing on his direct consultation work with the European Commission, he will explain how we can navigate this minefield. We will discuss:

1/ The Threat Reality: A look at tools like Xanthorox AI and DeepSeek-R1 to understand the actual offensive capabilities of current open weights, and the state of the art in offensive AI.

2/ The Policy Trap: Why "strict" interpretations of the EU AI Act could stifle open innovation, and the fight to shift liability to the modifier and deployer rather than the open-source developer.

3/ The Way Forward: Technical solutions for "Responsible Release" (Model Cards, capability evaluations) and the necessity of AI-enabled defenses to counterbalance the offensive drop in barrier-to-entry.

This session is for security practitioners and open-source advocates who want to ensure the future of AI remains open, while pragmatically addressing the security chaos it unleashes.

 "It's Time to Audit Open Source: Success Stories with OSTIF" ( 2026 )

Saturday at 17:00, 25 minutes, UB5.132, Security, Amir Montazery, video

Achieving improved security in the open source ecosystem is not merely a theoretical goal but a plausible reality, as shown by the track record of the nonprofit Open Source Technology Improvement Fund, Inc. Following the best practice of independent code review, with a process specifically tailored to open source projects and communities, OSTIF has worked on over 100 security audits of projects including git, cURL, kubernetes, php and sigstore, and has audit reports and numerous vulnerability fixes to demonstrate its effectiveness.

 "Supply chain security meets AI: Detecting AI-generated code" ( 2026 )

Saturday at 17:30, 25 minutes, UB5.132, Security, Philippe Ombredanne

Everyone's excited (sarcasm) that AI coding tools make developers more productive. Security teams are excited too - they've never had this much job security.

LLMs and AI-assisted coding tools are writing billions of lines of code, so teams can ship 10x faster. They're also inheriting vulnerabilities 10x faster.

We need to detect AI-generated code and trace it back to its FOSS origins. The challenge: exact matching doesn't work for AI-generated code since each generation may have small variations given the same input prompt.

AI-Generated Code Search (https://github.com/aboutcode-org/ai-gen-code-search) introduces a new approach using locality-sensitive hashing and content-defined chunking for approximate matching that actually works with AI output variations. This FOSS project delivers reusable open source libraries, public APIs, and open datasets that make AI code detection accessible to everyone, not just enterprises with massive budgets.

In this talk, we'll explain how we fingerprint code fragments for fuzzy matching, build efficient indexes that don't balloon to terabytes, and trace AI-generated snippets back to their training data sources. We'll demo real examples of inherited vulnerabilities, show how it integrates with existing FOSS tools for SBOM and supply chain analysis, and explain how this directly supports CRA compliance for tracking code origin.

Bottom line: if AI-generated code is in your dependencies (and it probably is), you need visibility into what it's derived from and what risks it carries. This project gives you the FOSS tools and data to find out.

 "AI Security Monitoring: Detecting Threats Against Production ML Systems" ( 2026 )

Saturday at 18:00, 25 minutes, UB5.132, Security, samuel desseaux, video

Your AI model is a new attack surface! Unlike traditional applications where threats are well-documented, ML systems face unique vulnerabilities: adversarial inputs crafted to fool classifiers, data poisoning during training, prompt injection in LLM applications, model extraction through API probing, and membership inference attacks that leak training data. Most security teams monitor network traffic and system logs. Few monitor the AI layer itself. This talk shows how to build security-focused observability for production ML systems using open source tools.

During the talk, I'll demonstrate three threat-detection patterns:

  1. Adversarial input detection
  2. Model behavior monitoring
  3. LLM-specific security monitoring

Everything runs on a fully open-source stack: Prometheus for metrics (with custom security-focused exporters), Loki for structured logging with retention policies, Grafana for security dashboards and alerting, and OpenTelemetry for distributed tracing.

Attendees will leave with the following materials:

  • A threat-model framework for production ML systems
  • Prometheus alerting rules for common AI attack patterns
  • Log-analysis queries for security investigation
  • An architecture for integrating AI monitoring with existing SOC workflows
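As an illustration of the kind of alerting rule involved (the metric name here is hypothetical, standing in for a custom security-focused exporter), a Prometheus rule for adversarial-input spikes might look like:

```yaml
groups:
  - name: ai-security
    rules:
      - alert: AdversarialInputSpike
        # ml_adversarial_inputs_total is a hypothetical counter exposed
        # by a custom exporter that flags suspected adversarial inputs.
        expr: rate(ml_adversarial_inputs_total[5m]) > 0.1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Possible adversarial probing of the model API"
```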

 "Zero Trust in Action: Architecting Secure Systems Beyond Perimeters" ( 2026 )

Saturday at 18:30, 25 minutes, UB5.132, Security, Samvedna Jha Suneetha, slides, video

As cyber threats grow in sophistication, the “trust but verify” model is no longer enough. Organizations are rapidly shifting toward Zero Trust Architecture (ZTA) — a security paradigm where no user or device is inherently trusted, inside or outside the network.

Zero Trust Architecture (ZTA) is no longer a buzzword—it’s a necessity. With traditional perimeter-based security models failing to address modern threats like lateral movement and insider attacks, organizations are increasingly adopting ZTA’s "never trust, always verify" philosophy.

This architecture is built on several pillars:

  • Identity-centric protection, defining identity as the new perimeter.
  • Dynamic micro-segmentation and contextual access controls to isolate resources.
  • Continuous monitoring and behavioural analytics to detect sophisticated lateral movement and insider threats.

Modern ZTA implementations employ AI and automation for adaptive threat detection and response, dramatically reducing breach costs and attack surfaces for distributed enterprises. Adoption of Zero Trust is rapidly increasing, with industry research indicating that over 70% of organizations are integrating ZTA into their cybersecurity frameworks and at least 70% of new remote access deployments will rely on these principles by the end of 2025. Despite its robust security benefits, ZTA demands substantial investment in identity management, policy enforcement, and ongoing operational monitoring.

But how do we move from theoretical principles to practical implementation?

This talk explores the why and how of ZTA adoption for mid-level engineers and security practitioners. We’ll break down core ZTA components (identity-centric access, micro-segmentation, and continuous monitoring) using real-world examples.

Attendees will leave with:

  • A clear roadmap for phased ZTA adoption, starting with high-value assets.
  • Strategies to balance security and user experience (e.g., just-in-time access).
  • Lessons from industry leaders like IBM on overcoming common pitfalls.

Whether you’re in DevOps, cloud security, or IT governance, this session will equip you to champion ZTA in your organization.

Sun

 "The state of Go" ( 2026 )

Sunday at 09:00, 30 minutes, UB5.132, Go, Maartje Eyskens, video

What is new since Go 1.25? In this talk we'll bring you up to date with all upcoming changes to the Go language and the community! Go 1.26 will be released in February 2026; we will take a look at all upcoming features and give an update on important changes in Go 1.26. This includes the traditional updates about the language, tooling, libraries, ports and, most importantly, the Go community.

 "Modularizing a 10-Year Monolith: The Architecture, the People, and the Pain" ( 2026 )

Sunday at 09:30, 30 minutes, UB5.132, Go, Victor Lyuboslavsky, video

Most Go codebases begin as straightforward layered monoliths, but years of growth often turn those layers into a web of hidden coupling, unclear ownership, and hard-to-predict side effects. Small changes start requiring deep context, and compile and test times slowly creep up, turning routine work into risky work. Rewrites promise a clean slate but rarely succeed in practice. What the Go community lacks are real examples of large open source Go projects that have successfully evolved toward a modular monolith.

This talk presents a major open source Go project in the middle of that evolution. It covers how we are moving from a decade-old layered architecture toward a modular design while continuing to ship features to multiple production environments that teams actively depend on every day. This is not theory. This is architectural change in a live system, with real contributors, long CI pipelines, social constraints, and legacy assumptions embedded throughout the code.

What began as an investigation into slow build and test times exposed deeper problems: oversized packages, uncontrolled dependencies, unclear boundaries, and an architecture that no longer matched how engineers actually reasoned about the system. We walk through familiar pain points in large Go monoliths, including tight coupling, long feedback cycles, and frequent merge conflicts, and explain why these problems persist even in well-intentioned codebases. You’ll see where our early attempts stalled, how architectural changes regressed quietly over time, why good ideas alone were not enough, and how steady, incremental refactoring helped us regain control without freezing development. Architecture only works when people agree to carry it together.

Whether you are working on a fast-growing Go project or maintaining a mature production system, this talk will give you concrete techniques and mental models for evolving your architecture safely. You’ll leave with a clearer understanding of how to introduce boundaries that hold, align ownership with code, and make a large system feel smaller, safer, and easier to change, so teams can move faster without needing to hold the entire system in their heads.

 "Brewed for Speed: How Go’s Green Tea GC Works" ( 2026 )

Sunday at 10:00, 30 minutes, UB5.132, Go, Jesús Espino, video

Go’s runtime has always prided itself on efficient, low-latency garbage collection. But modern hardware brings new challenges. More cores, bigger caches, and heavier workloads. In this talk, we’ll start by exploring how Go’s current garbage collector and memory allocator work together to manage memory. Then we’ll dive into the new GreenTea GC, an experimental redesign that cleans memory in groups (“spans”) instead of object-by-object. You’ll learn how it works, why it matters, and what this “span-based” approach could mean for your Go programs in the future.

I'll be exploring the Go source code: https://go.dev

 "Inside Reflection" ( 2026 )

Sunday at 10:30, 30 minutes, UB5.132, Go, Valentyn Yukhymenko, slides, video

Reflection is a form of metaprogramming that often feels like magic — letting you inspect and manipulate your code at runtime. But there's no magic here at all — just clever engineering that makes your programs simpler and more flexible.

In this talk, we'll take a look at how reflection actually works under the hood in Go. We'll explore how types and values are represented at runtime, what really happens when you call reflect.ValueOf or reflect.TypeOf, and how the compiler keeps this dynamic capability simple, yet powerful in its implementation.

After this talk, reflection will look a little less mysterious — and a lot more elegant.

 "Understanding Why Your CPU is Slow: Hardware Performance Insights with PerfGo" ( 2026 )

Sunday at 11:00, 30 minutes, UB5.132, Go, Christian Simon, slides, video

The Problem

Go's pprof tells you where your CPU time is spent, but not why the CPU is slow. Is it cache misses? Branch mispredictions? These hardware-level performance characteristics are invisible to pprof but critical for optimisation.

The Solution

perf-go bridges this gap by leveraging Linux's perf tool and CPU Performance Monitoring Units (PMUs) to expose hardware performance counters for Go programs. It translates perf's low-level observations into pprof's familiar format, giving Go developers hardware insights without leaving their existing workflow.

What You'll Learn

In this talk, we'll:

  • Demonstrate the limitations of pprof for understanding performance bottlenecks
  • Show how perf-go exposes CPU cache behaviour, branch prediction, and memory access patterns
  • Walk through real benchmarks where we identify and fix cache-line contention issues
  • Explore how hardware counters can guide improvements that pprof alone wouldn't reveal

Target Audience

Go developers who want to optimise performance-critical code and understand the "why" behind their bottlenecks. Basic familiarity with profiling concepts is helpful but not required.

 "Concurrency + Testing = synctest" ( 2026 )

Sunday at 11:30, 30 minutes, UB5.132, Go, Ronna Steinberg, video

Go 1.25 introduced testing/synctest, a package that brings deterministic scheduling and control over concurrency during tests. For developers who struggle with flaky tests, hidden data races, or hard-to-reproduce timing issues, synctest offers a powerful solution: it lets you run concurrent code in a bubble, so you can efficiently explore interleavings, force edge cases, and prove correctness.

In this talk, we’ll explore the motivation behind synctest, and dive into the testing patterns it enables. We’ll walk through practical examples of converting existing tests to use synctest. The session includes a demo illustrating how synctest can turn an intermittently failing test into a deterministic one, and surface bugs that traditional tests might miss.

Whether you build concurrent systems, maintain production Go services, or simply want more reliable tests, this talk will give you a solid understanding of what synctest brings to Go—and how you can start using it today.

 "gomodjail: library sandboxing for Go modules" ( 2026 )

Sunday at 12:00, 30 minutes, UB5.132, Go, Akihiro Suda, slides, video

Open source is under attack. Most notably, the xz/liblzma backdoor incident (CVE-2024-3094) has shown how even trusted and widely adopted libraries can be compromised. Also, since February 2025, the Go language community has been observing an enormous number of malicious Go modules being published with fake GitHub stars and very plausible contents.

This session introduces gomodjail, an experimental tool that “jails” Go modules by applying syscall restrictions using seccomp and symbol tables, so as to mitigate potential supply chain attacks and other vulnerabilities. In other words, gomodjail provides a "container" engine for Go modules but in finer granularity than Docker containers, FreeBSD jails, etc.

gomodjail focuses on simplicity; a security policy for gomodjail can be applied just by adding a // gomodjail:confined comment to the go.mod file of the target program.
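Based on the description above, a confinement policy might look like this in a target program's go.mod. The module paths are illustrative, and the exact placement of the comment is an assumption here — check the repository for the precise policy syntax:

```
module example.com/myapp

go 1.23

require (
	github.com/some/untrusted-dep v1.2.3 // gomodjail:confined
)
```

The idea is that the annotated module is restricted to a reduced syscall set, while the rest of the program runs unconfined.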

The session will discuss its design, implementation details, limitations (e.g., support for modules that use "unsafe" pointers or reflections), and the plan to improve its robustness and performance.

Repository: https://github.com/AkihiroSuda/gomodjail

 "Resilient file uploading with Go" ( 2026 )

Sunday at 12:30, 30 minutes, UB5.132, Go Marius Kleidl , video

File uploads are a ubiquitous and fundamental part of modern applications. While simple at first, they become increasingly challenging as file sizes grow. Users expect reliable data transfers, even when uploading multi-gigabyte files over unreliable mobile networks.

Conventional file uploads over HTTP fail unrecoverably when the underlying connection is interrupted. Resumable uploads, on the other hand, allow an application to continue uploading a file exactly where it left off. This preserves previously transferred data and greatly improves the user experience.

Tusd is an open-source file-upload server written in Go that makes it easy to add resumable uploads to any application - even those written in languages other than Go.

This talk explores why Go is a natural fit for such use cases. In particular, we dive into how contexts help coordinate concurrent, long-running HTTP requests, how the net/http package provides fine-grained control over request handling, and how Go’s tooling assists in testing various failure scenarios.

Additional links:

  • Tus homepage: https://tus.io/
  • Tusd upload server: https://github.com/tus/tusd

 "Profile-Guided Optimization (PGO) in Go: current state and challenges" ( 2026 )

Sunday at 13:00, 30 minutes, UB5.132, Go Alexander Zaitsev , slides , video

Profile-Guided Optimization (PGO) is a well-known compiler optimization technique that feeds runtime statistics about how an application actually executes back into the Ahead-of-Time (AoT) compilation model. Support for PGO was added to the Go compiler only recently, and the technique is still not widely used.
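For context, the basic PGO workflow in Go (generally available since Go 1.21) looks roughly like this. The pprof endpoint, port, and package path are assumptions about a service with net/http/pprof enabled:

```shell
# 1. Collect a representative CPU profile from the running service.
curl -o cpu.pprof 'http://localhost:8080/debug/pprof/profile?seconds=30'

# 2. Place it where the toolchain looks by default: a file named
#    default.pgo in the main package's directory is picked up automatically.
cp cpu.pprof cmd/myapp/default.pgo
go build ./cmd/myapp

# 3. ...or pass the profile explicitly.
go build -pgo=cpu.pprof ./cmd/myapp
```

Much of the talk's subject matter — tooling friction, profile collection, and keeping profiles fresh — lives around exactly these steps.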

In this talk, I want to discuss the current PGO state in the Go ecosystem. During my work on the Awesome PGO project, I gathered a lot of interesting data points and insights about various PGO issues and discussed many related quirks with different stakeholders like end-users, maintainers, and application developers. We will talk about:

  • PGO state across Go compilers
  • PGO awareness across the Go industry
  • PGO tooling issues
  • Real-world strengths and weaknesses of PGO modes for different use cases
  • Top blockers for PGO adoption
  • And many other things!

I believe that after this talk more people will be aware of PGO and its usual adoption blockers, and will know how to avoid these limitations in practice.

Target audience: performance-oriented Go users and Go compiler engineers

 "How to Instrument Go Without Changing a Single Line of Code" ( 2026 )

Sunday at 13:30, 30 minutes, UB5.132, Go Kemal Akkoyun Hannah Kim , slides , video

Zero-touch observability for Go is finally becoming real. In this talk, we’ll walk through the different strategies you can use to instrument Go applications without changing a single line of code, and what they cost you in terms of overhead, stability, and security.

We’ll compare several concrete approaches and projects:

Beyond what exists today, we’ll look at how ongoing work in the Go runtime and diagnostics ecosystem could unlock cleaner, safer hooks for future auto-instrumentation, including:

Throughout the talk, we’ll use benchmark results and small, realistic services to compare these strategies along three axes:

  • Performance overhead (latency, allocations, CPU impact)
  • Robustness and upgradeability across Go versions and container images
  • Operational friction: rollout complexity, debugging, and failure modes

Attendees will leave with a clear mental model of when to choose eBPF, compile-time rewriting, runtime injection, or USDT-based approaches, how OpenTelemetry’s Go auto-instrumentation fits into that picture, and where upcoming runtime features might take us next. The focus is strongly practical and open-source: everything shown will be reproducible using publicly available tooling in the Go and OpenTelemetry ecosystems.

 "Making of GoDoctor: an MCP server for Go development" ( 2026 )

Sunday at 14:00, 30 minutes, UB5.132, Go Daniela Petruzalek , video

This session will explore the development of GoDoctor, a Model Context Protocol server designed to improve the experience of coding with AI agents. This is not a product presentation: GoDoctor is really a playground for testing different kinds of tools applied to coding with LLMs, and in this talk I will focus on the experiments I ran, reporting on both successes and failures. The ultimate goal is to understand what works and what doesn’t for improving code generation for Go projects.

 "Systems Programming: Lessons from Building a Networking Stack for Microcontrollers" ( 2026 )

Sunday at 14:30, 30 minutes, UB5.132, Go Patricio Whittingslow , video

Developing Go for microcontrollers with 32 kB of RAM requires a big shift in thinking, more so if you are trying to get a complete networking stack with Ethernet, TCP/IP, and HTTP to run on such a device.

Over the past years we've learned how to minimize memory usage and make programs run performantly on these small devices by adopting patterns common in the embedded industry, some of which make working with Go an even better experience than the norm.

This talk explores the tried and tested "Embedded Go" programming patterns we've found to work and make developing in Go a pleasure, no matter the hardware context:

  • Avoiding pointers to structs within structs: rethinking the Configure() method
  • Zero-value programming
  • Eliminating heap allocations during runtime
  • Reusing slice capacity
  • Bounded memory growth in your program
  • Safe pointer-to-slice with generational index handles

 "Extending sqlc: augmented generation of repositories in Go" ( 2026 )

Sunday at 15:00, 30 minutes, UB5.132, Go Nikolay Kuznetsov , video

This talk explores how to bridge the type-safe queries generated by sqlc (a SQL compiler) with a clean service architecture using the Crush coding agent, which is open source and built entirely in Go.

sqlc generates strongly typed database access code, but using its structs directly can couple business logic to schema details. Crush can automate the creation of a repository layer on top of the sqlc-generated artifacts: repositories work with domain entities and orchestrate transactions, while preserving compile-time type safety.

In this talk, Crush leverages augmented generation (a reference implementation plus custom commands or skills) to keep the produced code consistent and idiomatic. It also generates tests first, using testify/suite, testcontainers-go, gofakeit, and go-cmp, then refines the repository code until the tests pass.

The result is a practical Go-based workflow that reduces boilerplate, ensures consistency across repositories, and demonstrates how open source LLM tooling can enhance real-world Go development — without sacrificing simplicity or type safety.

 "My old trains have a second life, with TinyGo!" ( 2026 )

Sunday at 15:30, 30 minutes, UB5.132, Go Florian Forestier , video

In the 1970s, model trains were a popular hobby. Thanks to low production costs, anyone could afford a small HO gauge layout. The system was simple: a train, a motor, 12V DC in the track, and off you go!

Fifty years later, we took our trains out of the attic with a big idea in mind: to convert them to digital. With some Seeed Studio hardware, Bluetooth, and TinyGo, we managed to build a functional railway network where we can manage the speed, direction, and lights of each train individually.

 "Go Around The World Without Wires" ( 2026 )

Sunday at 16:00, 30 minutes, UB5.132, Go Ron Evans , video

In this next edition of the "Go Wireless" saga, we will take Go to new heights...

 "Go Lightning Talks" ( 2026 )

Sunday at 16:30, 30 minutes, UB5.132, Go Maartje Eyskens , video

Come speak! As in every edition, the last hour of the Go devroom will be open for 5-minute lightning talks. The CfP for this will open shortly before the event and close 90 minutes before the session starts.