Events in room UB2.252A (Lameere)

Sat

 "PostgreSQL and MySQL, Two Databases, Three Perspectives" ( 2026 )

Saturday at 10:30, 50 minutes, UB2.252A (Lameere), Databases: Rohit Nayak, Shlomi Noach, Ben Dicken, Pep Pla

In this session, four seasoned database administrators with sound knowledge of both PostgreSQL and MySQL present an unbiased comparison of the two technologies. Attendees will learn about the architectural and DX differences between the world's two most popular databases.

Pep Pla, with his peculiar sense of humour, will open the session with a deep dive into the two systems' MVCC architectures. The audience will learn why we need MVCC. Postgres and MySQL take very different approaches to implementation: Postgres relies on row versioning and vacuuming dead tuples, while MySQL makes in-place changes and tracks versions in the undo log.
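The two strategies can be contrasted in a few lines. The toy Python model below (invented class and names, nothing like the real engines' internals) sketches Postgres-style row versioning: an update appends a new tuple version, old snapshots keep seeing the old version, and a vacuum step reclaims dead tuples that no snapshot needs anymore.

```python
# Toy sketch (not Postgres internals): every row version stays in the table,
# readers pick the newest version their snapshot can see, and VACUUM later
# reclaims versions no active snapshot needs.

class MVCCTable:
    def __init__(self):
        self.versions = []  # list of (xmin, value); xmin = creating txn id

    def update(self, txid, value):
        self.versions.append((txid, value))  # old version stays behind (dead tuple)

    def read(self, snapshot_txid):
        # Newest version created at or before the reader's snapshot.
        visible = [v for xmin, v in self.versions if xmin <= snapshot_txid]
        return visible[-1] if visible else None

    def vacuum(self, oldest_active_snapshot):
        # Keep the newest version the oldest snapshot can see, plus newer ones.
        newer = [(x, v) for x, v in self.versions if x > oldest_active_snapshot]
        older = [(x, v) for x, v in self.versions if x <= oldest_active_snapshot]
        self.versions = older[-1:] + newer  # dead tuples dropped

row = MVCCTable()
row.update(txid=1, value="v1")
row.update(txid=2, value="v2")             # v1 is now a dead tuple
assert row.read(snapshot_txid=1) == "v1"   # an old snapshot still sees v1
assert row.read(snapshot_txid=2) == "v2"
row.vacuum(oldest_active_snapshot=2)
assert len(row.versions) == 1              # dead tuple reclaimed
```

MySQL's InnoDB inverts this: the row is changed in place and old versions are reconstructed on demand from the undo log, so there are no dead tuples in the table itself.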

A broad-strokes overview from Ben Dicken, who has worked closely with both, will emphasize where ecosystem cross-pollination would help. This includes differences in table storage, bloat management, replication, and process-per-connection vs thread-per-connection architecture.

Postgres and MySQL take fundamentally different approaches to logical replication. Rohit Nayak and Shlomi Noach will examine how these designs affect WAL/binlog retention, backpressure, and CDC workloads, explore their failover implications, and highlight key feature-parity gaps between the two systems.

 ""Drop-in Replacement": Defining Compatibility for Postgres and MySQL Derivatives" ( 2026 )

Saturday at 11:25, 25 minutes, UB2.252A (Lameere), Databases: Jimmy Angelakos, Daniël van Eeden, slides, video

The success of open source databases like PostgreSQL and MySQL/MariaDB has created an ecosystem of derivatives claiming "drop-in compatibility." But as the distance between upstream and these derivatives grows, user confusion and brand dilution can follow.

To address this, we explore the challenge of compatibility with de facto standards from two distinct angles: a governance perspective on defining the compatibility criteria, and a systems engineering case study on implementing them.

  1. The Standard: We present the findings from the "Establishing the PostgreSQL Standard" working group held at PGConf.EU 2025. This progress report details the community's consensus on the hard requirements needed to fix the "wild west" of marketing claims, including:
    • Core SQL: Defining the non-negotiable functions, types, and PL/pgSQL.
    • Protocol: Why wire compatibility is insufficient without consistent transactional and pg_catalog behaviour.
    • Ecosystem: The critical requirements for integration with logical replication and tools like Patroni.
  2. The Implementation: Maintaining compatibility with MySQL/MariaDB in TiDB, a distributed database engine, is far more complex than matching syntax for an evolving SQL dialect:
    • We explore the architectural friction of making TiDB speak the MySQL wire protocol and support the MySQL syntax.
    • We cover compatibility with MySQL's binary-log-based replication.

 "Jack of all trades: query federation in modern OLAP databases" ( 2026 )

Saturday at 11:55, 20 minutes, UB2.252A (Lameere), Databases: Nicoleta Lazar, slides, video

As analytics ecosystems grow more diverse, organisations increasingly need to query data across warehouses, data lakes, and operational systems without excessive movement or duplication. Query federation has become essential, enabling unified SQL access and intelligent pushdown into heterogeneous sources. This talk introduces the core principles of federation, explains why it matters for modern OLAP workloads, and shows how this approach differs from Trino's.
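The pushdown idea is easy to demonstrate. Below is a minimal Python sketch (hypothetical function and data, not any real engine) contrasting a naive federated scan, which transfers the whole remote table, with a pushed-down predicate where the source ships only matching rows.

```python
# Toy sketch of predicate pushdown in a federated query engine.

def remote_scan(table, predicate=None):
    """Simulate a remote source; returns (rows, rows_transferred)."""
    rows = [r for r in table if predicate is None or predicate(r)]
    return rows, len(rows)

# A "remote" table: 1000 orders, 10% of them from BE.
orders = [{"id": i, "country": "BE" if i % 10 == 0 else "US"} for i in range(1000)]
pred = lambda r: r["country"] == "BE"

# Naive federation: transfer everything, filter in the engine.
all_rows, moved_naive = remote_scan(orders)
local = [r for r in all_rows if pred(r)]

# With pushdown: the source evaluates the predicate.
pushed, moved_pushed = remote_scan(orders, predicate=pred)

assert local == pushed                               # same answer
assert moved_naive == 1000 and moved_pushed == 100   # far fewer rows moved
```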

Using StarRocks as a model system, we highlight its vectorized execution engine, native connectors, and deep Apache Iceberg integration that together deliver high-performance lakehouse querying. We examine common lakehouse challenges—schema evolution, file fragmentation, and object-storage latency—and show how federation and hot/cold data separation help address them.

Finally, we explore federating additional sources such as Elasticsearch, PostgreSQL, and Apache Paimon to build a unified analytical architecture.

 "Cracking Down the Code: What Really Happens When You Run a SELECT?" ( 2026 )

Saturday at 12:20, 20 minutes, UB2.252A (Lameere), Databases: Charly Batista, video

We all write SQL, but how many of us have looked under the hood of a relational database like PostgreSQL? This talk is a deep dive into the guts of the database engine, tracking a simple SELECT statement from the moment you hit "Enter" to the final result set.

We'll lift the veil on the core components: the parser, the planner (and the optimizer's black magic!), and the executor, and see how they transform a text string into a low-level, high-performance operation. Using a live, interactive session on a PostgreSQL instance, we'll expose the role of the shared buffer cache, explain why an index works (or doesn't), explore the true cost of I/O, and understand the significance of the write-ahead log (WAL) for read operations.
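A quick way to watch a planner change its mind is Python's stdlib sqlite3 module; the talk targets PostgreSQL, but the parse/plan/execute stages are analogous. The sketch below shows the plan flipping from a full table scan to an index search once an index exists.

```python
# Illustration with SQLite (stdlib stand-in for Postgres's parser/planner/executor):
# EXPLAIN QUERY PLAN exposes the planner's chosen strategy.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(1000)])

def plan(sql):
    # Each plan row's last column is a human-readable plan step.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT name FROM users WHERE id = 42"
before = plan(q)                                   # no index: full scan
conn.execute("CREATE INDEX idx_users_id ON users(id)")
after = plan(q)                                    # index search

assert "SCAN" in before
assert "idx_users_id" in after
assert conn.execute(q).fetchone() == ("user42",)
```

In PostgreSQL the equivalent experiment is `EXPLAIN` / `EXPLAIN ANALYZE` before and after `CREATE INDEX`.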

Whether you're a developer frustrated with slow queries or a database administrator looking to squeeze out every millisecond of performance, you'll leave this talk with a mental model that demystifies query execution and gives you the knowledge to write queries that fly.

 "Server, Storage Engine, Protocol, Client: Suspects of a MySQL Performance Mystery" ( 2026 )

Saturday at 12:45, 20 minutes, UB2.252A (Lameere), Databases: Vitor Oliveira, slides, video

While optimizing a new heap storage engine across both MySQL and a PostgreSQL-based database, we encountered a puzzling result: on MySQL the throughput stalled below 500k tpmC, while on the other database it exceeded 1 million tpmC. The mystery deepened when three different TPC-C benchmarks each told a conflicting story about MySQL’s speed.

This talk details the systematic investigation to resolve these contradictions and reclaim the lost performance. We’ll walk through the methodical process of isolating variables across the entire software stack, dissecting benchmark implementations, profiling execution end-to-end with advanced tools, analyzing client/server protocol behavior, and comparing query optimization plans.

The investigation revealed that the performance gap was not caused by a single flaw, but by a cascade of inefficiencies in multiple areas of the stack. Subtle issues in query planning, protocol handling, and client-side implementation conspired to create overwhelming overhead. By addressing these interconnected problems holistically – through optimizer fixes, protocol enhancements, and client improvements – we transformed MySQL’s performance profile to reveal the engine’s true potential.

The outcome was a dramatic turnaround: with additional improvements, the new engine on MySQL now reaches almost 2 million tpmC.

This case study underscores a critical lesson: database performance, for OLTP workloads in particular, is determined not by any single component, but by the precise alignment of the entire database stack, from the client down to the storage engine.

 "Real-Time AI Powered by RonDB" ( 2026 )

Saturday at 13:10, 5 minutes, UB2.252A (Lameere), Databases: Mikael Ronström, video

RonDB is a high-performance, MySQL-compatible distributed database engineered for real-time, latency-critical workloads. Built on decades of development in the MySQL NDB Cluster—led by the original founder of the NDB product—RonDB extends the NDB storage engine with new capabilities, cloud-native automation, and modern APIs tailored for large-scale AI and online services.

This talk will describe how RonDB consistently delivers 1–4 ms latency even for large batched operations involving hundreds of rows and multi-megabyte payloads, and will explain the architectural techniques that make such performance possible. We will highlight RonDB’s role as the online feature store powering the Hopsworks Real-Time AI platform, deployed in production at companies such as Zalando for personalized recommendations and other low-latency machine-learning applications.

The session will also introduce key components of the RonDB ecosystem:

rondb-helm – Kubernetes and Helm tooling for deploying, managing, and scaling RonDB clusters in cloud environments.

rondb-tools – Scripts and automation utilities for quickly setting up local or distributed RonDB testbeds.

New API layers, including:
  • A REST API server offering batch key operations, batch scans, and aggregated SQL queries.
  • An experimental Redis-compatible interface, enabling RonDB to act as a durable, high-throughput backend behind standard Redis commands.

We will outline the active collaboration between the RonDB team and Oracle’s MySQL NDB Cluster engineers, and how RonDB extends and complements the upstream NDB ecosystem. In addition, we will present ongoing cooperation with Datagraph to build a SPARQL interface to RonDB, leveraging Datagraph’s Common Lisp NDB API.

Attendees will come away with a clear understanding of how RonDB achieves its performance characteristics, how it integrates with modern real-time AI pipelines, and how to deploy, operate, and experiment with RonDB using the available open-source tools.

GitHub repositories: https://github.com/logicalclocks/rondb https://github.com/logicalclocks/rondb-helm https://github.com/logicalclocks/rondb-tools https://github.com/datagraph/cl-ndbapi/

Web sites of note: https://rondb.com https://docs.rondb.com https://hopsworks.ai https://blog.dydra.com/@datagenous/blog-catalog

 "DuckDB in the Cloud: A Simple, Powerful SQL Engine for Your Lakehouse" ( 2026 )

Saturday at 13:15, 5 minutes, UB2.252A (Lameere), Databases: Gábor Szárnyas, Guillermo Sanchez, Tom Ebergen, slides, video

DuckDB has traditionally been seen as a last-mile analytics powerhouse, the fastest way to run a SQL query on your laptop. But DuckDB offers more than just fast SQL, of course; it supports full database semantics and ACID transactions, behaving like a fully fledged, in-process OLAP database. The in-process component has sometimes been viewed as a limitation when considering DuckDB as a data warehouse.

However, DuckDB now supports reading and writing to most Open Table Formats (OTFs), including Iceberg, Delta, and DuckLake. This capability puts DuckDB in a very different position: it allows DuckDB to act as a SQL engine in the cloud (or on your local machine) and run queries against any OTF stored in remote cloud storage. DuckDB can now be the all-mighty, single-node query engine that powers your data analytics use cases.

 "Cube, dbt and Grafana: the OSS stack that blends Data Analytics with Observability data" ( 2026 )

Saturday at 13:20, 5 minutes, UB2.252A (Lameere), Databases: Sam Jewell, video

Observability data isn’t typically blended with the data that your Analysts are working with. These data types are typically stored in entirely separate databases, and interrogated through different tools.

But that needn’t be the case. At Grafana Labs we’ve started blending this data together to answer questions that we or our customers have, such as:
  • How much revenue did that downtime cost me?
  • How did latency impact sales last Black Friday?
  • Which customers were impacted by that incident, and which ones are the highest priority to follow up with?

The FOSS projects we’re combining to get there are:
  • The LGTM stack (github.com/grafana) for Observability
  • Cube core (cube.dev/docs/product/getting-started/core) for the Semantic Layer
  • dbt core (github.com/dbt-labs/dbt-core) for transforming SQL data
  • Grafana itself to blend, visualise and even alert on the end result

During this talk I’ll describe how you too can fit these pieces together and use them to answer similar questions for your own context.

 "Data on Kubernetes / stateless storage" ( 2026 )

Saturday at 13:25, 5 minutes, UB2.252A (Lameere), Databases: Matthias Crauwels, video

Everyone is running their applications on Kubernetes these days. Most of the time the application servers are stateless, which makes this easy: the database behind the application is responsible for storing the state. But what if you also want to run your database on the same Kubernetes stack? Will you use StatefulSets? Will you use network-attached storage? These types of storage introduce a lot of disk latency because of the mandatory network hops. This is why, in many environments, the database servers are still dedicated machines treated as pets while the rest of the fleet is more like cattle.

In this session I will speak about how we run our databases on Kubernetes using local ephemeral storage for the data, and how we are confident we will not lose it in the process!

 "Delegating SQL Parsing to PostgreSQL" ( 2026 )

Saturday at 13:35, 20 minutes, UB2.252A (Lameere), Databases: Greg Potter, slides, video

PostgreSQL already knows how to parse SQL, track object dependencies, and understand your schema. Most tools that work with schemas reimplement this from scratch. What if you just asked Postgres instead?

This talk digs into the techniques that make that possible. We’ll start with the shadow database pattern: applying schema files to a temporary PostgreSQL instance and letting Postgres handle all parsing and validation. Then we’ll explore pg_depend and the system catalogs, where PostgreSQL tracks that your view depends on a function, which depends on a table, which depends on a custom type. I’ll show the exact catalog queries that extract this dependency graph, the edge cases that make it interesting (extension-owned objects, implicit sequences, array types, function bodies that pg_depend can’t see), and how to turn it all into a correct topological ordering for migration generation.
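Once the catalog queries yield a set of "X depends on Y" edges, the ordering step is a plain topological sort. A minimal sketch with Python's stdlib graphlib (object names invented for illustration; the real input would come from pg_depend):

```python
# Turn extracted dependency edges into a valid creation order for migrations.
from graphlib import TopologicalSorter

# edges extracted from pg_depend: object -> set of objects it depends on
depends_on = {
    "view_active_users": {"fn_is_active"},
    "fn_is_active":      {"tbl_users"},
    "tbl_users":         {"type_user_status"},
    "type_user_status":  set(),
}

# static_order() emits prerequisites before their dependents,
# and raises CycleError on circular dependencies.
order = list(TopologicalSorter(depends_on).static_order())

assert order.index("type_user_status") < order.index("tbl_users")
assert order.index("tbl_users") < order.index("fn_is_active")
assert order.index("fn_is_active") < order.index("view_active_users")
```

Dropping objects uses the reverse of this order.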

I learned this while building pgmt, a tool that diffs PostgreSQL schemas to generate migrations. But the techniques apply to anything that needs to understand a Postgres schema -- linters, drift detectors, visualization tools, CI validation -- and they let you build on Postgres’s own knowledge instead of reinventing it.

 "Replicating Transactional Databases to ClickHouse : Transaction Log Analysis and Time Travel" ( 2026 )

Saturday at 14:00, 20 minutes, UB2.252A (Lameere), Databases: Arnaud Adant, slides, video

This talk discusses the design choices behind an open source project leveraging Debezium: https://github.com/Altinity/clickhouse-sink-connector It reliably replicates data to ClickHouse, a well-known open source real-time analytics database that can be deployed anywhere. The sink connector provides an alternative to proprietary solutions that typically lock people in or are only available in the cloud. It works with MySQL, MariaDB, Postgres, Oracle (experimental) and MongoDB. As a bonus, binary log analysis and Time Travel will also be presented.

 "You do not need an ORM" ( 2026 )

Saturday at 14:25, 20 minutes, UB2.252A (Lameere), Databases: Giacomo, slides, video

Using SQL from other programming languages can prove to be quite the hassle: wrangling the database rows into the host's language types is tedious and error prone, and making sure the application code stays up to date with the ever-changing database schema is just as challenging.

To address these developer experience shortcomings, ORMs try to shield the developer from ever having to write any SQL at all. This doesn't feel totally satisfying though: as developers we are always keen on using the right language for the job, so what would it look like to fully embrace SQL instead of trying to abstract it away?

In this talk we'll look at Squirrel (https://github.com/giacomocavalieri/squirrel), a library that tackles database access in Gleam (https://gleam.run): a functional, statically-typed language. We'll explore how code generation from raw SQL can help bridge the gap between the database and a functional language without compromising on type-safety, performance or developer experience.
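The core idea, generating a callable wrapper per named query instead of hand-writing glue code, can be sketched in any language. Below is a toy Python version (invented annotation format, not Squirrel's, and SQLite instead of Postgres) that turns annotated SQL into functions:

```python
# Toy code generation from raw SQL: each "-- name:" block becomes a function.
import sqlite3

SQL_SOURCE = """
-- name: find_user(user_id)
SELECT id, name FROM users WHERE id = ?;
"""

def generate(sql_source):
    """Produce {name: callable} wrappers from annotated SQL."""
    funcs = {}
    blocks = [b for b in sql_source.split("-- name: ") if b.strip()]
    for block in blocks:
        header, query = block.split("\n", 1)
        name = header.split("(")[0].strip()
        def wrapper(conn, *args, _q=query.strip()):
            return conn.execute(_q, args).fetchall()
        funcs[name] = wrapper
    return funcs

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada')")

queries = generate(SQL_SOURCE)
assert queries["find_user"](conn, 1) == [(1, "ada")]
```

Squirrel goes further: it runs each query against a live database at build time, so the generated Gleam functions carry precise static types for parameters and result rows.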

 "Working with Filesystem in Time Series Database" ( 2026 )

Saturday at 14:50, 20 minutes, UB2.252A (Lameere), Databases: Aliaksandr Valialkin, slides, video

Time series databases face the significant challenge of processing vast amounts of data. At VictoriaMetrics, we are actively developing an open-source time series database entirely from scratch in Go. Our average installation handles between 2 and 4 million samples per second during ingestion, with larger setups managing over 100 million samples per second on a single cluster. In this presentation, we will explore various techniques essential for constructing write-heavy applications, such as:
  • Understanding and mitigating write amplification.
  • Implementing instant database snapshots.
  • Safeguarding against data corruption after power outages.
  • Evaluating the advantages and disadvantages of using a write-ahead log.
  • Enhancing reliability in Network File System (NFS) environments.
Throughout the talk, we will illustrate these concepts with real code examples sourced from open-source projects.
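The write-ahead-log point can be illustrated with a toy model: append each change to a log before applying it, and rebuild state by replaying the log after a crash. A minimal Python sketch (invented class; real WALs add fsync, checksums, and segment rotation):

```python
# Toy write-ahead log: the log is the source of truth, state is rebuildable.
import json

class TinyWAL:
    def __init__(self):
        self.log = []        # stands in for an append-only file
        self.state = {}

    def set(self, key, value):
        # Log record first; apply to in-memory state only once it is durable.
        self.log.append(json.dumps({"op": "set", "k": key, "v": value}))
        self.state[key] = value

    @classmethod
    def recover(cls, log):
        db = cls()
        db.log = list(log)
        for rec in log:      # replay records in append order
            r = json.loads(rec)
            db.state[r["k"]] = r["v"]
        return db

db = TinyWAL()
db.set("temp", 21)
db.set("temp", 22)
crashed_log = db.log                     # the only thing surviving a "crash"
restored = TinyWAL.recover(crashed_log)
assert restored.state == {"temp": 22}
```

The trade-off the talk weighs: every ingested sample is written twice (log plus data files), which matters at millions of samples per second.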

 "Contributing to MariaDB & Postgres" ( 2026 )

Saturday at 15:15, 25 minutes, UB2.252A (Lameere), Databases: Kevin Biju, Georgi Kodinov, slides, video

1) Contributing to MariaDB (Georgi): Learn how to contribute to the MariaDB server codebase. And be prepared for what it takes. And see what you will learn along the way.

Have you ever wondered what it would take to actually get your contribution into the MariaDB server codebase?

We will take one specific contribution and follow it through the process. It's a bug fix contribution: 2 lines of actual code change. On smaller codebases, used by fewer people, this would probably have taken minutes to process. It is somewhat different with the MariaDB server's codebase. But for a very good reason!

2) Contributing to Postgres (Kevin): Contributing to open source can feel intimidating early in your career, especially with a project as widely used and critical as Postgres. Often, confidence comes after action; the first patch is the hardest. Even small contributions can reach thousands of people.

This talk traces my path from setting up a local build and gaining familiarity with the codebase to contributing bug-fix patches and documentation updates. It also outlines how the Postgres development process and community operate. The aim is to demystify the process so that more engineers feel confident contributing to Postgres and leave with the context and practical steps to make their first (or next) patch.

 "Magical Mystery Tour: A Roundup of Observability Datastores" ( 2026 )

Saturday at 15:45, 20 minutes, UB2.252A (Lameere), Databases: Josh Lee, slides, video

From plain-old Postgres to the Grafana stack (Loki, Grafana, Tempo, and Mimir), OpenSearch, Cassandra, and ClickHouse, the landscape of telemetry storage options is as vast as it is overwhelming. With so many choices, how do we decide which datastore is right for the job? In this talk, Joshua will guide attendees through the foundational principles of telemetry—covering metrics, traces, logs, profiles, and wide events—and break down the strengths and limitations of different database technologies for each use case. We’ll examine how traditional relational databases like Postgres can still hold their own, where OpenSearch and Prometheus fit into the picture, and why specialized stacks like LGTM (Loki, Grafana, Tempo, Mimir) are so popular in modern observability pipelines. And, of course, we’ll highlight the growing role of ClickHouse as a versatile and high-performance option for logs, traces, and more, and VictoriaMetrics as a drop-in replacement for Prometheus. By the end of this session, attendees will have a clearer understanding of the trade-offs between these datastores and how to make informed decisions based on the unique requirements of their systems. Whether you’re building an observability stack from scratch or looking to optimize an existing setup, this tour of the observability datastore landscape will leave you better equipped to navigate the options.

 "Multi writer CDC Challenges" ( 2026 )

Saturday at 16:10, 20 minutes, UB2.252A (Lameere), Databases: Sunny Bains, slides, video

Change Data Capture (CDC) has become foundational for real-time analytics, cross-region replication, event-driven systems, and streaming ingestion pipelines. Databases like MySQL and Postgres expose change streams through a single-writer replication log, so CDC in these systems is comparatively trivial. Modern distributed SQL databases like TiDB require a fundamentally different design and face bigger challenges, because they need to order the writes of multiple writers and deal with millions of tables.

This talk is about TiCDC’s architecture and how its event-driven pipeline handles thousands of writers and millions of tables. To preserve total order, TiCDC must merge, order, and stream updates arriving concurrently from multiple regions, Raft groups, and storage nodes, while maintaining correctness and low latency.
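The heart of that ordering problem is a k-way merge of per-writer streams that are each already sorted by commit timestamp. A minimal Python sketch (made-up events, nothing from TiCDC's codebase):

```python
# Merge per-writer change streams into one totally ordered CDC stream.
import heapq

# (commit_ts, writer, change) streams, each already sorted by commit_ts
writer_a = [(1, "a", "row1"), (4, "a", "row4"), (7, "a", "row7")]
writer_b = [(2, "b", "row2"), (3, "b", "row3"), (9, "b", "row9")]
writer_c = [(5, "c", "row5"), (6, "c", "row6")]

# heapq.merge does a lazy k-way merge: O(log k) per event, k = writer count.
merged = list(heapq.merge(writer_a, writer_b, writer_c))

timestamps = [ts for ts, _, _ in merged]
assert timestamps == sorted(timestamps)          # total order preserved
assert timestamps == [1, 2, 3, 4, 5, 6, 7, 9]
```

The hard parts the talk covers start where this sketch ends: streams arrive over the network with lag, so the merger must also track per-writer watermarks before it can safely emit an event.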

This talk will explore the challenges and the evolution of TiCDC’s design over several iterations, with lessons learnt the hard way.

 "Inverted database indexes: The why, the what, and the how." ( 2026 )

Saturday at 16:35, 20 minutes, UB2.252A (Lameere), Databases: Deleted User, slides, video

Database usage in practice often involves heavy text processing. For example, in "observability" use cases, databases must extract, store, and search billions of log messages daily. Most databases, including many column-oriented OLAP databases, struggle with such massive amounts of text data. The only way to process text data at scale is by using specialized inverted indexes in databases.

This presentation explains how inverted indexes work and which (text) search patterns they support. Where appropriate, we describe our experience and the gotchas we encountered when adding an inverted index to ClickHouse, one of the most popular open-source databases for analytics.
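The basic mechanism is compact enough to show inline. A toy Python inverted index (illustrative only; production indexes add tokenization rules, posting-list compression, and skip pointers):

```python
# Toy inverted index: token -> set of document ids containing it.
# A term query touches only the postings lists instead of scanning every line.
from collections import defaultdict

docs = {
    0: "connection refused by upstream",
    1: "connection reset by peer",
    2: "disk full on node 3",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        index[token].add(doc_id)

def search(*terms):
    """AND-query: intersect the postings lists of all terms."""
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

assert search("connection") == {0, 1}
assert search("connection", "reset") == {1}
assert search("timeout") == set()
```

The challenge at ClickHouse scale is doing this over billions of log lines while keeping the index compatible with columnar storage and granule-level skipping.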

 "Apache Arrow, Hostage Negotiator: Revisiting the case for Client Protocol Redesign" ( 2026 )

Saturday at 17:00, 20 minutes, UB2.252A (Lameere), Databases: Matthew Topol, video

In 2017, Mark Raasveldt and Hannes Mühleisen (who went on to create DuckDB) presented a VLDB paper entitled “Don’t Hold My Data Hostage – A Case For Client Protocol Redesign.” Their paper proposed the use of columnar serialization to achieve order-of-magnitude improvements in query result transfer performance. Eight years later, this talk revisits Raasveldt and Mühleisen’s argument and describes the central role that the Apache Arrow project has played in realizing this vision—through the dissemination of Arrow IPC, Arrow Flight, Arrow Flight SQL, Arrow over HTTP, and ADBC across numerous open source and commercial query systems. The talk concludes with a call to action to introduce Arrow-based transport to the systems that continue to “hold data hostage.”
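The paper's core observation, that fixed-width columnar buffers beat per-row serialization on the wire, can be approximated with stdlib stand-ins (toy data; Arrow's actual IPC format adds schemas, buffer alignment, and zero-copy reads):

```python
# Row-wise serialization (generic object encoding, here pickle as a stand-in)
# vs. packing each column as one contiguous fixed-width buffer.
import array
import pickle

rows = [{"id": i, "score": i * 2} for i in range(10_000)]

# Row-wise: per-row structure and tagging overhead.
row_wise = pickle.dumps(rows)

# Columnar: two contiguous int32 buffers, no per-row overhead.
cols = {"id": array.array("i", (r["id"] for r in rows)),
        "score": array.array("i", (r["score"] for r in rows))}
columnar = b"".join(c.tobytes() for c in cols.values())

assert len(columnar) == 2 * 10_000 * cols["id"].itemsize  # fixed width
assert len(columnar) < len(row_wise)                      # smaller on the wire
```

Contiguous buffers also decode without per-value branching, which is where the order-of-magnitude transfer gains in the paper come from.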

 "From Disks to Distributed: Our Journey of Database Evolution in the Cloud" ( 2026 )

Saturday at 17:25, 20 minutes, UB2.252A (Lameere), Databases: Thor, slides, video

Our database had reached a point where failure scenarios were becoming increasingly complex and time consuming. A single node could take up to 15 minutes to recover. It was expensive to run and operate, and it simply couldn’t scale to meet the customer demand we were facing. It became clear that we needed a new design. By leveraging a modern architecture and the latest open-source technologies, we rebuilt our database for the cloud era. Recoveries that once took 15 minutes now complete in seconds. Operational costs dropped by 50%, and query latencies improved by 200%. These gains weren’t the result of any single change, but of a holistic redesign powered by technologies like Vortex, DataFusion, Delta Lake, and Rust.

In this talk, Thor will walk you through the end-to-end journey of this evolution:
  • the failure patterns and scaling limits that forced a rethink,
  • the architectural principles that guided the redesign,
  • the trade-offs and dead ends along the way,
  • how modern open-source components were evaluated and integrated, and
  • the concrete performance and reliability improvements unlocked by the new design.

You’ll leave with a blueprint for modernizing a legacy data system: how to identify when your architecture is holding you back, and how to apply today’s open-source ecosystem to build a cloud-native database that’s fast, resilient, and ready for the future.

 "Federating Databases with Apache DataFusion: Open Query Planning and Arrow-Native Interoperability" ( 2026 )

Saturday at 17:50, 20 minutes, UB2.252A (Lameere), Databases: Michiel De Backker, Ghasan Mohammad (hozan23), slides, video

Apache DataFusion is emerging as a powerful open-source foundation for building interoperable data systems, thanks to its strongly modular design, Arrow-native execution model, and growing ecosystem of extension libraries. In this talk, we'll explore our contributions to the DataFusion ecosystem—most notably DataFusion Federation for cross-database query execution and DataFusion Table Providers that connect DataFusion to a wide range of backends.

We'll show how we use these components to federate queries to databases such as TiDB and InfluxDB 2, and how this fits into a broader data fabric/API generation work we're doing at Twintag. We'll also discuss our work on Arrow-native interfaces, including an Arrow Flight SQL Server implementation for DataFusion and a prototype Flight SQL endpoint for TiDB, which together enable a fully Arrow-based pipeline spanning query submission, execution, and federated dispatch.

The session highlights practical patterns for building distributed data infrastructure using open libraries rather than monolithic systems, and offers a look at where Arrow and DataFusion are headed as shared interoperability layers for modern databases.

 "LSM vs. B‑Tree: RocksDB and WiredTiger for Cloud‑Native Distributed Databases" ( 2026 )

Saturday at 18:15, 20 minutes, UB2.252A (Lameere), Databases: Franck Pachot, slides, video

Cloud-native databases often use open-source embedded key-value stores on each node or shard. OLTP workloads are read- and write-intensive, typically relying on indexes for data access. Two main on-disk structures are prevalent: B-Trees, such as WiredTiger, and LSM-Trees, like RocksDB. This talk explores the similarities and differences in their internal implementations, as well as the trade-offs among read, write, and storage amplification. It also compares these structures to traditional fixed-size block storage in RDBMS and discusses the differences in caching the working set in memory and ensuring durability through write-ahead logging.
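The amplification trade-off can be put into rough numbers. The model below is deliberately crude (made-up constants, not RocksDB or WiredTiger behaviour): a B-Tree rewrites a whole page per random row update, while an LSM-Tree writes rows sequentially but rewrites them once per compaction level.

```python
# Back-of-the-envelope write amplification for random single-row updates.
ROWS = 100_000
ROW_BYTES = 100
PAGE_BYTES = 8_192
LSM_LEVELS = 4          # each row is rewritten once per compaction level

# B-Tree: each random update dirties (at least) one page on disk.
btree_written = ROWS * PAGE_BYTES

# LSM: rows are written sequentially once, then rewritten per level.
lsm_written = ROWS * ROW_BYTES * (1 + LSM_LEVELS)

logical = ROWS * ROW_BYTES
btree_waf = btree_written / logical
lsm_waf = lsm_written / logical

assert round(btree_waf, 1) == 81.9   # page-granularity writes dominate
assert lsm_waf == 5.0                # compaction passes dominate
```

The model ignores what the talk then weighs against this: the B-Tree's lower read amplification (one page lookup vs. checking multiple LSM levels) and the space amplification of not-yet-compacted LSM runs.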

 "How to Prevent Your AI from Returning Garbage: It Starts and Ends with Data Engineering" ( 2026 )

Saturday at 18:40, 20 minutes, UB2.252A (Lameere), Databases: Matt Yonkovit (The Yonk), video

Your AI application returns wrong answers. Not because of your LLM choice or vector database—but because of the data engineering (or lack thereof) nobody wants to talk about.

This technical deep dive shows why embedding models, chunking strategies, and search filtering have more impact on AI accuracy than switching from one model to another. Using real production data, we'll demonstrate how naive vector search returns Star Trek reviews when users ask about Star Wars, how poor chunking strategies lose critical context (who wants their AI to respond to a question about fixing a headache with a head transplant?), and why "just use a vector" without proper data engineering guarantees hallucinations.

We'll cover:

  • Embedding model selection: dimensions, token limits, and silent truncation failures
  • Chunking strategies: when to chunk, how to preserve context, and the double-embedding approach
  • Hybrid search: combining Full Text/BM25 keyword matching with vector similarity
  • Filtering architecture: pre-filter vs post-filter performance trade-offs
  • Production gotchas: triggers, performance, batch processing, and cold start problems
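The hybrid-search bullet can be illustrated with a toy scorer (made-up documents, vectors, and a 50/50 weighting, not the talk's production setup): cosine similarity alone barely separates the two reviews, while blending in keyword overlap picks the right one.

```python
# Hybrid search sketch: blend keyword overlap with vector cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def keyword_score(query, text):
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q)

# Pretend embeddings: both sci-fi reviews sit very close in vector space.
docs = {
    "Star Wars is a space opera classic": [0.90, 0.10, 0.40],
    "Star Trek explores strange new worlds": [0.88, 0.12, 0.42],
}
query_text = "star wars review"
query_vec = [0.89, 0.11, 0.41]

sims = {text: cosine(query_vec, vec) for text, vec in docs.items()}
assert max(sims.values()) - min(sims.values()) < 0.01  # vectors alone: near tie

# Blend: 50% keyword score, 50% vector similarity (weights are made up).
scores = {text: 0.5 * keyword_score(query_text, text) + 0.5 * sims[text]
          for text in docs}
best = max(scores, key=scores.get)
assert best.startswith("Star Wars")  # exact terms break the tie
```

Real systems typically use BM25 rather than raw overlap and tune the blend weight per corpus, but the failure mode and the fix are the same.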

While many of the examples will be for PostgreSQL, this talk is database-agnostic: whether you are using PostgreSQL, MariaDB, ClickHouse, or something else, you will learn something! In AI Land, the hard problem is always data engineering, not database selection.

Users don't care about inference speed—they care about accuracy. This talk shows how to engineer your data pipeline so your AI doesn't lie.

Sun

 "Bringing WebAssembly to constrained devices with Rust: Runtimes, tooling, and real-world tradeoffs" ( 2026 )

Sunday at 09:00, 25 minutes, UB2.252A (Lameere), Rust: Fedor Smirnov, slides, video

In this talk, we will share the insights we gained while building Myrmic, our open-source Rust middleware for distributed systems, with a particular focus on our microcontroller firmware that enables running WebAssembly on resource-constrained targets such as Nordic and ESP devices. Our entire stack is Rust-based—from Embassy firmware and the embedded HAL to the Wasm toolchain and the runtimes themselves. We will outline the requirements that running Wasm in no_std environments imposes on runtimes, particularly in the context of distributed systems with constrained devices. We will then share our experience with Rust-native runtimes such as wasmtime, wasmi, and tinywasm and with embedding WAMR into Rust firmware, focusing on how each runtime aligned with these requirements and the modifications or integration work needed to support our use case. We will also discuss how we structure and compile our Wasm modules, and the trade-offs we make between developer ergonomics, code portability, and the memory footprint of the resulting binaries. The goal of this talk is to provide practical lessons for Rust developers, highlight gaps in today’s embedded-Wasm tooling, and point out opportunities for new open-source contributions.

Links to relevant projects:
  • wasmtime (https://github.com/bytecodealliance/wasmtime)
  • wamr (https://github.com/bytecodealliance/wasm-micro-runtime)
  • wasmi (https://github.com/wasmi-labs/wasmi)
  • tinywasm (https://github.com/explodingcamera/tinywasm)
  • A link to Myrmic is not yet available, since it will be open-sourced in early 2026

 "Rust meets cheap bare-metal RISC-V" ( 2026 )

Sunday at 09:30, 25 minutes, UB2.252A (Lameere), Rust: Marcel Ziswiler, slides, video

With Rust's embedded-hal v1.0 released almost two years ago, Rust is really ready for bare-metal embedded deployments. This talk will look at cheap RISC-V MCUs like the CH32V003 (RPi Pico or Teensy style, for less than EUR 1.30) and show you hands-on how you can develop your embedded system using bare-metal Rust: from zero to main, using probe-rs for flashing and debugging, IDE integration thereof, and some tricks like successfully using no-std for the tiniest footprint possible.

 "RustBoy: A Rust journey into Game Boy dev" ( 2026 )

Sunday at 10:00, 25 minutes, UB2.252A (Lameere), Rust: Federico Bassini, slides, video

Tired of the endless “search, copy, paste, test” routine, I broke out of that loop at last year’s FOSDEM thanks to two unexpected sparks: Rust and retrogaming. In this talk, I’ll share how discovering Rust and the Game Boy homebrew scene rekindled my passion for creating bare-metal software, bringing back the artistic and playful side of programming. We’ll explore the current state of the Rust ecosystem for GB, GBC, and GBA development — from compilers and minimalist engines to demo ROMs and the ongoing work bridging these two worlds.

Links: https://github.com/ffex/rust-boy

 "Async Rust in Godot 4: Leveraging the engine as a runtime" ( 2026 )

Sunday at 10:30, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Jovan Gerodetti , video

Godot 4’s built-in async support for GDScript has long been a powerful feature - but what about Rust? In February 2025, I implemented async support for Godot’s Rust bindings (godot-rust), enabling Rust developers to write async code without introducing external runtimes.

In this talk, I’ll walk you through the architecture: how we adapted Godot’s async execution (already designed for GDScript) to work with Rust’s Future and async abstractions, and how we minimized cost by reusing the engine’s existing event loop and task scheduler. We’ll dive into the implementation details - including how Godot-specific futures are constructed, polled, and scheduled - and discuss the challenges we faced when implementing async while interacting with the engine's C++ code over FFI on potentially multiple threads.

This talk will be especially valuable for developers interested in embedding async logic in Godot and other game engines, or in understanding how async can be made “engine-native” rather than bolted on externally.
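The core trick of letting an engine's frame loop drive Rust futures can be sketched with a std-only toy. This is not godot-rust's actual implementation; `TickFuture`, `FrameWaker`, and `drive` are invented names, and the real bindings reuse Godot's scheduler rather than a busy loop:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// A future that completes after asking to be re-polled `remaining` times,
// standing in for "wait N engine frames".
struct TickFuture {
    remaining: u32,
}

impl Future for TickFuture {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.remaining == 0 {
            Poll::Ready(())
        } else {
            self.remaining -= 1;
            cx.waker().wake_by_ref(); // request another poll next frame
            Poll::Pending
        }
    }
}

// A waker that just flags "this task wants another poll".
struct FrameWaker {
    ready: Mutex<bool>,
}

impl Wake for FrameWaker {
    fn wake(self: Arc<Self>) {
        *self.ready.lock().unwrap() = true;
    }
}

// Drive a future to completion from a frame loop, returning how many frames
// (polls) it took; the engine's own loop plays the role of the executor.
fn drive(fut: impl Future<Output = ()>) -> u32 {
    let mut task = Box::pin(fut);
    let flag = Arc::new(FrameWaker { ready: Mutex::new(true) });
    let waker: Waker = flag.clone().into();
    let mut frames = 0;
    loop {
        // Only poll tasks that were woken since the last frame.
        if !std::mem::replace(&mut *flag.ready.lock().unwrap(), false) {
            continue;
        }
        frames += 1;
        let mut cx = Context::from_waker(&waker);
        if task.as_mut().poll(&mut cx).is_ready() {
            return frames;
        }
    }
}

fn main() {
    println!("finished after {} frames", drive(TickFuture { remaining: 3 }));
}
```

The point of the design is that no extra thread or runtime is spawned: wakeups are recorded and serviced by a loop the host already runs every frame.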

 "Common Expression Language (CEL) in Rust" ( 2026 )

Sunday at 11:00, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Alex Snaps , slides , video

The Common Expression Language (CEL) is an expression language that’s fast, portable, and safe to execute in performance-critical applications. CEL emerged from Google, which, however, never provided a Rust implementation; the CEL crate fills that gap with a parser and interpreter for the language. Given these traits, CEL is a perfect match for any Rust project that requires some sort of expression evaluation. We'll cover why that is the case and where these needs emerged from, then dive into the state of the Rust port of the interpreter, covering some of the challenges met along the way, like reviving the Rust runtime for antlr4.
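To give a feel for what "some sort of expression evaluation" means in practice, here is a deliberately tiny, hand-rolled evaluator for a CEL-like arithmetic subset. The types are invented for illustration; the actual CEL crate parses full CEL source and evaluates it against a context of variables and functions:

```rust
use std::collections::HashMap;

// Invented toy AST; the real crate compiles a program from CEL source text.
enum Expr {
    Num(f64),
    Var(&'static str),
    Add(Box<Expr>, Box<Expr>),
    Gt(Box<Expr>, Box<Expr>),
}

#[derive(Debug, PartialEq)]
enum Value {
    Num(f64),
    Bool(bool),
}

fn eval(e: &Expr, ctx: &HashMap<&str, f64>) -> Value {
    // Helper: evaluate a subexpression that must be numeric.
    fn num(e: &Expr, ctx: &HashMap<&str, f64>) -> f64 {
        match eval(e, ctx) {
            Value::Num(n) => n,
            Value::Bool(_) => panic!("type error: expected number"),
        }
    }
    match e {
        Expr::Num(n) => Value::Num(*n),
        Expr::Var(name) => Value::Num(ctx[*name]),
        Expr::Add(a, b) => Value::Num(num(a, ctx) + num(b, ctx)),
        Expr::Gt(a, b) => Value::Bool(num(a, ctx) > num(b, ctx)),
    }
}

fn main() {
    // Roughly the CEL expression `quota + 1 > used`.
    let expr = Expr::Gt(
        Box::new(Expr::Add(
            Box::new(Expr::Var("quota")),
            Box::new(Expr::Num(1.0)),
        )),
        Box::new(Expr::Var("used")),
    );
    let ctx = HashMap::from([("quota", 3.0), ("used", 4.0)]);
    println!("{:?}", eval(&expr, &ctx)); // 3 + 1 is not greater than 4
}
```

Even this toy shows why CEL suits embedding: evaluation is a pure function of an expression plus a context, with no I/O and no unbounded loops.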

 "Calling JIT-compiled Roto scripts from Rust" ( 2026 )

Sunday at 11:30, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Terts Diepraam , slides , video

Roto is a statically-typed and compiled scripting language for Rust applications that integrates very tightly with Rust. To achieve that integration, it needs to interface directly with Rust types and functions. Implementing that boundary turned out to be quite tricky! We had many obstacles to overcome, such as Rust providing very few mechanisms for reflection and not providing a stable ABI by default. This talk will explain how the Rust-Roto boundary works and the tricks we have to pull along the way. You can expect lots of unsafe code, deep dives into the Rust Reference, and coercions from slices to function pointers.
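A simplified relative of that kind of cast can be shown in std-only Rust. This is an illustrative sketch, not Roto's actual code: a typed `extern "C"` function is erased to a raw pointer and later recovered, the invariant-carrying transmute that any compiled-script boundary relies on:

```rust
extern "C" fn add(a: i64, b: i64) -> i64 {
    a + b
}

// Erase a typed function to a raw address, then recover and call it, the way
// a JIT hands back compiled code as a bare pointer into executable memory.
fn call_through_erased_ptr(a: i64, b: i64) -> i64 {
    let addr: *const () = add as *const ();
    // SAFETY: `addr` came from a function with exactly this extern "C"
    // signature; upholding that invariant is the job of the JIT boundary.
    let f: extern "C" fn(i64, i64) -> i64 = unsafe { std::mem::transmute(addr) };
    f(a, b)
}

fn main() {
    println!("{}", call_through_erased_ptr(2, 40)); // prints 42
}
```

The compiler can no longer check anything once the type is erased, which is why such boundaries concentrate the `unsafe` and document the signature contract in one place.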

 "ClickHouse’s C++ and Rust journey" ( 2026 )

Sunday at 12:00, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Alexey Milovidov , video

Full rewrite from C++ to Rust or gradual integration with Rust libraries? For a large C++ codebase, only the latter works, but even then, there are many complications and rough edges. In my presentation, I will describe our experience integrating Rust and C++ code and some weird and unusual problems we had to overcome.
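Gradual integration typically happens over a C ABI boundary. Here is a minimal sketch of the Rust side; `rust_checksum` is an invented name, and a real build would mark it `#[no_mangle]`, compile it as a staticlib, and link it into the C++ binary:

```rust
// A Rust function with a C-compatible ABI, callable from C++ once exported.
// (A real export would also add #[no_mangle] so C++ can find the symbol.)
pub extern "C" fn rust_checksum(data: *const u8, len: usize) -> u64 {
    // SAFETY: the C++ caller guarantees `data` points to `len` valid bytes;
    // documenting and enforcing such contracts is most of the integration work.
    let bytes = unsafe { std::slice::from_raw_parts(data, len) };
    bytes
        .iter()
        .fold(0u64, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u64))
}

fn main() {
    let input = b"clickhouse";
    // Call it directly from Rust here, standing in for the C++ caller.
    println!("checksum = {}", rust_checksum(input.as_ptr(), input.len()));
}
```

The "rough edges" tend to live exactly at this seam: pointer lifetimes, panic unwinding across the FFI boundary, and keeping the two build systems agreeing on symbols and flags.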

 "Profiling Rust applications with Parca" ( 2026 )

Sunday at 12:30, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Brennan Vincent , video

This talk introduces Parca, a general-purpose CPU, GPU and memory profiler for Linux. The main unique feature of Parca is the fact that unwinding happens in an eBPF program, and so is low-overhead enough to be constantly running in production: it doesn't require building with frame pointers or copying large sections of the stack between memory spaces. The primary mode of visualization in the Parca UI is the flame graph.

Rust-specific features in Parca include:

  • For projects that use jemalloc, memory profiling via rust-jemalloc-pprof.

  • A "custom labels" feature for associating arbitrary application-relevant tags with stack traces (for example, allowing the user to filter profiles by trace ID or any other value they choose to instrument).

 "Building performance-critical Python tools with Rust: Lessons from production" ( 2026 )

Sunday at 13:00, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Cian Butler , video

Python dominates web development, but it often comes with performance and scaling issues. Recently, the Python ecosystem has seen massive performance gains from projects written in Rust, such as uv and ruff. But what other projects are out there to help Python scale thanks to Rust? At Cloudsmith, we achieved 2x throughput on our 10-year-old Django monolith by integrating Rust-based tools, and we contributed features back upstream.

We'll look at a number of projects that helped us start bringing Rust into our stack. We'll go over our methodology: establishing performance baselines through load testing, identifying bottlenecks, and scaling issues. We integrated existing Rust-based tools with minimal code changes and tuned application server configuration for maximum throughput, consolidating infrastructure and reducing operational complexity.

We'll also share our experience contributing observability features upstream to Granian, ensuring production-ready monitoring that benefits the entire community.

You'll leave with actionable strategies for modernising legacy services using existing Rust tools, understanding when this approach makes sense, and maintaining production reliability throughout the transition.

 "Ty: Adventures of type-checking Python in Rust" ( 2026 )

Sunday at 13:30, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Shaygan Hooshyari , video

Ty is a fast Python type checker and language server built with Rust, and there are a lot of interesting technical challenges involved in making it a useful tool.

Building a Python type checker is a hard task; however, the design of Ty provides a joyful developer experience for contributors. In this talk, we will look at how the Ty codebase is structured to make development simple, and what design choices make it pleasant to work on.

We will cover:

  • How Ty goes from a Python program to a diagnostic, and what parts are most important if you want to contribute.

  • Data layout and ownership model of structs in Rust.

  • How Salsa solves the challenge of incremental type checking across many files.

  • Tooling for tests: snapshot testing during development and type checking open-source projects in CI.
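The incremental-checking point can be illustrated with a toy, std-only memoization scheme: results are cached per input revision so unchanged files are never re-checked. This is an invented miniature, not Salsa's actual API:

```rust
use std::collections::HashMap;

// A toy incremental database in the spirit of Salsa (names invented):
// queries are memoized against the revision of their inputs.
struct Db {
    sources: HashMap<&'static str, (u64, String)>, // file -> (revision, text)
    cache: HashMap<&'static str, (u64, usize)>,    // file -> (revision, result)
    recomputed: usize,
}

impl Db {
    fn diagnostics(&mut self, file: &'static str) -> usize {
        let (rev, text) = self.sources[file].clone();
        if let Some(&(cached_rev, count)) = self.cache.get(file) {
            if cached_rev == rev {
                return count; // input unchanged: reuse the old result
            }
        }
        self.recomputed += 1;
        let count = text.matches("err").count(); // stand-in for type checking
        self.cache.insert(file, (rev, count));
        count
    }
}

fn main() {
    let mut db = Db {
        sources: HashMap::from([("a.py", (1, "err err".to_string()))]),
        cache: HashMap::new(),
        recomputed: 0,
    };
    assert_eq!(db.diagnostics("a.py"), 2);
    assert_eq!(db.diagnostics("a.py"), 2); // cache hit, no recompute
    db.sources.insert("a.py", (2, "err".to_string())); // edit bumps revision
    assert_eq!(db.diagnostics("a.py"), 1);
    println!("recomputed {} times", db.recomputed);
}
```

Salsa generalizes this idea to whole dependency graphs of queries, which is what lets a type checker respond to an edit by redoing only the affected work.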

 "Rust in Mercurial: The wider benefits" ( 2026 )

Sunday at 14:00, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Raphaël Gomès Pierre-Yves David , video

From its timid introduction to the Mercurial Version Control System back in 2017 to its more than 50k lines of code today, Rust has enabled a wide range of improvements, some of which we wager would have been impossible if not for Rust.

This talk shows how we reach far beyond the obvious point of "Rust runs faster than Python". It discusses aspects like maintainability, dependency management, API re-designs, opportunities for more advanced algorithms, on-disk data structures, safe parallelism, etc.

We present our rare perspective of working on a 20-year-old codebase with half a million lines of Python code, in a software niche with quite extreme goals: Mercurial aims to provide instant-feeling commands with short-lived processes for a local database of tens of millions of revisions for millions of files with fully distributed replication.

 "Taming Git complexity with Rust and Gitoxide" ( 2026 )

Sunday at 14:30, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Kiril Videlov , video

Git's internal design is both elegant and notoriously complex. Building reliable tooling on top of it means dealing with purpose-built data structures, performance trade-offs, and years of historical quirks.

In this talk, we’ll explore how Rust, together with Gitoxide[0], makes it possible to create fast, correct, and ergonomic version-control tooling. We’ll look at how Rust’s ownership model and type system help avoid whole classes of errors, and how Gitoxide exposes a safe and composable interface to the raw Git data structures.

Using some real-world examples, we’ll walk through:

  • How Git stores its data and why interacting with it is non-trivial

  • How the Gitoxide APIs make this tractable

  • Patterns for building high-level Git workflows

  • A short demo of how these pieces come together in the GitButler CLI

Attendees will come away with a deeper understanding of Git’s inner workings, practical insights into using Gitoxide, and perhaps ideas for creating next-gen developer tooling using Rust.

[0] https://github.com/GitoxideLabs/gitoxide
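As a taste of "how Git stores its data": every loose object is a small header plus payload, which Git then SHA-1 hashes and zlib-compresses. A std-only sketch of just the header layout (the hashing, compression, and packfiles are what gitoxide handles on top):

```rust
// Build the byte sequence Git hashes for a loose object:
// "<type> <size>\0<payload>", e.g. "blob 11\0hello world".
fn loose_object(kind: &str, payload: &[u8]) -> Vec<u8> {
    let mut buf = format!("{} {}\0", kind, payload.len()).into_bytes();
    buf.extend_from_slice(payload);
    buf
}

fn main() {
    let obj = loose_object("blob", b"hello world");
    // The header is "blob 11\0" (8 bytes), followed by the 11 payload bytes.
    assert!(obj.starts_with(b"blob 11\0"));
    println!("{} bytes to hash", obj.len());
}
```

The non-triviality the talk mentions starts immediately after this step: objects may instead live delta-compressed inside packfiles, so a correct reader needs several storage backends behind one API.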

 "Rust Coreutils in Ubuntu: Yes, we rewrote /bin/true in Rust — Here’s what really happened" ( 2026 )

Sunday at 15:00, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Sylvestre Ledru , video

Ubuntu’s plan to “carefully but purposefully oxidise” the distro has given us the perfect playground to see what really happens when you swap decades-old GNU coreutils for their shiny Rust equivalents. Spoiler: everything relies on way more weird flags than you think — and significantly more than the internet’s finest armchair kernel engineers believe.

In this talk, I’ll share the fun, the sharp edges, and the truly unexpected lessons from bringing Rust Coreutils (https://github.com/uutils/coreutils) into Ubuntu: which obscure behaviours scripts secretly depend on, how packaging Essential tools can turn one missing corner-case into a boot failure, what benchmarks actually taught us (as opposed to what Reddit said they would), and how tools like oxidizr (https://github.com/jnsgruk/oxidizr) let us safely flip between GNU and Rust without breaking the universe.

Along the way, we’ll look at some of the best online troll predictions — the “Rust will destroy Linux”, “this is rewriting for the sake of CVs”, and “it will be 100× slower forever” genre — and compare them with what happened in the real world. Some were wrong, some were surprisingly insightful, and some were… educational, in their own way.

If you’re curious about modernizing the Linux system, if you enjoy data-driven myth-busting, or if you simply want field notes from the frontier of “C → Rust” rewrites, this session is for you.

Links:

Ubuntu “oxidising” initiative: https://discourse.ubuntu.com/t/carefully-but-purposefully-oxidising-ubuntu/56995

uutils/coreutils: https://github.com/uutils/coreutils

 "Rethinking network services: Freedom and modularity with Rama" ( 2026 )

Sunday at 15:30, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Glen De Cauwsemaecker , slides , video

Modern networking software often forces developers to choose between rigid, off-the-shelf frameworks and the painstaking effort of building everything from scratch. Rama takes a different path. It’s a modular Rust framework that lets you move and transform packets across the network stack, without giving up control, safety, or composability.

In this talk, I’ll explore together with the audience how Rama’s philosophy of layers, services, and extensions turns network programming into a flexible and enjoyable experience. You’ll see how its building blocks span multiple layers of abstraction, from transport and TLS up to HTTP and a lot more in between, all while you can still easily plug in your own logic or replace existing components. It also shows how you can build network stacks that aren't possible anywhere else, without breaking a sweat: for example, SOCKS5 over TLS. Why not?

Through practical examples, we’ll look at how Rama empowers developers to build everything from proxies and servers to custom network tools, while still benefiting from Rust’s performance and safety guarantees. Whether you’re curious about programmable networking, Rust’s async ecosystem, or just want to build things your own way, this talk will show you how Rama helps you do it, all with elegance and confidence.
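The layers-wrap-services idea can be sketched in a few lines of std-only Rust. These names are invented for illustration and are not Rama's actual API, which is async and far richer:

```rust
// A minimal "service" abstraction: something that handles a request.
trait Service {
    fn call(&self, req: String) -> String;
}

// A leaf service that just echoes the request back.
struct Echo;
impl Service for Echo {
    fn call(&self, req: String) -> String {
        req
    }
}

// A "layer": a service that wraps another service and adds behavior,
// here just tagging the response so the nesting is visible.
struct Prefix<S> {
    inner: S,
    tag: &'static str,
}

impl<S: Service> Service for Prefix<S> {
    fn call(&self, req: String) -> String {
        format!("[{}] {}", self.tag, self.inner.call(req))
    }
}

fn main() {
    // Stacking layers composes, the way transport, TLS, and HTTP stack in a
    // real network pipeline; swapping one layer never touches the others.
    let stack = Prefix {
        tag: "tls",
        inner: Prefix {
            tag: "http",
            inner: Echo,
        },
    };
    println!("{}", stack.call("ping".into()));
}
```

Because each layer only knows the `Service` trait of its inner value, unusual stacks (the SOCKS5-over-TLS example above) fall out of the same composition rule rather than needing special support.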

More information about rama itself can be found at https://ramaproxy.org/, which is developed and maintained by https://plabayo.tech/, a FOSS, consulting and commercial technology (small family) company from Ghent.

https://github.com/plabayo/rama

 "Random seeds and state machines: An approach to deterministic simulation testing in Rust" ( 2026 )

Sunday at 16:00, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Frederic Branczyk , video

Deterministic simulation testing (DST) is a method that explores as many random execution paths of a system as possible, injects random failures, and lets developers reproduce the exact same execution path on failure given an initial random seed. This testing approach shakes out many difficult-to-find bugs before they reach production and greatly increases developer confidence in system correctness when making changes.

DST was first popularized by the FoundationDB team, and is slowly finding its way into the testing arsenal of many products like TigerBeetle, Resonate, and more recently, Turso’s rewrite of SQLite in Rust. This talk will cover how we implemented DST of our distributed storage system at Polar Signals by modelling our core components as state machines, and why this was the right choice for us over other approaches that use deterministic async runtimes (e.g. https://github.com/madsim-rs/madsim).
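The seed-driven idea is small enough to sketch with a toy, std-only simulator (invented names, not our production harness): a deterministic PRNG picks every event, so replaying the same seed replays the same trace:

```rust
// A tiny linear congruential generator; constants from Numerical Recipes.
// Any PRNG works, as long as the seed fully determines the sequence.
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

// Run a toy simulation: the PRNG schedules every event, including injected
// failures, so the whole execution path is a function of the seed.
fn simulate(seed: u64, steps: usize) -> Vec<&'static str> {
    let mut rng = Lcg(seed);
    let mut trace = Vec::new();
    for _ in 0..steps {
        trace.push(match rng.next() % 3 {
            0 => "deliver",
            1 => "drop",
            _ => "crash-restart",
        });
    }
    trace
}

fn main() {
    // The same seed reproduces the exact same execution path, which is what
    // makes a failing run debuggable: ship the seed, not a flaky repro.
    assert_eq!(simulate(42, 5), simulate(42, 5));
    println!("{:?}", simulate(42, 3));
}
```

The hard part in a real system is pushing every source of nondeterminism (time, I/O, thread scheduling) behind this seeded source, which is exactly what modelling components as state machines buys.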

Come learn more about DST and how it can help you write better and more resilient software!

 "Syd: Writing an application kernel in Rust" ( 2026 )

Sunday at 16:30, 25 minutes, UB2.252A (Lameere), UB2.252A (Lameere), Rust Ali Polatel , video

Syd (sydbox-3) is an application kernel written in Rust. This talk is a tour of its runtime architecture and the Rust that makes it portable. We’ll walk through the threads and their roles:

  • syd_main (startup, namespaces, policy load, lock)

  • syd_mon (lifecycle, seccomp-notify plumbing)

  • a CPU-sized pool of syd_emu workers (syscall brokering)

  • syd_ipc (UNIX-socket control when lock:ipc is enabled)

  • syd_int (timers/alarms)

  • syd_aes (AF_ALG crypto for Crypt sandboxing), plus helpers syd-pty and syd-tor

Implementation highlights: minimal unsafe at the syscall edge; per-thread isolation with unshare(CLONE_FS|CLONE_FILES) and per-thread seccomp(2); syscall-argument cookies; forced O_CLOEXEC and randomized FDs; deterministic "last-match-wins" policy; and mseal(2) sealing on lock:on.

Portability is first-class: one codebase for Linux ≥ 5.19 with proper multi-arch support (x86-64/x86/x32, arm64/armv7, ppc64{b,l}e, riscv64, s390x, loongarch64), ILP32/LP64 awareness, and MSRV 1.83+. You’ll leave with concrete patterns for building a thread-isolated, multi-arch syscall broker in Rust.