Welcome to the GCC (GNU Toolchain) devroom from the organizers.
RISC-V now spans 100+ extensions and over a thousand instructions. Binutils, QEMU, and other projects maintain separate instruction definitions, leading to duplication, mismatches, and slower support of new features.
UDB provides a machine-readable, validated source of truth covering most of the ISA. Our generator currently produces Binutils and QEMU definitions directly from UDB, cutting the effort for standard and custom extension bring-up. And with automated CI checks against current Binutils data, everything stays aligned as the ecosystem evolves.
In this talk, we’ll show how UDB enables new and custom extensions by:
Version 6 of the DWARF debugging information format is still a work in progress, with many changes already accepted. This talk will focus on one fundamental change that has been accepted recently: "Issue 230524.1", also known as "Location Descriptions on the DWARF Stack".
The compiler can emit small programs in a bytecode known as DWARF expressions, which a consumer (usually a debugger) evaluates to compute an object's location: where in memory or registers it has been placed. Up until DWARF 5, the execution model of such DWARF expressions was not expressive enough to describe how objects are placed on GPUs, or, in some cases, even on CPUs. DWARF 6 addresses this by making DWARF locations regular elements on the DWARF expression evaluation stack, which has many interesting cascading consequences, including greater expressiveness, better factorization, and more.
In this presentation, we will discuss the execution model of DWARF expressions, the proposed changes, and the follow-up extensions these changes enable.
We present a DWARF-6 expression evaluator implemented in OCaml. The evaluator is concise and lightweight. It aims to help tool developers learn and understand DWARF by examining the precise definitions of DWARF operators and by running examples. We believe this will be useful in particular with the "locations on the stack" change that's coming in DWARF-6.
The evaluator comes with test cases, which can gradually turn into a reference testsuite. There is also a web playground to run and share examples easily (see DWARF Issue 251120.1 for several such examples).
Concurrency in pid 1, and in systemd in general, is a touchy subject. systemd is very trigger-happy when it comes to forking, and when combined with multithreading this causes all sorts of issues, so there's an unwritten policy not to use threads in systemd. This has led to (in my opinion) a sprawling callback hell in every daemon and CLI in the project that performs concurrent operations.
In this presentation I'll present my view on the issues with using threads in systemd, and why cooperative multitasking implemented using green threads can fix many of them while avoiding callback hell. I'll also briefly go over the unique problems you run into when designing a fiber-based system, and the general design for fibers in systemd, finishing with how they're implemented under the hood with ucontext.h.
I'm hoping to get feedback on the approach from the devroom, and bring awareness on how systemd is using the GNU toolchain.
https://github.com/systemd/systemd https://github.com/systemd/systemd/pull/39771
Last year the GCC COBOL runtime library added libxml2 as a dependency because COBOL defines XML parsing and generation as part of the language. Thus was born an engineering challenge and controversy. Should libxml2 become part of GCC? Should it be linked statically or dynamically? Who will be responsible for CVE reports and security updates? Who, indeed, will maintain libxml2, now that the maintainer has stepped down?
Just what every compiler project wants on their plate on a Monday morning.
This talk is about weighing engineering tradeoffs and what those weights are, what we decided was important, and what, probably, we agreed to do.
A brief introduction to the GNU Algol 68 programming language through showcasing a real-world baremetal project. We'll cover:
- How to set up the GNU Algol 68 toolchain for baremetal platforms (Arm and RISC-V microcontrollers).
- How to call C code to access the machine's capabilities.
OpenMP is a widely used framework for parallelizing applications, enabling thread-level parallelism via simple source-code annotations. It follows the fork-join model and relies heavily on barrier synchronization among worker threads. Running OpenMP-enabled applications in the cloud is increasingly popular due to elasticity, fast startup, and pay-as-you-go pricing.
In cloud-based execution, worker threads run inside a virtual machine (VM) and are subject to dual levels of scheduling: threads are placed on guest virtual CPUs (vCPUs), and vCPUs run as ordinary tasks on the host’s physical CPUs (pCPUs). The guest scheduler places threads on vCPUs, while the host scheduler places vCPUs on pCPUs. Because these schedulers act independently, a semantic gap emerges that can undermine application performance. Barrier synchronization, whose efficiency depends on timely scheduling decisions, is vulnerable to this semantic gap, yet remains under-explored.
This talk presents my PhD thesis project supervised by Julia Lawall and Jean-Pierre Lozi at Inria Paris. The thesis defines Phantom vCPUs to describe problematic host-level preemptions in which guest vCPUs remain queued on busy pCPUs, stalling progress. We show that OpenMP performance can be substantially improved inside oversubscribed cloud VMs by (1) dynamically adapting the degree of parallelism (DoP) at the start of each parallel region and (2) dynamically choosing between spinning versus blocking at barriers on a per-thread, per-barrier basis. We propose paravirtualized, scheduler-informed techniques that accurately guide these decisions and demonstrate their effectiveness in realistic deployments.
The first contribution of this thesis is Phantom Tracker, an algorithmic solution implemented in the Linux kernel that leverages paravirtualized task scheduling to detect and quantify Phantom vCPUs accurately. The second contribution is pv-barrier-sync, a dynamic barrier synchronization mechanism driven by the scheduler insights produced by Phantom Tracker. The third and final contribution is Juunansei, an OpenMP runtime extension that demonstrates the practical utility of Phantom Tracker and pv-barrier-sync with additional optimizations.
The talk discusses the context and motivation of this work, followed by a brief introduction to the Phantom Tracker, and then takes a deep dive into the libgomp implementation of pv-barrier-sync and Juunansei.
A critical challenge in C as a general-purpose language is the absence of the notion of secret data in its abstract machine. As a result, information disclosure is poorly detected by compilers, which lack the semantics required to model vulnerabilities related to secret leakage. Numerous dedicated tools exist to overcome this limitation, each of which comes with its own annotation rules, its own tainting model and, more importantly, its own narrow scope targeting one specific disclosure vulnerability. This fragmentation has created confusion for the developers concerned, who are mostly unwilling to support multiple external tools, especially when each addresses one problem at a time. In this talk, we introduce the C constructions required to bring secrets to the GCC compiler through its system of attributes. The resulting framework, which we call GnuSecret, not only defines consistent notations and semantics to designate secrets directly in the GNU C language, but also propagates them throughout the program code by leveraging the symbolic execution engine embedded in the GCC Static Analyzer (GSA). Of particular interest, GnuSecret is not bound to a specific vulnerability: its modular design allows it to model virtually any vulnerability related to MITRE's CWE-200 and its children.
LLVM has recently gained support for an ELF implementation of the AArch64 Pointer Authentication ABI (PAuthABI) for a Linux Musl target. This talk will cover:
* An introduction to the PAuthABI and its LLVM support.
* How to experiment with it on any Linux machine using qemu-aarch64 emulation.
* How to adapt the Linux Musl target to a bare-metal target using LLVM libc.
The AArch64 Pointer Authentication Code instructions are currently deployed on Linux to protect the return address on hardware that has support for it. This limited use case can be deployed in an ABI neutral way and run on existing hardware. The PAuthABI, based on Apple's Arm64E, takes the hardware and software backwards compatibility gloves off, and makes use of pointer authentication for code pointers such as function pointers and vtables.
The main challenge in adapting PAuthABI support for bare-metal is the initialization of global pointers, which on Linux is done by the dynamic loader. We will need to build our own signer that operates before main.
Ever been debugging a production issue and wished you'd added just one more log statement? Now you have to rebuild, wait for CI, deploy... all that time wasted. We've all been there, cursing our past selves.
We've integrated LLVM's XRay into ClickHouse to solve this. It lets us hot-patch running production systems to inject logging, profiling, and even deliberate delays into any function. No rebuild required.
XRay reserves space at function entry/exit that can be atomically patched with custom handlers at runtime. We built three handler types: LOG to add the trace points you forgot, SLEEP to reproduce (or prevent) timing-sensitive bugs, and PROFILE for deterministic profiling to complement our existing sampling profiler. The performance overhead when inactive is negligible.
Control is simple. Send a SQL query such as SYSTEM INSTRUMENT ADD LOG 'QueryMetricLog::startQuery' 'This message will be logged at the start of the function' to patch the function instantly. Results show up in system.trace_log. Remove it just as easily when you're done.
I'll cover the integration challenges (ELF parsing, thread-safety, atomic patching), performance numbers (4-7% binary size increase, near-zero runtime cost), and real production war stories.
Over the past two years, the LLVM community has been building a general-purpose GPU offloading library. While still in its early stages, this library aims to provide a unified interface for launching kernels across different GPU vendors. The long-term vision is to enable diverse projects—ranging from OpenMP® to SYCL™ and beyond—to leverage a common GPU offloading infrastructure.
Developing this new library alongside the existing OpenMP® offloading infrastructure has introduced several interesting challenges, as both share the same plugin system. This is particularly evident in the implementation of the OpenMP® Tools Interface (OMPT).
In this talk, we’ll explore the journey so far:
• Project history – how the effort started and evolved.
• Current architecture – the organization of the offloading library today.
• API design – what the interface looks like and how it works.
• Plugins – the lower-level components that make vendor-specific integration possible.
• Challenges – issues encountered in the current OMPT implementation.
LLVM’s ORC JIT [1] is a powerful framework for just-in-time compilation of LLVM IR. However, when applied to large codebases, ORC often exhibits a surprisingly high front-load ratio: we have to parse all IR modules before execution even reaches main(). This diminishes the benefits of JITing and contributes to phenomena such as the “time to first plot” latency in Julia, one of ORC’s large-scale users [2].
The llvm-autojit plugin [3] is a new experimental compiler extension for automatic just-in-time compilation with ORC. The project reached a proof-of-concept state, where basic C, C++ and Rust programs build and run successfully. It integrates easily with build systems like CMake, make and cargo, making it practical to apply to real-world projects.
In this talk, we will examine the front-loading issue in ORC and explain how llvm-autojit mitigates it. Attendees will learn about pass plugins, LLVM IR code transformations, callgraphs and runtime libraries. And they will see how to experiment with ORC-based JITing in their own projects.
[1] https://llvm.org/docs/ORCv2.html [2] https://discourse.julialang.org/t/time-to-first-plot-clarification/58534 [3] https://github.com/weliveindetail/llvm-autojit
Every new AI workload seems to need new hardware. Companies spend months designing NPUs (neural processing units), then more months building compilers for them—only to discover the hardware doesn't efficiently run their target workloads. By the time they iterate, the algorithm has moved on.
We present a work-in-progress approach that generates NPU hardware directly from algorithm specifications using MLIR and CIRCT. Starting from a computation expressed in MLIR's Linalg dialect, our toolchain automatically generates synthesizable SystemVerilog for custom NPU architectures and hooks it up automatically to a RISC-V control host with an optimized memory hierarchy.
This "algorithm-first" hardware generation inverts the traditional flow: instead of designing hardware then hoping the compiler can use it effectively, we generate hardware that is provably optimal for specific Linalg operations. The approach enables rapid exploration of the hardware/algorithm co-design space: change the algorithm, regenerate the hardware, and immediately see the impact on area, power, and performance. In this talk, we'll demonstrate:
* Live generation of NPU RTL from Linalg operations
* The MLIR dialect stack that bridges high-level algorithms to CIRCT hardware representations
* Performance comparisons between generated hardware and handmade open-source NPUs
* Open questions around generalization vs. specialization trade-offs
This work aims to make hardware generation accessible to compiler engineers and algorithm researchers, not just hardware designers. We'll discuss both the potential and limitations of this approach, and where the research needs to go next.
Target audience: Compiler engineers, hardware architects, ML systems researchers. Basic familiarity with MLIR helpful but not required.
WebAssembly support in Swift started as a community project and became an official part of Swift 6.2. As Swift on WebAssembly matures, developers need robust debugging tools to match. This talk presents our work adding native debugging support for Swift targeting Wasm in LLDB. WebAssembly has some unique characteristics, such as its segmented memory address space, and we'll explore how we made that work with LLDB's architecture. Additionally, we'll cover how extensions to the GDB remote protocol enable debugging across various Wasm runtimes, including the WebAssembly Micro Runtime (WAMR), JavaScriptCore (JSC), and WasmKit.
llvm-mingw is a mingw toolchain (freely redistributable toolchain targeting Windows), built entirely with LLVM components instead of their GNU counterparts, intended to work as a drop-in replacement for existing GNU based mingw toolchains. Initially, the project mainly aimed at targeting Windows on ARM, but the toolchain supports all of i686, x86_64, armv7 and aarch64, and has been getting use also for projects that don't target ARM.
In this talk I describe how the project got started, and how I made a working toolchain for Windows on ARM64 before that even existed publicly.
https://github.com/mstorsjo/llvm-mingw/
C++ remains central to high-performance and scientific computing, yet interactive workflows for the language have historically been fragmented or unavailable. Developers rely on REPL-driven exploration, rapid iteration, rich visualisation, and debugging, but C++ lacked incremental execution, notebook integration, browser-based execution, and JIT debugging. With the introduction of clang-repl, LLVM now provides an upstream incremental compilation engine built on Clang, the IncrementalParser, and the ORC JIT.
This talk presents how the Project Jupyter, Clang/clang-repl, and Emscripten communities collaborated to build a complete, upstream-aligned interactive C++ environment. Xeus-Cpp embeds clang-repl as a native C/C++ Jupyter kernel across Linux, macOS, and Windows, enabling widgets, plots, inline documentation, and even CUDA/OpenMP use cases. Xeus-Cpp-Lite extends this model to the browser via WebAssembly and JupyterLite, compiling LLVM and Clang to WASM and using wasm-ld to dynamically link shared wasm modules generated per cell at runtime.
To complete the workflow, Xeus-Cpp integrates LLDB-DAP through clang-repl’s out-of-process execution model, enabling breakpoints, stepping, variable inspection, and full debugging of JIT-generated code directly in JupyterLab.
The talk will detail how clang-repl, ORC JIT, wasm-ld, LLDB, and LLDB-DAP come together to deliver a modern, sustainable interactive C++ workflow on both desktop and browser platforms, with live demonstrations of native and WebAssembly execution along the way.
LLVM Components Involved : clang, clang-repl, orc jit, wasm-ld, lldb, lldb-dap.
Target Audience : Researchers, Educators, Students, C/C++ Practitioners
Note : Please make sure to check out the demos/links added to the Resource section. These demos would be shown live in the talk.
This year, systemd had a breakup with its bad practice of including unused headers all over the codebase. This resulted in:
I'll present how I went about this work, using clang-include-cleaner, clang-tidy and ClangBuildAnalyzer, and the challenges I faced:
https://github.com/systemd/systemd https://github.com/llvm/llvm-project https://github.com/aras-p/ClangBuildAnalyzer
Cross-compiling C and C++ is still a tedious process. It usually involves carefully crafted sysroots, Docker images and specific CI machine setups. The process becomes even more complex when supporting multiple libcs and libc versions, or architectures whose sysroots are hard or impossible to generate.
In this talk, we present toolchains_llvm_bootstrapped, an open-source Bazel module that replaces sysroots with a fully hermetic, self-bootstrapping C/C++ cross-compilation toolchain based on LLVM.
We dive into how the project wires together three Bazel toolchains:
- A raw LLVM toolchain based on prebuilt LLVM binaries that cross-compiles all target runtimes from source: CRT objects, libc (glibc or musl), libstdc++/libc++, libunwind, compiler-rt, etc.
- A runtime-enabled toolchain that uses those freshly built runtimes to hermetically compile your application code.
- An optional self-hosted toolchain used to build LLVM entirely from source (pre-release, patched, or local branches), which is then used for the two previous stages; all in a single Bazel invocation.
We also showcase unique use cases enabled by this approach:
- Cross-compiling to any target, entirely from source, with little to no configuration.
- Whole-program sanitizer setups that are almost impossible with prebuilt sysroots.
- Targeting arbitrary versions of glibc.
- Setup-free remote execution for cross-compilation tasks.
- Applying patches to LLVM, building a new toolchain and testing it against real-world projects, without manual bootstrapping steps.
Project source code: https://github.com/cerisier/toolchains_llvm_bootstrapped
Welcome to the 7th iteration of the Confidential Computing devroom! In this welcome session, we will give a very brief introduction to confidential computing and the devroom, and we will give an honorable mention to all the folks that contributed to this devroom, whether they are presenting or not.
Hardware extensions for confidential computing establish a strict trust boundary between a virtual machine and the host hypervisor. From the guest’s perspective, any interaction crossing this boundary must be treated as untrusted and potentially malicious. This places significant hardening demands on guest operating systems, especially around firmware interfaces, device drivers, and boot components.
This talk explores how COCONUT-SVSM can act as a trusted proxy between the hypervisor and the Linux guest, restoring trust in key firmware and memory-integrity interfaces. By offloading sensitive interactions to the SVSM, we can simplify guest OS hardening and provide a more secure boot process for confidential VMs.
Currently, QEMU hypervisor based confidential guests on SEV-SNP, SEV-ES and TDX are not on par with non-confidential guests in terms of restartability. For these confidential guests, once the initial state is locked in and the private memory pages and CPU register state are encrypted, the guest's state is finalized and cannot be changed. This means that, in order to restart a confidential guest, a new confidential guest context must be created in KVM, and the CPU registers and private memory pages must be re-encrypted with a different key. Today, this means that upon restart the old QEMU process terminates, and the only way to achieve a reset on these systems is to instantiate a new guest with a new QEMU process.
Resettable confidential guests are important for reasons beyond bringing them on par with non-confidential guests. For example, they are a key requirement for implementation of the F-UKI idea [1][2]. This talk will describe some of the challenges we have faced and our experiences in implementing SEV-ES, SEV-SNP and TDX guest reset on QEMU. A demo will be shown that reflects the current state of progress of this work. A link for the demo video will also be shared. This will be a mostly QEMU-centric presentation, so we will also describe some fundamental concepts of confidential guest implementation in QEMU.
WIP patches based on which the demo will be shown are here [3][4][5]. These patches are posted in the qemu-devel mailing list for review and inclusion into QEMU [6][7][8].
In this talk, I will first introduce Intellectual Property Encapsulation, the confidential computing feature of Texas Instruments MSP430 microcontrollers, and multiple vulnerabilities we have found in it. Then, I will propose two methods of mitigating these vulnerabilities: first, a software-only solution that can be deployed on existing devices; second, a standard-compliant reimplementation of the hardware on an open-source CPU with more advanced security features and an extensive testing framework.
Attacks and software mitigation: https://github.com/martonbognar/ipe-exposure Open-source hardware design and security testing: https://github.com/martonbognar/openipe
Confidential computing is rapidly evolving with Intel TDX, AMD SEV-SNP, and Arm CCA. However, unlike TDX and SEV-SNP, Arm CCA lacks publicly available hardware, making performance evaluation difficult. While Arm's hardware simulation provides functional correctness, it lacks cycle accuracy, forcing researchers to build best-effort performance prototypes by transplanting their CCA-bound implementations onto non-CCA Arm boards and estimating CCA overheads in software. This leads to duplicated efforts, inconsistent comparisons, and high barriers to entry.
In this talk, I will present OpenCCA, our open research framework that enables CCA-bound code execution on commodity Arm hardware. OpenCCA systematically adapts the software stack—from bootloader to hypervisor—to emulate CCA operations for performance evaluation while preserving functional correctness. Our approach allows researchers to lift-and-shift implementations from Arm’s simulation to real hardware, providing a framework for performance analysis, even without publicly available Arm CPUs with CCA.
I will discuss the key challenges in OpenCCA's design and implementation. OpenCCA runs on an affordable Armv8.2 Rockchip RK3588 board ($250), making it a practical and accessible platform for Arm CCA research.
I brought the OpenCCA box, the RK3588 board along with tooling to flash firmware and power-reset, to FOSDEM. During the talk, we will attempt a live demo and boot a confidential VM on OpenCCA to run GPU workloads, with the goal of showcasing how OpenCCA can be used to explore systems research ideas on Arm CCA.
Confidential Computing poses a unique challenge for Attestation Verification. The reason is that the Attester in Confidential Computing is in fact a collection of Attesters, which we call a Composite Attester. One Attester is a Workload which runs in a CC Environment, while the other is the actual platform on which the Workload is executed. The two Attesters have separate Supply Chains: the Workload is deployed by the Workload Owner, while the Platform comes from a different Supplier, say Intel TDX or Arm CCA. Another deployment could be a Workload being trained on a GPU (by means of an integrated TEE) attached to a CPU, to create an end-to-end secure environment. How can one trust such a Workload, along with the CPU which is feeding the training data to it? To trust a Composite Attester through remote attestation, one needs multiple Remote Attestation Verifiers, for example one coming from the CPU Vendor and the other from a GPU Vendor. How do the Verifiers coordinate? Are there topological patterns of coordination that can be standardized?
The presentation will highlight work done in IETF standards and the open-source Project Veraison, covering: 1. Composite Attesters; 2. Remote Attestation through Multiple Verifiers; 3. open-source work in Project Veraison showing how a Composite Attester can be constructed in a standardized manner; 4. open-source work in Project Veraison showing how Multiple Verifiers can coordinate to produce a Combined Attestation Verdict for a Composite Attester.
Please see the following links: https://datatracker.ietf.org/doc/draft-richardson-rats-composite-attesters/
https://datatracker.ietf.org/doc/draft-deshpande-rats-multi-verifier/
Composition of Attesters using Concise Message Wrappers:
- Golang implementation: https://github.com/veraison/cmw
- Rust implementation: https://github.com/veraison/rust-cmw
Attestation results required for constructing compositional semantics:
- Golang implementation: https://github.com/veraison/ear
- Rust implementation: https://github.com/veraison/rust-ear
Verification of Composite Attesters - Arm-CCA https://github.com/veraison/services
We have released sample code for remote attestation on cloud confidential computing services, and this talk reports the lessons learned from it. https://github.com/iisec-suzaki/cloud-ra-sample The samples cover multiple types of Trusted Execution Environments (TEEs): (1) Confidential VMs, including AMD SEV-SNP on Azure, AWS, and GCP, and Intel TDX on Azure and GCP; (2) TEE enclaves using Intel SGX on Azure; and (3) hypervisor-based enclaves using AWS Nitro Enclaves. As verifiers, the samples make use of both open-source attestation tools and commercial services such as Microsoft Azure Attestation (MAA). This talk aims to share these observations to support developers and researchers working with heterogeneous TEE environments and to help avoid common pitfalls when implementing remote attestation on cloud platforms.
A decade after Intel SGX’s public release, a rich ecosystem of shielding runtimes has emerged, but research on API and ABI sanitization attacks shows that their growing complexity introduces new vulnerabilities. What is still missing is a truly minimal and portable way to develop enclaves.
In this talk, we will introduce our recent work on "bare-sgx", a lightweight, fully customizable framework for building SGX enclaves directly on bare-metal Linux using only C and assembly. The initial code was forked from the Linux kernel's selftests framework, a direction explicitly encouraged by prominent kernel developers. By interfacing directly with the upstream SGX driver, bare-sgx removes the complexity and overhead of existing SGX SDKs and library OSs. The result is extremely small enclaves, often just a few pages, tailored to a specific purpose and excluding all other unnecessary code and features. Therefore, bare-sgx provides a truly minimal trusted computing base while avoiding fragile dependencies that could hinder portability or long-term reproducibility.
Although still young, bare-sgx aims to provide a long-term stable foundation for minimal-trust enclave development, reproducible research artifacts, and rapid prototyping of SGX attacks and defenses.
Attested TLS is a fundamental building block of confidential computing. We have defended our position (cf. expat BoF) to standardize the attested TLS protocols for confidential computing in the IETF, and a new Working Group named Secure Evidence and Attestation Transport (SEAT) has been formed to exclusively tackle this specific problem. In this talk, we present the design choices for standardization of attested TLS, namely pre-handshake attestation, intra-handshake attestation, and post-handshake attestation. We present the journey of standardization effort showing replay, diversion and relay attacks on pre-handshake attestation and intra-handshake attestation (see paper and formal proof). We finally present the post-handshake attestation candidate draft for standardization to gather feedback from the community, so that it can be accommodated in the standardization.
We propose a specification that defines a method for two parties in a communication interaction to exchange Evidence and Attestation Results using exported authenticators, as defined in RFC9261. Additionally, we introduce the cmw_attestation extension, which allows attestation credentials to be included directly in the Certificate message sent during the Exported Authenticator-based post-handshake authentication. The approach supports both the passport and background check models from the RATS architecture while ensuring that attestation remains bound to the underlying communication channel.
The WiP implementation uses the veraison/rust-cmw implementation of the RATS conceptual message wrapper. It includes a test which demonstrates using it with QUIC (for transport) and Intel TDX (as the confidential compute platform): tests/quic_tdx.rs.
This talk presents a practical approach to building a high‑assurance core infrastructure for home and small business environments, using modern open firmware on commodity server hardware.
As AI workloads move from cloud to on‑premise, the need for trustworthy and attestable hardware platforms for running models and handling sensitive data becomes critical. But what does "trustworthy" actually mean at the hardware/firmware level, and can we realistically achieve it with today’s platforms?
We will walk through how to build a system based on a modern AMD server board combined with open‑source firmware (coreboot[1] and OpenSIL[2]) to gain more control and transparency across the boot chain. We will discuss:
The goal is to show how open firmware can complement security and confidentiality computing features to create a platform you can actually inspect, reason about, and attest from top to bottom - rather than treating the hardware and firmware as opaque, trusted black boxes.
[1] https://www.coreboot.org/ [2] https://github.com/openSIL/openSIL
As technology advances, the difference in cost and technical specifications between proprietary devices used in professional settings and education grows. We present the OpenFlexure Microscope, a locally manufacturable, 3D-printed brightfield microscope, as a solution to bridge the gap between teaching equipment and the devices medics are expected to use in labs around the world. The OpenFlexure Project develops all the hardware, software and documentation for education across all levels and disciplines. A locally manufacturable and affordable digital microscope allows pathologists to image and practice on local samples - something that is key to medical education - whilst also breaking down international barriers through collaborative initiatives such as the School of Open Pathology, which seeks to connect pathologists around the world. High schools are becoming more invested in interdisciplinary projects across all age groups, and the OpenFlexure Microscope gives students the opportunity to learn about biology, engineering, physics and computing science all in the same package. We have run workshops for high school science teachers around Glasgow, Scotland, where we have shown them how to build the microscope and discussed challenges of integrating our microscope into a classroom environment.
GNU Octave is a programming language intended for numerical computations. Often described as an open-source MATLAB alternative, Octave has been in constant development for more than three decades. Hundreds of engineering departments worldwide and tens of thousands of students have benefited from this free and open-source software. Traditionally, Octave has primarily targeted engineering applications thanks to its advanced computational capabilities with multidimensional arrays. Complementing the core Octave capabilities, Octave Forge long served as the home of numerous packages extending Octave for domain-specific engineering applications. Since 2022, there has been a shift towards expanding the Octave ecosystem beyond engineering applications. This coincided with the development of Octave Packages, a new package indexing system that facilitates the development and integration of Octave packages within the ecosystem. The shift is most evident in the recent advancements of the statistics package, as well as a number of other packages focused on data analysis and visualization. This talk gives a concise presentation of the current state of the Octave ecosystem, with special attention to its educational aspects and its benefits for educators and students alike. The talk will focus on statistics and data analysis with GNU Octave and discuss its educational benefits in these two widely popular fields.
https://octave.org
https://gnu-octave.github.io/packages/
https://github.com/gnu-octave/statistics
https://github.com/pr0m1th3as/datatypes
Processing is one of the most widely used open-source tools for creative coding and computer science education. Since its first release in 2001, it has helped millions of students, artists, and designers learn programming through visual and interactive projects. It has been used in classrooms, art installations, interactive media, and data visualization worldwide. Processing popularized the term creative coding and helped establish it as a field that bridges art, design, and computer science.
The values that shaped Processing (accessibility, creativity, and democratization) remain essential, but the context has changed. Computer science education is dealing with rapid shifts in technology and society and today’s learners encounter a software ecosystem dominated by opaque but tantalizing systems and automation. This raises new questions: What does it mean to learn to code today? Can we re-imagine coding tools in a way that preserves learner agency, curiosity, and critical thinking? Could creative coding hold some of the answers?
In this talk, we’ll share what we’re learning as stewards of Processing and how these efforts invite us to rethink creative coding’s role in the future of computer science education.
More about Processing:
While "AI" is all the rage in current educational debates, the associated skills mostly amount to prompting chatbots and becoming a productive user of commercial offerings. This talk will introduce new approaches to teaching learners about the algorithms that make up modern AI systems, specifically neural networks: how to build them from scratch using free and open-source materials, how to use them to diagnose data sets and enhance personal projects, and how to develop an informed, critical, and skeptical competence towards them.
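To give a flavor of the "from scratch" approach the talk advocates, here is a minimal sketch of a single sigmoid neuron trained by gradient descent, in pure Python with no frameworks. The toy dataset (the logical OR function), learning rate, and epoch count are illustrative choices, not material from the talk itself:

```python
import math
import random

def sigmoid(z):
    # Squash any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: inputs and targets for the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights
b = 0.0                                             # bias
lr = 0.5                                            # learning rate

for _ in range(2000):
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # For sigmoid output with cross-entropy loss, the gradient
        # with respect to the pre-activation is simply (y - target).
        err = y - target
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b))
               for (x1, x2), _ in data]
print(predictions)  # should reproduce the OR truth table: [0, 1, 1, 1]
```

Because OR is linearly separable, a single neuron suffices; extending the same loop to small multi-layer networks is exactly the kind of exercise the open materials build up to.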
Execubot (execubot.fr) is an open-source serious game designed to help students learn Python. It also offers a collaborative environment where both students and teachers can create and submit custom levels. Execubot can be used independently by learners or integrated into the classroom, with teacher-controlled settings that adapt the experience to specific learning objectives.
Hedy is an open-source programming language designed to make programming easier for children. It’s also easy for teachers without a technical background to adopt. Hedy bridges the gap between block-based tools like Scratch and text-based programming in Python.
In this talk, we’ll explore how Hedy gradually introduces programming concepts in three stages: Basic, Advanced, and Expert. Across 16 levels, learners progress from simple print statements to fully functional Python code. We’ll share how teachers around the world use Hedy in classrooms, how our global translator community has made Hedy available in over 40 languages, and how open-source collaboration drives its continuous evolution.
You will gain insight into Hedy’s design principles, pedagogical impact, and the challenges that come with developing an educational programming language. Whether you’re an educator, developer, or open-source contributor, come see how Hedy lowers the barriers to programming and inspires the next generation of coders!