University of Bristol
Modern software increasingly demands memory-safe languages, with Rust emerging as a leading choice. Yet rewriting large legacy codebases is notoriously slow, error-prone and costly. This talk explores automatic approaches to translating C and Go code into Rust while preserving semantics and producing maintainable results. Our approach combines program analysis, testing and fuzzing with the generative capabilities of Large Language Models. I will also share lessons learned and highlight current limitations in scaling translation across languages and real-world systems.
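As a hedged illustration of the kind of translation involved (this is not output of the system described in the talk), consider a small C function and an idiomatic, memory-safe Rust counterpart:

```rust
// Hypothetical example of C-to-Rust translation. The C source:
//
//   int sum(const int *a, size_t n) {
//       int s = 0;
//       for (size_t i = 0; i < n; i++) s += a[i];
//       return s;
//   }
//
// An idiomatic Rust translation: the raw pointer/length pair becomes a
// slice, so bounds and aliasing are enforced by the type system rather
// than by programmer discipline.
fn sum(a: &[i32]) -> i32 {
    a.iter().sum()
}

fn main() {
    assert_eq!(sum(&[1, 2, 3]), 6);
    assert_eq!(sum(&[]), 0);
    println!("ok");
}
```

A maintainable translation replaces unsafe idioms (pointer arithmetic, manual bounds tracking) with native Rust abstractions, rather than transliterating the C line by line; checking that both versions agree on test and fuzzing inputs is one way to gain confidence that semantics are preserved.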
Cristina David is a Senior Lecturer (Associate Professor) and Royal Society University Research Fellow in the Department of Computer Science at the University of Bristol. Before joining Bristol, she held a Royal Society University Research Fellowship at the University of Cambridge and a Research Associate position at the University of Oxford. She earned her PhD in Computer Science from the National University of Singapore. Her research spans programming languages, program analysis and synthesis, and software verification. Recently, her work has focused on neurosymbolic code generation and code translation, integrating program analysis with large language models to improve the correctness and reliability of the produced code.
University of Cambridge
Compilers are the core of our software ecosystem. Yet maintaining proofs of their formal correctness conflicts with the need for fast evolution of our compiler stacks. Programming languages such as C++, Rust, Julia, and Lean use LLVM as their backend, and LLVM’s MLIR project extends LLVM technology to hardware design, cryptography, quantum computing, and the current AI software stack. Fast evolution is key to delivering peak performance for the latest workloads, so streamlined software development is a must, and slowdowns such as those introduced by additional formal reasoning are undesirable. We ask: how can we bring formal methods to such a complex compiler ecosystem? We discuss the state of formal verification in the LLVM ecosystem and share our work towards bringing formal verification to MLIR-based compilers. As a first step, we introduce SMT-level abstractions as first-class verification dialects in MLIR. We then look at recent efforts to verify peephole optimizations at scale using the Lean interactive theorem prover (ITP), detailing our work on bitvector reasoning and the development of relevant decision procedures. After reporting on our recent work on incorporating authoritative RISC-V semantics, we discuss future steps to further increase formal reasoning in LLVM.
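To give a flavour of the goals involved, the following minimal Lean 4 sketch (assuming a recent toolchain that provides the `bv_decide` bitblasting tactic; it is illustrative, not taken from the work described) verifies a classic peephole rewrite over 32-bit bitvectors:

```lean
import Std.Tactic.BVDecide

-- A classic peephole optimization: multiplying by a power of two is the
-- same as shifting left, here for 32-bit machine integers. The proof is
-- discharged automatically by bitblasting to SAT.
example (x : BitVec 32) : x * 8 = x <<< 3 := by
  bv_decide
```

Verifying such rewrites at scale means discharging thousands of goals of this shape, which is why efficient bitvector decision procedures are central to the effort.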
Tobias Grosser is an Associate Professor at the University of Cambridge. Before that, he worked as a Reader at the University of Edinburgh, as an Ambizione Fellow at ETH Zurich, and as a Google PhD Fellow at INRIA/Paris VI/ENS Paris. Tobias and his research group have a decade-long history of contributing to the LLVM ecosystem. He developed polyhedral loop optimizations in LLVM/Polly, worked on hardware design with LLHD/CIRCT, and developed the xDSL Python-native MLIR-style compiler framework. In recent years, he has turned his attention to formal methods in the context of the Lean ITP.
University of Oxford
Modern computing systems increasingly involve autonomous agents acting concurrently. They also often operate under uncertainty: communication mechanisms can be unreliable, hardware components can fail and sensors can yield imprecise information. Probabilistic model checking is an automated verification technique that can be used to provide quantitative guarantees on the correctness, safety and reliability of such systems. In this talk, I will present some of the advances that have been made in this area in order to provide robust verification and control of autonomous systems in uncertain environments. This includes the use of multi-agent models and game-theoretic verification methods, and the development of techniques to learn, model and reason about epistemic uncertainty in system models.
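To make the setting concrete, here is a minimal sketch (written in Rust rather than the PRISM modelling language; the function and the toy three-state chain are illustrative only) of the core numerical step behind probabilistic model checking: value iteration for the probability of eventually reaching a target state in a discrete-time Markov chain.

```rust
// Illustrative sketch: approximate, for each state of a discrete-time
// Markov chain, the probability of eventually reaching `target`.
// `p[s][t]` is the one-step transition probability from s to t.
fn reachability(p: &[Vec<f64>], target: usize, iters: usize) -> Vec<f64> {
    let n = p.len();
    let mut x = vec![0.0; n];
    x[target] = 1.0; // the target is reached with probability 1 from itself
    for _ in 0..iters {
        let mut next = x.clone();
        for s in 0..n {
            if s != target {
                // Probability of reaching the target = expected value of
                // the reachability probability after one step.
                next[s] = (0..n).map(|t| p[s][t] * x[t]).sum();
            }
        }
        x = next;
    }
    x
}

fn main() {
    // Toy model of an unreliable channel: from state 0, delivery succeeds
    // (state 1, the target) with probability 0.5, otherwise the channel
    // fails permanently (absorbing state 2).
    let p = vec![
        vec![0.0, 0.5, 0.5],
        vec![0.0, 1.0, 0.0],
        vec![0.0, 0.0, 1.0],
    ];
    let probs = reachability(&p, 1, 100);
    println!("{:.2}", probs[0]); // prints 0.50
}
```

Tools such as PRISM answer queries like "P=? [ F delivered ]" by solving exactly this kind of fixed-point computation, alongside exact linear-equation solving and many further property classes.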
Dave Parker is a Professor of Computer Science at the University of Oxford. His research is in formal verification, with a particular focus on the analysis of probabilistic systems, and he leads the development of the widely used verification tools PRISM and PRISM-games. His current research interests include verification techniques for applications in AI and machine learning, including robust methods for quantifying uncertainty and game-theoretic approaches.