PPLV Group Seminar, 2026
Quantum computers encode data in quantum systems,
which enables us to do computations in totally new ways.
the code that runs on a quantum computer
INIT 5 CNOT 1 0 H 2 Z 3 H 0 H 1 CNOT 4 2 ...
↔
code that makes that code (better)
[Figures: Optimisation | Simulation & Verification | Error Correction]
A complete set of equations for qubit QC

ZX-diagrams are made of spiders:
Spiders can be used to construct basic pieces of a computation, namely...
...which evolve a quantum state, i.e. "do" the computation:
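To make "spiders compose into computations" concrete, here is a minimal numpy sketch (the `spider` helper and its leg ordering are illustrative conventions, not fixed notation): joining a 3-legged Z-spider on the control wire to a 3-legged X-spider on the target wire yields a CNOT, up to a scalar $\sqrt{2}$.

```python
import numpy as np

def spider(color, legs, phase=0.0):
    """A spider as a tensor: amplitude 1 on all-zeros, e^{i*phase} on
    all-ones, 0 elsewhere; an X-spider is a Z-spider with a Hadamard
    on every leg."""
    t = np.zeros([2] * legs, dtype=complex)
    t[(0,) * legs] = 1.0
    t[(1,) * legs] = np.exp(1j * phase)
    if color == "X":
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        for k in range(legs):
            t = np.moveaxis(np.tensordot(t, H, axes=([k], [0])), -1, k)
    return t

# CNOT = sqrt(2) * (Z-spider on the control wire joined to an X-spider
# on the target wire along their middle legs)
Zs, Xs = spider("Z", 3), spider("X", 3)
cnot = np.sqrt(2) * np.einsum("iam,jbm->abij", Zs, Xs).reshape(4, 4)
assert np.allclose(cnot, [[1, 0, 0, 0],
                          [0, 1, 0, 0],
                          [0, 0, 0, 1],
                          [0, 0, 1, 0]])
```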
...which collapse the quantum state to a fixed one,
depending on the outcome $k \in \{0,1\}$:
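A quick numpy sketch of the collapse (the helper below is illustrative): measuring the first qubit of a Bell state in the Z basis with outcome $k$ applies the effect $\langle k|$, leaving the remaining qubit in the corresponding fixed state.

```python
import numpy as np

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

def measure_first(state, k):
    """Apply the effect <k| to the first qubit (un-normalised collapse)."""
    effect = np.eye(2)[k]                # <k| as a row vector
    return effect @ state.reshape(2, 2)  # contract the measured leg

assert np.allclose(measure_first(bell, 0), [1 / np.sqrt(2), 0])  # -> |0>
assert np.allclose(measure_first(bell, 1), [0, 1 / np.sqrt(2)])  # -> |1>
```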
A simple classical error model assumes that at each time step, each bit gets flipped independently with some fixed (small) probability $p$.
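A toy Monte Carlo run of this model (all parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p, steps, n_bits = 0.01, 100, 10_000

bits = np.zeros(n_bits, dtype=int)
for _ in range(steps):
    flips = rng.random(n_bits) < p  # each bit flips independently with prob. p
    bits ^= flips.astype(int)

# after many steps, roughly (1 - (1-2p)^steps)/2 of the bits are flipped
print(f"fraction flipped after {steps} steps: {bits.mean():.3f}")
```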
This generalises to qubits, where "flips" can happen along different axes:
Spiders of the same colour fuse together and their phases add:
$\implies$ errors will flip the outcomes of measurements of the same colour:
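Both facts can be checked on 1-in/1-out spiders (a minimal numpy sketch using one common convention for measurement effects): fusion adds phases, and fusing an $X$ error, i.e. an X-spider with phase $\pi$, into an X-coloured measurement effect shifts its phase by $\pi$, flipping the outcome $k$.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def z_phase(a):  # 1-in/1-out Z-spider: diag(1, e^{ia})
    return np.diag([1, np.exp(1j * a)])

def x_phase(a):  # 1-in/1-out X-spider: H Z(a) H
    return H @ z_phase(a) @ H

# fusion: same-colour spiders compose by adding their phases
a, b = 0.7, 1.9
assert np.allclose(z_phase(a) @ z_phase(b), z_phase(a + b))
assert np.allclose(x_phase(a) @ x_phase(b), x_phase(a + b))

# the Z-basis effects <0|, <1| are (up to scalar) one-legged X-spiders with
# phase k*pi; an X error is x_phase(pi), so fusing it adds pi and flips k
e0, e1 = np.eye(2)
assert np.allclose(e0 @ x_phase(np.pi), e1)  # <0| X = <1|
assert np.allclose(e1 @ x_phase(np.pi), e0)  # <1| X = <0|
```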
We can try to detect errors with measurements,
but single-qubit measurements have a problem...
...they collapse the state!
$n$-qubit basis vectors are labelled by bitstrings
$k=0$ projects onto "even" parity and $k=1$ onto "odd" parity
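For two qubits these are just the projectors onto the $\pm 1$ eigenspaces of $Z \otimes Z$ (a minimal numpy sketch):

```python
import numpy as np

Z = np.diag([1, -1])
ZZ = np.kron(Z, Z)

# projectors for outcome k = 0 (even parity) and k = 1 (odd parity)
P_even = (np.eye(4) + ZZ) / 2
P_odd = (np.eye(4) - ZZ) / 2

# even parity keeps |00> and |11>; odd parity keeps |01> and |10>
assert np.allclose(np.diag(P_even), [1, 0, 0, 1])
assert np.allclose(np.diag(P_odd), [0, 1, 1, 0])
```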
...the same, but w.r.t. a different basis
The Steane code requires 7 physical qubits per logical qubit, but it allows correction of any single-qubit X, Y, or Z error.
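This works because both the X- and Z-type stabilisers of the Steane code come from the parity-check matrix of the classical $[7,4,3]$ Hamming code, whose seven columns are exactly the distinct non-zero syndromes. A small check:

```python
import numpy as np

# parity-check matrix of the [7,4,3] Hamming code; the Steane code uses it
# for both its X-type and Z-type stabilisers
Hmat = np.array([[0, 0, 0, 1, 1, 1, 1],
                 [0, 1, 1, 0, 0, 1, 1],
                 [1, 0, 1, 0, 1, 0, 1]])

# each single-qubit error on qubit i produces the syndrome Hmat[:, i];
# all seven are distinct and non-zero, so the error's location is identified
syndromes = [tuple(Hmat[:, i]) for i in range(7)]
assert len(set(syndromes)) == 7 and (0, 0, 0) not in syndromes
```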
The surface code uses $n^2$ physical qubits to correct up to $\lfloor \frac{n-1}{2} \rfloor$ errors
But codes are only half the story. We also need to know how to implement operations fault-tolerantly.
Q: Is the LHS really equivalent to the RHS?
A: It depends on what "equivalent" means.
They both give the same linear map, i.e. they behave the same in the absence of errors.
But they behave differently in the presence of errors, e.g.
$D \ \hat{=}\ E$
$D\ \hat{=}\ E \implies D = E$
$D = E \ \ \not\!\!\!\implies D\ \hat{=}\ E$
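A concrete instance (a numpy sketch; the choice of circuits is illustrative): two back-to-back CNOTs equal the identity as linear maps, but a single $X$ fault between them propagates to a weight-2 error that no weight-1 fault on a bare pair of wires can match.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# D = bare wires, E = CNOT; CNOT: equal as linear maps, so D = E
D, E = np.eye(4), CNOT @ CNOT
assert np.allclose(D, E)

# a single X fault on the control between the two CNOTs of E propagates
# to the weight-2 error X (x) X, so E admits a weight-1 fault whose effect
# D can only reproduce with weight 2: D and E are not fault-equivalent
E_faulty = CNOT @ np.kron(X, I2) @ CNOT
assert np.allclose(E_faulty, np.kron(X, X))
```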
As with space-time codes, we can model faults by tracking their locations in a circuit or ZX-diagram.
$F \in \mathcal{P}_{|\mathcal{L}|} := \{\, P_1 \otimes \cdots \otimes P_{|\mathcal{L}|} \ :\ P_i \in \{I, X, Y, Z\} \,\}$
basic faults $:=$ all single-qubit Paulis (memory errors) $+$ some multi-qubit Paulis (correlated gate/measurement errors)
Ex: sub-models, where we remove some basic faults to represent components we assume are noise-free:
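In code, the fault set, fault weight, and a sub-model are straightforward to spell out (a sketch; all names are illustrative):

```python
from itertools import product

# all Pauli faults P_1 (x) ... (x) P_n on n = |L| locations
def fault_set(n):
    return list(product("IXYZ", repeat=n))

def weight(F):
    return sum(P != "I" for P in F)  # number of non-identity factors

assert weight(("X", "I", "Z")) == 2
assert len(fault_set(3)) == 4 ** 3

# a sub-model: assume location 0 is noise-free by dropping faults that act on it
noise_free_0 = [F for F in fault_set(3) if F[0] == "I"]
assert len(noise_free_0) == 4 ** 2
```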
Definition: Two circuits (or ZX-diagrams) $C, D$ are called fault-equivalent, written $C \ \hat{=}\ D$, if for any undetectable fault $F$ of weight $w$ on $C$, there exists an undetectable fault $F'$ of weight $\leq w$ on $D$ such that $C[F] = D[F']$.
Idea: start with an idealised computation (i.e. specification) and refine it with fault-equivalent rewrites until it is implementable on hardware.
Specification/refinement has been used in formal methods for classical software dev since the 1970s. Why not for FTQC?




Image credit: Riverlane and Google Quantum AI
"Fault Tolerance by Construction". Rodatz, Poór, Kissinger
arXiv:2506.17181