Reliable Systems Engineering
Computer systems, from medical implants to fly-by-wire technology, are increasingly deployed in safety-critical situations. This calls for a rigorous design and verification methodology that copes with the major sources of system complexity: concurrency, heterogeneity, and uncertainty. For example, the flight control system of an aircraft consists of many interacting concurrent components, some with discrete behaviour (e.g., the software modules), others with continuous dynamics (e.g., the control surfaces), all of which must behave robustly even in unpredictable environments. We advocate a two-step methodology: formalization followed by algorithmic analysis (or "model building" followed by "model checking"). As examples, several specific instances of this methodology are presented.
First, for coping with concurrency, we present interface automata, a formal model for component-based systems which views components as potential collaborators or adversaries in a multi-player game with a given objective, such as system safety. The compatibility of components can then be checked by algorithms for computing winning strategies in such games. Second, for coping with heterogeneity, we present hybrid automata, a formal model for mixed discrete-continuous systems which admits discrete state transitions as well as continuous state evolutions. The safety of such a hybrid system can then be checked by algorithms that explore the state space by propagating symbolic constraints. Third, for coping with uncertainty, we outline a quantitative theory of systems which supports stochastic behaviour and approximate analysis.
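The game-based view of compatibility can be illustrated by the core computation behind it: finding the states from which one player can keep the play safe forever. The sketch below is a generic safety-game solver on a hypothetical toy arena (the state names, the `protagonist` partition, and the transition relation are invented for illustration); interface-automata compatibility checking reduces to games of this kind, but real interface theories involve input/output actions that this sketch omits.

```python
# Sketch: solving a two-player safety game by a greatest-fixpoint
# computation (hypothetical toy arena; interface-automata compatibility
# reduces to such games).

def winning_safe_states(states, protagonist, moves, bad):
    """States from which the protagonist can forever avoid `bad`.

    `moves[s]` lists the successors of state s. In protagonist states
    the protagonist picks one move, so one safe successor suffices; in
    the remaining (adversary) states every move must stay safe.
    Terminal states are modelled here by self-loops.
    """
    safe = set(states) - set(bad)
    while True:
        keep = set()
        for s in safe:
            succs = moves.get(s, [])
            if s in protagonist:
                if any(t in safe for t in succs):
                    keep.add(s)
            else:
                if all(t in safe for t in succs):
                    keep.add(s)
        if keep == safe:      # fixpoint reached
            return safe
        safe = keep

# Toy arena: the protagonist controls "init" and "req" and can steer
# the play away from the error state "err".
states = {"init", "req", "ok", "err"}
moves = {"init": ["req"], "req": ["ok", "err"],
         "ok": ["ok"], "err": ["err"]}
winning = winning_safe_states(states, {"init", "req"}, moves, {"err"})
```

In this toy arena the protagonist wins from "init", "req", and "ok": although "req" has a move into the error state, the protagonist owns that state and can choose "ok" instead. If the adversary owned "req", the winning set would shrink, which is exactly how incompatibility of components surfaces in the game formulation.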
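The constraint-propagation style of hybrid-automaton analysis can be sketched in miniature. The example below is a hypothetical two-mode thermostat with interval constraints invented for illustration; actual tools manipulate richer symbolic representations (e.g., linear constraints over many variables), but the alternation of continuous flow steps and discrete jump steps is the same.

```python
# Sketch: symbolic forward reachability for a toy hybrid automaton
# (a hypothetical thermostat), propagating interval constraints
# through continuous flows and discrete jumps until a fixpoint.

def hull(a, b):
    """Interval union (convex hull); either argument may be None."""
    if a is None: return b
    if b is None: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def reach(modes, jumps, init, dt, steps):
    """Over-approximate, per mode, the reachable interval of x.

    modes: mode -> (rate_lo, rate_hi, invariant interval)
    jumps: list of (source mode, guard interval, target mode)
    init:  mode -> initial interval
    """
    state = dict(init)
    for _ in range(steps):
        nxt = dict(state)
        # Continuous flow: let up to dt time units elapse in each mode,
        # clipped to the mode invariant.
        for m, (rlo, rhi, inv) in modes.items():
            iv = state.get(m)
            if iv is None:
                continue
            flowed = (iv[0] + min(0.0, rlo * dt),
                      iv[1] + max(0.0, rhi * dt))
            nxt[m] = hull(nxt[m], intersect(flowed, inv))
        # Discrete jumps: the part of the interval satisfying the guard
        # is carried into the target mode.
        for m, guard, m2 in jumps:
            iv = nxt.get(m)
            if iv is None:
                continue
            enabled = intersect(iv, guard)
            if enabled is not None:
                nxt[m2] = hull(nxt.get(m2), enabled)
        if nxt == state:      # fixpoint: no new states reachable
            break
        state = nxt
    return state

# Hypothetical thermostat: x rises in "heat", falls in "cool".
modes = {"heat": (1.0, 2.0, (16.0, 23.0)),
         "cool": (-2.0, -1.0, (17.0, 24.0))}
jumps = [("heat", (22.0, 100.0), "cool"),
         ("cool", (-100.0, 18.0), "heat")]
r = reach(modes, jumps, {"heat": (20.0, 20.0)}, dt=1.0, steps=50)
```

The computed fixpoint bounds the temperature in both modes, so a safety property such as "x never leaves [15, 25]" can be read off directly from the reach sets.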
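For the quantitative setting, a minimal example of approximate analysis is computing the probability of reaching a target state in a finite Markov chain by value iteration. The chain below is a hypothetical toy example, and the iteration scheme is a standard textbook method rather than anything specific to the theory outlined above; it illustrates how stochastic behaviour is analysed approximately, up to a chosen precision, instead of by exact yes/no verdicts.

```python
# Sketch: approximate quantitative analysis of a toy Markov chain.
# Computes, for each state, the probability of eventually reaching
# `target`, iterating p(s) = sum_t P(s,t) * p(t) until the largest
# per-sweep change falls below eps.

def reach_prob(trans, target, eps=1e-9):
    """trans: state -> list of (successor, probability) pairs."""
    p = {s: (1.0 if s == target else 0.0) for s in trans}
    while True:
        delta = 0.0
        for s, succs in trans.items():
            if s == target:
                continue  # the target is absorbing with value 1
            new = sum(prob * p[t] for t, prob in succs)
            delta = max(delta, abs(new - p[s]))
            p[s] = new
        if delta < eps:
            return p

# Hypothetical chain: from "s", stay put with probability 0.5,
# reach "goal" or "fail" with probability 0.25 each.
trans = {"s": [("s", 0.5), ("goal", 0.25), ("fail", 0.25)],
         "goal": [("goal", 1.0)],
         "fail": [("fail", 1.0)]}
p = reach_prob(trans, "goal")
```

Here the exact answer from "s" is 0.25 / (1 - 0.5) = 0.5, and the iteration converges to it within the tolerance `eps`; the tolerance parameter is precisely where the "approximate" in approximate analysis enters.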