New Foundational Structures for Engineering Verified Multi-UAVs
In March 2011, Japan suffered its largest recorded earthquake and a devastating
tsunami. Severe damage was inflicted on the Fukushima nuclear plants, and more than 100,000 people had to be evacuated after
the radiation levels became unsafe. Workers were unable to operate on site, which prevented them from securing the
atomic power plant and averting a major radiation leak. One month after the disaster, in order to assess the severity of the
damage to the nuclear plant from above, a small aerial vehicle equipped with cameras was sent to take pictures and videos
of the affected areas. The footage provided the rescue teams with valuable information that could not have been
acquired otherwise. Yet the use of aerial vehicles remains limited: they require a remote operator within
transmission range to control them, and a further operator to control the camera and interpret the data.
To work autonomously, such systems must be highly intelligent and rational so that they can become reliable: they must possess substantial knowledge to accomplish complex missions in environments about which prior information may be incomplete. This implies that they should adapt to unexpected situations, such as recent changes not reflected in prior information about the environment, or loss of GPS due to obstructing buildings or indoor exploration; reliable operation under such conditions would, for instance, enable them to return safely to their base station. In a multi-UAV setting, they should additionally be able to communicate with one another to simplify their individual goals, to learn from each other's information, and to update and share their knowledge.

Given that every mission is unique in its deployment area, tasks, and goals, and can be critical in the sense that human lives may be at stake, the implementation must be verified to be correct with respect to a formal specification. A famous example of an implementation error and a failure to comply with the specification is the self-destruction of Ariane 5 in 1996 immediately after take-off, caused by a numeric overflow in code that was not suitable for all possible flight conditions. In the same year, the Lockheed Martin/Boeing DarkStar long-endurance UAV crashed following what the Pentagon called a "mishap [..] directly traceable to deficiencies in the modelling and simulation of the flight vehicle".
To achieve the required reliability, we will develop a formalism that represents the set of actions each Unmanned Aerial Vehicle (UAV) can perform while capturing the UAVs' kinetic constraints. We will then verify that the behaviours of each UAV, modelled in this formalism, achieve the individual or overall goal of the mission. These verifications must then be extended from individual behaviours to cooperation among multiple UAVs. Next, we plan to link the low-level code to the high-level abstraction and verify it via advanced model-checking techniques. Finally, logical tools will be used to reason exhaustively about learning as a result of information flow among UAVs and their environment.
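The kind of verification outlined above can be illustrated with a toy sketch, not the project's actual formalism: a UAV on a small grid with a battery, modelled as a finite transition system, together with an exhaustive reachability check (in the spirit of the CTL property "AG EF base") that from every reachable state the UAV can still return to its base station. The grid size, battery model, and movement guard below are all illustrative assumptions.

```python
from collections import deque

# Hypothetical toy model: a UAV on a 3x3 grid of waypoints with a
# battery level; states are (x, y, battery).
GRID = 3          # 3x3 grid of waypoints
MAX_BATTERY = 6   # battery units; each move costs one unit
BASE = (0, 0)     # base station, where recharging is possible

def dist_to_base(x, y):
    """Manhattan distance from a waypoint to the base."""
    return abs(x - BASE[0]) + abs(y - BASE[1])

def successors(state, guarded=False):
    """One-step transitions (move or recharge). In guarded mode a move
    is only allowed if the remaining battery still suffices to reach
    the base -- a crude stand-in for a kinetic constraint."""
    x, y, b = state
    succs = []
    if b > 0:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < GRID and 0 <= ny < GRID:
                if not guarded or b - 1 >= dist_to_base(nx, ny):
                    succs.append((nx, ny, b - 1))
    if (x, y) == BASE and b < MAX_BATTERY:   # recharge only at base
        succs.append((x, y, MAX_BATTERY))
    return succs

def reachable(start, guarded):
    """Exhaustively enumerate all states reachable from `start` (BFS)."""
    seen, frontier = {start}, deque([start])
    while frontier:
        for t in successors(frontier.popleft(), guarded):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def check_return_to_base(initial, guarded=False):
    """AG EF base: every reachable state has some path back to the base."""
    return all(
        any((x, y) == BASE for (x, y, _) in reachable(s, guarded))
        for s in reachable(initial, guarded)
    )

start = (0, 0, MAX_BATTERY)
print(check_return_to_base(start, guarded=False))  # False: UAV can strand itself
print(check_return_to_base(start, guarded=True))   # True: the guard rules it out
```

The unguarded model fails the check because the UAV can exhaust its battery far from the base; constraining the allowed moves restores the property. This counterexample-driven refinement of a behaviour model is exactly the style of analysis that model checking supports, here in miniature.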