
DPhil the Future: AI Onboard of Satellites for Autonomous Detection of Disaster Events


DPhil student Vit Růžička explores an AI system that allows unprecedented autonomy for satellite constellations.

Timely disaster detection onboard a satellite is important for prioritising which data to downlink and for scheduling follow-up observations. A new AI system called RaVAEn can detect a variety of disaster events onboard low-powered satellites, allowing an unprecedented degree of autonomy for satellite constellations.

Remote sensing and Earth observation are on the verge of their own unique revolution. Multiple factors come into play here, but it is the convergence of several technical advances that makes AI-powered decision-making onboard not only possible but also necessary. For now, most satellites work as cameras that simply observe the Earth and send terabytes of image data daily to the ground – that is, if they are scheduled to do so, if someone has paid for their capture, or if they are one of the few missions that provide free data to the whole community of researchers, such as the Sentinel-2 mission from the European Space Agency (ESA).

With better sensors and more satellites in orbit, the quantity of data will keep growing uncontrollably. Relying on systems that capture imagery only on demand would, on the other hand, miss many interesting, unexpected events – such as disasters. Thankfully, this need is matched by new opportunities opened up by the hardware capabilities of even small satellites. We are entering an age where it is possible to run artificial intelligence models on these devices, which will allow us to make choices onboard about what to do with the data.

Several experimental satellite missions have been launched to date, serving as demonstrators that onboard AI processing is possible and beneficial. The PhiSat-1 satellite from the ESA is, for example, running a cloud detection model, among other applications, to select and downlink only cloud-free images to the ground. Another example is the WorldFloods system, deployed on D-Orbit satellites, which can detect flooded areas from space. These proof-of-concept missions demonstrate the usefulness of AI onboard, and hint at exciting future developments in this area.

Our team of researchers participating in the Frontier Development Lab 2021 (Vit Růžička and Daniele De Martini from Oxford University, together with six researchers from other universities) has recently published a paper proposing an AI system called RaVAEn for the task of unsupervised change detection of extreme disaster events, which we demonstrate could run on a small Earth observation satellite with limited processing power.

Simply put, change detection compares a series of images and tries to point to locations within them where something has changed. With real-world satellite imagery, this change could, for example, be the colour of a river when it is flooded, compared with its usual state. Similarly, a burnt area will look different from the previous satellite capture of that location, taken before the wildfire. In some cases, it is also important to measure the magnitude of this change, because small changes may be something we want to ignore, such as sensor noise or natural variation in the data.
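
To make this concrete, here is a minimal sketch of the simplest pixel-based form of change detection: the per-pixel difference magnitude between two co-registered captures, with a threshold (an illustrative value, not a tuned one) to ignore small changes. This is the kind of non-learned baseline our method is compared against, not the RaVAEn approach itself.

```python
import numpy as np

def change_magnitude(before: np.ndarray, after: np.ndarray) -> np.ndarray:
    """Per-pixel change magnitude between two co-registered captures.

    before, after: float arrays of shape (H, W, C), e.g. reflectance values.
    Returns an (H, W) map; larger values indicate stronger change.
    """
    diff = after.astype(np.float32) - before.astype(np.float32)
    return np.linalg.norm(diff, axis=-1)

def change_mask(before: np.ndarray, after: np.ndarray,
                threshold: float = 0.1) -> np.ndarray:
    """Binary mask of 'interesting' change; the threshold suppresses
    small magnitudes such as sensor noise or natural variation."""
    return change_magnitude(before, after) > threshold
```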

A specific property of our work is that it leverages unsupervised learning methods, which in general means that there is no requirement for data annotation. This is useful in settings such as remote sensing, where there is a near never-ending stream of data that could not feasibly be annotated and checked by human experts. Further benefits of this approach are that the models remain sensor-agnostic, are robust to noisy near-sensor data, and can detect any type of event.

More concretely, we use a variational auto-encoder (VAE) model, pre-trained on a representative dataset of satellite images from Sentinel-2, using the L1C level of data (the level of processing before the effects of the atmosphere are removed, also known as the Top-Of-Atmosphere product). We then evaluate this model on a dataset of four types of disaster events and show that we can reliably detect all of them as changes, despite none of these disasters appearing in the original training dataset.
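
For readers who want a feel for the model family, below is a schematic PyTorch sketch of a small convolutional VAE for multispectral tiles. The tile size (32×32), channel count, and layer widths are illustrative assumptions, not the published RaVAEn architecture.

```python
import torch
import torch.nn as nn

class TileVAE(nn.Module):
    """Schematic VAE for 32x32 multispectral tiles (sizes are illustrative)."""

    def __init__(self, channels: int = 10, latent_dim: int = 128):
        super().__init__()
        # Encoder: 32x32 -> 16x16 -> 8x8, then flatten.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.to_mu = nn.Linear(64 * 8 * 8, latent_dim)      # mean of q(z|x)
        self.to_logvar = nn.Linear(64 * 8 * 8, latent_dim)  # log-variance of q(z|x)
        # The decoder is needed only for reconstruction during pre-training;
        # onboard change detection uses just the encoder's latent description.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, kernel_size=4, stride=2, padding=1),
        )

    def encode(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        h = self.encoder(x)
        return self.to_mu(h), self.to_logvar(h)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu, logvar = self.encode(x)
        # Reparameterisation trick: sample z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z)
```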

Unsupervised approaches typically learn features useful for understanding the data by training on some auxiliary task. In our case, the auto-encoder learns to compress the observed data into a low-dimensional bottleneck description, from which it then reconstructs the original input. Without any labels, the model has to understand enough about the data to reconstruct it, and in doing so it finds an efficient representation. In our approach, we compare the representations of images in a time series to select areas with a high amount of change.
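
In code, that comparison step could look like the sketch below: each pass is encoded into one latent vector per tile, and the change score is a distance between latents. Cosine distance is used here as one plausible latent-space metric; treat the exact choice as an assumption rather than the definitive measure from the paper.

```python
import torch
import torch.nn.functional as F

def change_scores(mu_before: torch.Tensor, mu_now: torch.Tensor) -> torch.Tensor:
    """Per-tile change score between two passes.

    mu_before, mu_now: (num_tiles, latent_dim) latent means from the encoder.
    Returns (num_tiles,) scores; high scores flag candidate change tiles.
    """
    # Cosine distance in latent space (one reasonable choice of metric).
    return 1.0 - F.cosine_similarity(mu_before, mu_now, dim=-1)
```

Tiles with the highest scores are the ones worth prioritising for downlink or follow-up observation.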

To evaluate the feasibility of deploying our model onboard satellites, we test it on a Xilinx Pynq board, a representative piece of hardware that simulates the low compute capabilities of a small CubeSat (miniaturised satellites built from standardised units, each corresponding to a cube of 10 cm). At present, we reach very fast speeds without loss of accuracy (a 25 km² area processed in 2 seconds) when deploying our model on the CPU of this board alone. Excitingly, we could also deploy the model on the FPGA component (an integrated circuit that can be reconfigured by a program) to achieve even faster processing speeds.
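
Throughput figures like the one above can be gathered with a simple timing harness; the sketch below (assuming an encoder module and a batch of tiles as inputs) measures tiles per second on the CPU.

```python
import time
import torch

def tiles_per_second(encoder: torch.nn.Module, tiles: torch.Tensor,
                     n_runs: int = 10) -> float:
    """Rough CPU throughput of the encoder in tiles/second (illustrative harness)."""
    encoder.eval()
    with torch.no_grad():
        encoder(tiles)  # warm-up pass, excluded from timing
        start = time.perf_counter()
        for _ in range(n_runs):
            encoder(tiles)
        elapsed = time.perf_counter() - start
    return n_runs * tiles.shape[0] / elapsed
```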

In our work, we report detection capabilities that outperform simple non-machine-learning baselines. Our proposed system also allows for the ingestion of longer time series of data: for example, we demonstrate better performance when allowing a larger memory of images. Namely, comparing the three previous passes with the latest observation gives significantly better performance than comparing only with the last remembered pass. Naturally, this requires more storage for the earlier passes; however, thanks to the learned feature encoders, we only need to store the feature representations, which occupy 60x less storage than the raw images. This would also be beneficial in scenarios where these representations need to be communicated across a larger constellation of satellites. A satellite flying over an area with a detected disaster event could provide an update on the evolution of the event, or enable more robust detection in constellations with mixed sensing capabilities operating in a tip-and-cue regime.
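
A sketch of how such a memory might work, under the assumption that the newest pass is scored against each remembered pass and the minimum distance is kept, so that a tile is flagged only if it differs from every remembered pass:

```python
import torch
import torch.nn.functional as F

def memory_change_scores(past_mus: list[torch.Tensor],
                         mu_now: torch.Tensor) -> torch.Tensor:
    """Score the newest pass against a memory of previous passes.

    past_mus: latents of e.g. the three previous passes, each (num_tiles, latent_dim).
    Taking the minimum distance over the memory means a tile is flagged only
    if it differs from *every* remembered pass, suppressing one-off effects
    such as a cloud present in a single earlier capture.
    """
    dists = torch.stack(
        [1.0 - F.cosine_similarity(m, mu_now, dim=-1) for m in past_mus]
    )
    return dists.min(dim=0).values

# Note: only the compact latents in `past_mus` need to persist between passes;
# storing these instead of raw tiles is what yields the large storage saving.
```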

Furthermore, as our approach is sensor-agnostic, the proposed method could be developed and deployed for any satellite without any need for costly manual annotation, beyond having a representative sample of images with which to re-train the auto-encoder models.

Finally, if deployed, we see our system as the first capable of detecting general changes, and as a logical next step following the previous experimental missions of PhiSat-1 and WorldFloods, which each focused on detecting a single, concrete class. Timely and accurate detection of disasters could become a reality thanks to the deployment of our model. The next phase of our project is to work towards putting our system onboard a real CubeSat.

With this work, we are also one step closer to the futuristic vision of sending a probe into deep space that would be capable of learning its own representations from raw observations, sending anomalous detections back to the home planet while still allowing human curation. Cataloguing resources on other planetary bodies, or early detection that allows fast follow-up measurements with other instruments – these are just a few of the interesting future directions that occupy a similar research space.

Read more about the research in Nature Scientific Reports.