DPhil the Future: What extraterrestrial and AI communication could have in common
Posted: 3rd March 2026
Aliens have certainly made the news lately – US-ordered releases of UFO records and a Bank of England contingency plan for extraterrestrial contact have made the question of extraterrestrial intelligence feel, for perhaps the first time in my lifetime, like a genuinely mainstream one. I welcome this development.
At the same time, I think that our methods for detecting alien communication contain anthropocentric assumptions so fundamental that they may exclude genuine signals from our analyses. And an AI interpretability question (which I find at least as pressing) turns out to be a close cousin of this problem.
An assumption in the search for ET
We tend to take as given that languages have rules – ones that are, at minimum, stable over the course of a given utterance. Yes, words shift in meaning across centuries (even decades). Yes, we can code-switch between registers. But when you read this, ‘sentence’ means the same thing it meant three words ago; words are processed sequentially, whether spoken now or ten seconds from now. SETI (search for extraterrestrial intelligence) researchers make a version of the same bet when scanning the skies: that any civilisation worth its salt would broadcast something with a stable statistical structure.
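That ‘stable statistical structure’ bet can be made operational in a toy way. The sketch below is my own illustration, not an actual SETI pipeline: a hypothetical detector that treats a symbol stream as ‘conventional’ when the entropy of successive windows barely varies, and as suspicious (or noise) when it drifts. The signals, window size, and drift model are all invented for the example.

```python
import math
import random
from collections import Counter

def window_entropy(seq, size):
    """Shannon entropy (in bits) of each non-overlapping window of the stream."""
    out = []
    for i in range(0, len(seq) - size + 1, size):
        counts = Counter(seq[i:i + size])
        total = sum(counts.values())
        out.append(-sum(c / total * math.log2(c / total) for c in counts.values()))
    return out

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(0)
# A 'conventional' signal: the same symbol distribution throughout.
stable = [rng.choice("AB") for _ in range(4000)]
# A crude stand-in for rule drift: the alphabet itself changes mid-stream.
drifting = ([rng.choice("AB") for _ in range(2000)]
            + [rng.choice("ABCDEFGH") for _ in range(2000)])

print(variance(window_entropy(stable, 500)))    # near zero: stable structure
print(variance(window_entropy(drifting, 500)))  # much larger: rules shifted
```

A stability-assuming detector keys on the first case; a signal whose rules evolve mid-transmission looks, by this measure, like the second – or like nothing at all.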
But does this kind of feature have to be universal? Wittgenstein once famously remarked that if a lion could talk, we could not understand him, because meaning arises from shared forms of life. We may not yet have encountered ETs (extraterrestrials) not only because of the vastness of the universe, or because we don’t share their experiences – but perhaps also because we lack the interpretive architecture through which their communication makes sense in the first place.
That problem motivates my recent work on Self-Modifying Communication (SMC): a formalisation of communication systems in which the rules of interpretation evolve as part of the messages themselves. The seed of the idea comes from a paper I encountered by Douglas Youvan raising the possibility that ET communication might be self-modifying in nature. I found that I couldn't stop wondering what that would mean in formal terms.
To make the idea concrete, I introduce a hypothetical mechanism called Repetition-Triggered Probabilistic Autocompletion, or RPA. In RPA, repetition functions not as emphasis – as it can in human language (e.g. ‘I really, really like Oxford’) – but as a meta-level control operator. If a sender transmits ‘I like like’, the repetition triggers a predictive completion of what is likely to be said. The receiver infers the intended meaning, updates a model jointly maintained by sender and receiver, and the next completion draws on a different distribution from the last. To its participants, this could be a coherent conversation, and to anyone outside it, potentially, noise.
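As a purely illustrative caricature (the class, trigger rule, and completion model below are my inventions, not the paper's formalism), RPA can be sketched in a few lines of Python: a doubled word invokes a probabilistic completion, and each decoding nudges the jointly maintained model, so the next completion is drawn from a shifted distribution.

```python
import random

class RPAChannel:
    """Toy sketch of Repetition-Triggered Probabilistic Autocompletion.

    Hypothetical throughout: the trigger rule (any immediately doubled
    word) and the completion model (a weight table) are illustrative only.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        # Jointly maintained model: completion weights per trigger word.
        self.model = {"like": {"Oxford": 3, "tea": 2, "hiking": 1}}

    def decode(self, message):
        words = message.split()
        for i in range(len(words) - 1):
            # Repetition acts as a meta-level control operator,
            # not emphasis: it triggers a probabilistic completion.
            if words[i] == words[i + 1] and words[i] in self.model:
                weights = self.model[words[i]]
                completion = self.rng.choices(
                    list(weights), weights=list(weights.values()))[0]
                # Decoding itself updates the shared model, so the
                # *next* completion draws on a different distribution.
                weights[completion] = max(1, weights[completion] - 1)
                return " ".join(words[:i + 1]) + " " + completion
        return message  # no trigger: the message decodes literally

channel = RPAChannel(seed=42)
print(channel.decode("I like like"))  # a completion drawn from the model
print(channel.decode("I like like"))  # same input, shifted distribution
```

The point of the toy is the last two lines: identical surface messages need not decode identically, because the decoding rules are part of the conversation's evolving state.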
Would any civilisation actually do this?
Honestly, we don't know, though I think there are plausible reasons SMC could arise in the universe. Creatures with richer access to each other's internal states (through biochemical signalling, say, or mechanisms entirely unknown to us) might naturally develop protocols where a seemingly 'unstable’ surface structure belies shared state awareness. There is also an information density argument. A single transmission that simultaneously carries content and updates rules for decoding future content is a remarkably compact thing.
On the other hand, SMC is fragile in ways that conventional (human) communication is not. If an ‘error’ occurs in SMC, subsequent decoding builds upon a corrupted state. For signals crossing galactic distances through noisy interstellar space, that fragility could be a serious liability. Additionally, the empirical record from emergent communication simulations suggests that AI agents tend to converge on conventions over time. Stability could arguably be less an anthropocentric preference and more a general requirement for reliable information exchange.
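The fragility point can be made concrete with a toy self-modifying codec (entirely hypothetical: a shift cipher whose key is updated by each message). One corrupted symbol early on desynchronises sender and receiver, so every later message decodes wrongly even though it arrives intact.

```python
def encode(text, key):
    # Shift each printable ASCII character by `key` (mod the printable range).
    return "".join(chr((ord(c) - 32 + key) % 95 + 32) for c in text)

def decode(text, key):
    return "".join(chr((ord(c) - 32 - key) % 95 + 32) for c in text)

def next_key(key, plaintext):
    # Self-modification: each message rewrites the rule for decoding the next.
    return (key + sum(ord(c) for c in plaintext)) % 95

messages = ["hello", "world", "again"]
key_sender = key_receiver = 7
received = []
for i, m in enumerate(messages):
    wire = encode(m, key_sender)
    if i == 0:
        wire = wire[:-1] + "?"  # a single corrupted symbol in transit
    out = decode(wire, key_receiver)
    received.append(out)
    key_sender = next_key(key_sender, m)      # sender updates from what it sent
    key_receiver = next_key(key_receiver, out)  # receiver, from what it decoded

print(received)  # first message garbled; later ones wrong despite clean transit
```

In a conventional fixed-key scheme the damage would be confined to the corrupted message; here the error propagates through the shared state indefinitely, which is exactly the liability for long, noisy interstellar channels.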
The AI safety analogue
There is a (fun) version of this paper that focuses exclusively on aliens, but the implications of SMC also strike light years closer to home.
As AI systems develop persistent memory, extended interaction histories, and adaptive strategies that evolve across conversations, it is possible in theory that the boundary between message content and interpretive rules becomes less fixed than it is today. A system that reshapes how recipients interpret its outputs over the course of a sustained relationship would be considerably harder to audit under the assumption that the decoding functions stay put.
Formalising what SMC actually is (and what it would even mean to detect SMC from outside) seems like useful groundwork to lay before we find ourselves needing answers more urgently. I’d argue that this work is as relevant to AI safety researchers as it is to anyone pointing a radio telescope at the ether.
Listening with humility
This research is not a prediction that extraterrestrial civilisations use SMC, nor evidence that current AI systems do. It is rather a demonstration that SMC is a formally coherent possibility, and that our existing tools for detecting and decoding communication are not, in some important senses, built to catch it. Expanding our conceptual toolkit (by asking what communication could look like, not just what it does look like in our own species) seems like a minimum requirement for taking these questions seriously. After all, other forms of intelligence in the universe – whether biological or artificial – are under no obligation to be legible on our terms.