Digital Twin of Politicians
Supervisors
Suitable for
Abstract
1 Background and Motivation
Recent advances in AI have made it possible to construct computational digital twins capable of representing an individual’s
communication style, reasoning patterns, and domain knowledge. While such systems have
been explored in engineering and
biomedical domains, their application to political figures remains largely unexplored due to the unique ethical, social, and
governance challenges involved. Yet, political communication
is an area in which digital twins can provide substantial public value, as they can: (1) handle routine daily engagement to reduce a politician’s workload, (2) support parliamentary staff in drafting responses or briefing materials, and (3) facilitate structured analysis of a politician’s public reasoning.
This project is motivated by a unique research opportunity to construct and study a digital twin representation of a political
persona. This creates a rare, controlled environment in which to investigate how a digital
twin reproduces political
language, how citizens or staff engage with AI-mediated political communication, and how safety, accountability, and transparency
requirements should shape such systems. At the same time, it raises critical questions about the risks of misrepresentation,
hallucination, and potential misuse, making this an ideal testbed for studying what a political digital twin can do and how
it should be governed.
2 Proposed Work
The project will develop a prototype digital twin of a consenting parliamentarian using a combination of AI techniques,
including retrieval-augmented generation (RAG), persona representations (e.g., lightweight persona
vectors), and the
role-play technique. The main objective is to enable the model to generate responses that are simultaneously grounded in authorized
textual evidence and reflective of the individual’s characteristic
rhetorical style and political reasoning. To
achieve this, the project will curate a structured corpus from public speeches, debates, media articles, and approved internal
materials, which will serve as the foundation
for persona modeling.
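As a concrete illustration, the sketch below shows one way such a retrieval-augmented, persona-conditioned response pipeline could be assembled. The toy keyword retriever, the placeholder persona instruction, and all names in the code are illustrative assumptions rather than components of the actual system, which would use dense retrieval over the curated corpus and persona vectors derived from it.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str   # e.g. a speech, debate transcript, or approved briefing note
    text: str

def retrieve(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
    """Toy keyword-overlap retriever; a real system would use dense embeddings."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(q_terms & set(d.text.lower().split())))
    return ranked[:k]

# Hypothetical persona instruction; the real prompt would be derived from the authorized corpus.
PERSONA_PROMPT = (
    "You are a digital twin of a consenting parliamentarian.\n"
    "Answer only from the evidence passages below and cite their sources.\n"
    "If the evidence is insufficient, say so rather than speculating.\n"
)

def build_prompt(question: str, evidence: list[Document]) -> str:
    """Assemble a grounded, persona-conditioned prompt for the underlying language model."""
    blocks = "\n".join(f"[{d.source}] {d.text}" for d in evidence)
    return f"{PERSONA_PROMPT}\nEvidence:\n{blocks}\n\nQuestion: {question}\nAnswer:"

# Usage: build_prompt(q, retrieve(q, corpus)) is passed to the language model,
# optionally combined with lightweight persona vectors that steer its style.
```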
In parallel, the project aims to identify and mitigate safety risks specific to a political digital twin. The system
will incorporate mechanisms to ensure that all claims are traceable to retrieved evidence,
that the model does not fabricate
new policy commitments, and that sensitive topics—such as elections, endorsements, or speculative future decisions—are
handled cautiously or deferred to human review. Techniques
such as controlled generation, validity checking, uncertainty
estimation, and evidence-based reasoning will be employed to maintain alignment with the authorized corpus. Through iterative
testing, including targeted
red-teaming and feedback from the parliamentarian or staff, the project aims to produce not
only an early digital twin of a politician but also a principled framework for the safe and transparent use of AI in politics.
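The sketch below illustrates, under similar assumptions, how lightweight pre-generation routing and post-generation validity checks of this kind might look. The keyword lists and string heuristics are placeholders; a deployed system would rely on a vetted topic taxonomy, entailment-based evidence checking, and calibrated uncertainty estimates.

```python
import re

# Illustrative lists only; the real taxonomy and policy would be agreed with the parliamentarian's office.
SENSITIVE_TERMS = {"election", "endorsement", "endorse", "resignation", "coalition"}
COMMITMENT_MARKERS = ("i will introduce", "i promise", "we pledge", "i commit to")

def route_query(question: str) -> str:
    """Pre-generation routing: defer sensitive questions to human review before anything is generated."""
    terms = set(re.findall(r"[a-z]+", question.lower()))
    return "defer_to_human" if terms & SENSITIVE_TERMS else "generate"

def validate_answer(answer: str, evidence_texts: list[str]) -> list[str]:
    """Crude post-generation checks; a deployed system would use entailment models instead of word overlap."""
    issues = []
    lowered = answer.lower()
    if any(marker in lowered for marker in COMMITMENT_MARKERS):
        issues.append("possible new policy commitment not present in the authorized corpus")
    # Each sentence should share substantive vocabulary with at least one retrieved passage.
    for sentence in re.split(r"(?<=[.!?])\s+", answer):
        words = set(re.findall(r"[a-z]{5,}", sentence.lower()))
        supported = any(words & set(re.findall(r"[a-z]{5,}", e.lower())) for e in evidence_texts)
        if words and not supported:
            issues.append(f"unsupported sentence: {sentence[:60]}")
    return issues  # a non-empty list flags the draft for human review
```

The design choice here is deliberately conservative: flagged drafts are escalated to a human reviewer rather than silently rewritten, keeping accountability with the parliamentarian's staff.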
The final output will be a practical prototype accompanied by a comprehensive analysis of its limitations, risks, and broader implications for democratic communication.