Topics: Large language model (LLM) alignment and evaluation, interactional ethics, nudging and dark patterns, Computers Are Social Actors (CASA), affective computing, respect, user autonomy, self-determination theory, design for wellbeing, social engineering, Conversational Agent (CA) design and evaluation, Responsible Innovation (RI)
Fields: Computational linguistics, Human-Computer Interaction (HCI), behavioural design, philosophy of language and cognition, Human-Robot Interaction (HRI), cognitive science, social psychology, technological governance, cognitive linguistics, Natural Language Processing/Understanding (NLP/NLU), analytic philosophy, social anthropology, human-centred artificial intelligence (HCAI), AI/technoethics
Lize is a doctoral candidate in the Human Centred Computing group, co-supervised by Prof. Max van Kleek and Prof. Marina Jirotka. Her doctorate is funded by a Graduate Lighthouse Scholarship from the Responsible Technology Institute.
Her research centres on building a novel theoretical framework for the ethics of interaction, with a primary focus on systems that behave as social actors (i.e., any system that "talks" to or at users). Her work involves exploring, with end-users, what it means for algorithmic systems to treat users respectfully, and the kinds of harms and expectation violations that may arise in human-computer social interaction. Her focuses include supporting users' sense of autonomy, competence and self-worth, as well as identifying inappropriate ways that interfaces use social/anthropomorphic cues to steer user behaviour, i.e., dark patterns.
Before starting her DPhil, Lize graduated with distinction with a master's by thesis in Philosophy from Stellenbosch University, focusing on cognitive and computational linguistics. Her master's thesis critically integrates literature in cognitive science, cognitive linguistics, evolutionary psychology, and philosophy to investigate the embodied factors involved in concept acquisition and language learning. Comparing this against a review of technical approaches in Natural Language Understanding (NLU) in AI, it conceptually analyses and distinguishes different senses of language 'understanding', and considers opportunities for artificial language comprehension using multi-modal concept grounding.
She also holds a BA Honours degree in Philosophy from Stellenbosch University, focusing on philosophy of language and computational linguistics, with her dissertation published in a peer-reviewed journal. Preceding that, she graduated top of her class with a BA in Humanities (majors: Philosophy, English, History of Art, and Social Anthropology) from North-West University in Potchefstroom.
Lize is a Research Fellow at the Unit for the Ethics of Technology in the Centre for Applied Ethics in Stellenbosch University's Philosophy Department. She is also a founding member of the Responsible Technology Institute's Student Network, and recently worked for three months as a student researcher at Google UK on the ethics of generative AI and dialogue agents.
Designing for Sustained Motivation: A Review of Self-Determination Theory in Behaviour Change Technologies
Lize Alberts, Ulrik Lyngs and Kai Lukoff
Computers as Bad Social Actors: Dark Patterns and Anti-Patterns in Interfaces that Act Socially
Lize Alberts, Ulrik Lyngs and Max van Kleek
What makes for a 'good' social actor? Using respect as a lens to evaluate interactions with language agents
Lize Alberts, Geoff Keeling and Amanda McCroskery