Natural Language Understanding, Consistent and Trustworthy Language Models, Logical Reasoning.
M.J is a DPhil student in Computer Science at the University of Oxford. His current research focuses on investigating inconsistent behaviours of neural language models and developing remedies for these issues. M.J received a B.Sc. degree in Industrial Management Engineering from Korea University in 2017 and an M.Eng. degree from the same university in 2019. From 2019 to 2020, he worked at SK Telecom in Seoul, South Korea, as a natural language processing (NLP) scientist and engineer.
Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence
Myeongjun Jang, Frank Martin Mtumbuka, and Thomas Lukasiewicz
In Findings of NAACL 2022, Seattle, Washington, USA, July 2022. Pages 2030–2042. Association for Computational Linguistics.
BECEL: Benchmark for Consistency Evaluation of Language Models
Myeongjun Jang, Deuk Sin Kwon, and Thomas Lukasiewicz
In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 2022. Pages 3680–3696. International Committee on Computational Linguistics.
KNOW How to Make Up Your Mind! Adversarially Detecting and Remedying Inconsistencies in Natural Language Explanations
Myeongjun Jang, Bodhisattwa Prasad Majumder, Julian McAuley, Thomas Lukasiewicz, and Oana-Maria Camburu
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, Toronto, Canada, July 9–14, 2023. Association for Computational Linguistics.