M. J. Erik Jang: Publications
-
[1]
Pre-training and Diagnosing Knowledge Base Completion Models
Vid Kocijan, Myeongjun Jang and Thomas Lukasiewicz
In Artificial Intelligence. Vol. 329. Article 104081. April, 2024.
-
[2]
KNOW How to Make Up Your Mind! Adversarially Detecting and Remedying Inconsistencies in Natural Language Explanations
Myeongjun Jang, Bodhisattwa Prasad Majumder, Julian McAuley, Thomas Lukasiewicz and Oana-Maria Camburu
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, Toronto, Canada, July 9–14, 2023. Association for Computational Linguistics. July, 2023.
-
[3]
Improving Language Models’ Meaning Understanding and Consistency by Learning Conceptual Roles from Dictionary
Myeongjun Erik Jang and Thomas Lukasiewicz
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6–10, 2023. Association for Computational Linguistics. December, 2023.
-
[4]
Consistency Analysis of ChatGPT
Myeongjun Erik Jang and Thomas Lukasiewicz
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6–10, 2023. Association for Computational Linguistics. December, 2023.
-
[5]
NoiER: An Approach for Training More Reliable Fine-Tuned Downstream Task Models
Myeongjun Jang and Thomas Lukasiewicz
In IEEE/ACM Transactions on Audio, Speech, and Language Processing. Vol. 30. Pages 2514–2525. July, 2022.
-
[6]
Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence
Myeongjun Jang, Frank Martin Mtumbuka and Thomas Lukasiewicz
In Findings of NAACL 2022, Seattle, Washington, USA, July 2022. Pages 2030–2042. Association for Computational Linguistics. July, 2022.
-
[7]
BECEL: Benchmark for Consistency Evaluation of Language Models
Myeongjun Jang, Deuk Sin Kwon and Thomas Lukasiewicz
In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 2022. Pages 3680–3696. International Committee on Computational Linguistics. October, 2022.
-
[8]
KoBEST: Korean Balanced Evaluation of Significant Tasks
Dohyung Kim, Myeongjun Jang, Deuk Sin Kwon and Eric Davis
In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 2022. International Committee on Computational Linguistics. October, 2022.
-
[9]
Learning-Free Unsupervised Extractive Summarization Model
Myeongjun Jang and Pilsung Kang
In IEEE Access. Vol. 9. Pages 14358–14368. 2021.
DOI: 10.1109/ACCESS.2021.3051237