
Continuous representation of language and its implications -- In the case of neural machine translation

Kyunghyun Cho


In this talk, I will go over some of my research on neural machine translation from the past 2.5 years. Starting from now-standard attention-based neural machine translation, I will walk you through multilingual translation, search-engine-guided non-parametric neural machine translation, and unsupervised machine translation. Then, I will delve deeper into some of my recent work on decoding algorithms for neural machine translation. If time permits, I will briefly touch upon ongoing work at my lab, including non-autoregressive neural machine translation and trainable greedy decoding.

Speaker bio

Kyunghyun Cho is an assistant professor of computer science and data science at New York University. He was a postdoctoral fellow at the University of Montreal until summer 2015 under the supervision of Prof. Yoshua Bengio, and received his PhD and MSc degrees from Aalto University in early 2014 under the supervision of Prof. Juha Karhunen, Dr. Tapani Raiko, and Dr. Alexander Ilin. He tries his best to find a balance among machine learning, natural language processing, and life, but almost always fails to do so.
