Google Faculty Award: 2016

posted Feb 25, 2017, 9:20 AM by KyungHyun Cho

My research proposal on <A Trainable Decoding Algorithm for Neural Machine Translation> has been selected for a Google Research Award 2016 (it's a bit confusing whether it's 2016 or 2017; the deadline was in 2016, but the decision came in 2017.) I'd like to thank Google for this award, which will greatly help my research. Gotta go buy a few more GPUs!

Best paper runner-up at NAACL'16

posted May 22, 2016, 7:36 PM by KyungHyun Cho

A paper by Orhan Firat, me, and Yoshua Bengio on multi-way, multilingual neural machine translation is, sadly but also happily, a best paper runner-up at NAACL'16. You can find the paper at 

https://arxiv.org/abs/1601.01073

The code has also been made public recently by Orhan at 

https://github.com/nyu-dl/dl4mt-multi


[Closed] A Post-Doctoral Researcher Position in Deep Learning for Medical Image Analysis

posted Feb 16, 2016, 10:58 AM by KyungHyun Cho   [ updated Mar 15, 2016, 8:46 AM ]

Update on March 15, 2016

Thanks for sending me your CV! I have screened the applications and have made an offer. 



Prof. Kyunghyun Cho (http://www.kyunghyuncho.me/) at the Computational Intelligence, Learning, Vision, and Robotics (CILVR) Group (http://cilvr.cs.nyu.edu/), Department of Computer Science (https://cs.nyu.edu/), New York University invites applications for a postdoctoral position on deep learning for medical image analysis. 

Applicants are expected to have a strong background and experience in developing and investigating deep neural networks for computer vision, in addition to good knowledge of machine learning and excellent programming skills. Applicants should be able to implement deep neural networks, including multilayered convolutional networks and recurrent networks, for large-scale data consisting of many high-resolution images and associated textual descriptions.

The appointment will be for one year, with the option of renewing for a further year, dependent on satisfactory performance. The candidate will be expected to interact with other students and faculty members in CILVR. 

To be considered for the position, send your CV, a list of publications, and the contact details of two references to kyunghyun.cho@nyu.edu.

Google Faculty Award: Fall 2015

posted Feb 15, 2016, 5:25 AM by KyungHyun Cho

I have been awarded a Google Faculty Award (Fall 2015) in the field of machine translation. I am honoured to be a recipient of this award and will use it toward further advancing my machine translation research.

DeepMind Q&A Data

posted Jan 14, 2016, 8:22 PM by KyungHyun Cho

One major issue with research in Q&A is that there is no controlled, large-scale standard benchmark dataset available. There are a number of open-ended Q&A datasets, but they often require a system to have access to external resources. This makes it difficult for researchers to compare different models in a unified way. 

Recently, one such large-scale standard Q&A dataset was proposed by Hermann et al. (2015). In this dataset, each question comes with a context, and one of the words in the context is the answer to the question. And.. wait.. I just realized that I don't have to explain the dataset or the task here at all. After all, it's neither my data/task nor my paper. I will just leave a link to the original paper: 


So, what is the issue with this dataset? The dataset itself was not published online. I can understand why, even without asking them (though I neither confirm nor deny any interaction between me and DeepMind or anyone there), and you can probably guess as well (though, of the two reasons you guessed, it is probably the less evil one). They instead released a script to generate the dataset, and I am grateful for their effort.
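(For anyone new to the task: each example is cloze-style, i.e., the question is a sentence with a placeholder token, and the answer is one of the anonymized entities appearing in the context. Below is a rough sketch, entirely my own and not from the paper or the official script, of how one generated example might be read; the file layout, the paths, and the entity-mapping format in it are assumptions, so check the script's actual output before relying on them.)

    # A minimal sketch of reading one generated example.
    # ASSUMPTION: each *.question file stores the article URL, the context,
    # the cloze-style question (with an "@placeholder" token), the answer
    # entity, and the entity-anonymization mapping as blank-line-separated
    # blocks; the "@entityN:Name" mapping format is also assumed.
    def read_question_file(path):
        with open(path, encoding="utf-8") as f:
            blocks = [b.strip() for b in f.read().split("\n\n") if b.strip()]
        url, context, question, answer = blocks[:4]
        # Any remaining block maps anonymized markers back to original strings.
        entity_map = {}
        if len(blocks) > 4:
            for line in blocks[4].splitlines():
                marker, name = line.split(":", 1)
                entity_map[marker] = name
        return {"url": url, "context": context, "question": question,
                "answer": answer, "entities": entity_map}

    # Hypothetical path, for illustration only.
    example = read_question_file("cnn/questions/training/some_article.question")
    print(example["question"])  # contains "@placeholder", to be filled with an entity
    print(example["answer"])    # e.g. "@entity5", one of the entities in the context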

It is unfortunately never fun to spend a few hours generating a dataset, is it? No need to worry about your laziness anymore, because I generated the dataset and am making it available for you to download at 


Enjoy Q&A'ing!
- K

<Lost in Interpretability>

posted Oct 1, 2015, 1:55 PM by KyungHyun Cho

The Center for Data Science (CDS) at NYU has a weekly lunch seminar series. Each Monday, one speaker gives an (informal) presentation on any topic she/he wants to talk about, or at least so I thought. Anyway, I thought it would be a good chance to discuss with people (students and research fellows at CDS as well as faculty members from various departments all over NYU) what the interpretability of machine learning models means. I prepared a set of slides based on an excellent article, <Statistical Modeling: The Two Cultures> by Leo Breiman.

Instead of trying to write what I've talked about here, I'll put a link to my slides:


Oh, right, there were a lot of wonderful discussions on this topic during and after the talk. 



Summer School in Helsinki, Finland

posted Sep 8, 2015, 7:42 PM by KyungHyun Cho

I was invited to give lectures on natural language processing with deep learning at the DENIS Summer School held in Espoo, Finland this year. It was really good to be back in Finland after 1.5 years (or more like 2 years, since I was travelling in France and Italy for half of the month I stayed there 1.5 years ago). The weather was amazing, the sauna was pretty cool (shame I didn't have enough time to enjoy it fully!), and it was great to meet friends and former colleagues there.

The Summer School itself was also great, with some awesome talks by Razvan Pascanu from Google DeepMind and Tapani Raiko from Aalto University, and with an enthusiastic audience. Razvan talked to us about Theano and reinforcement learning, and Tapani about his latest work on semi-supervised learning with a ladder network (congrats to the authors of this paper on the NIPS acceptance!)

The slides for my talk can be found here: lecture 1 and lecture 2.


Joining NYU this Autumn

posted May 11, 2015, 6:30 PM by KyungHyun Cho

This autumn (2015), I am joining NYU as an assistant professor, jointly appointed at the Department of Computer Science (Courant Institute of Mathematical Sciences) and the Center for Data Science. Assuming all immigration-related matters go through without any trouble, I will start on 1 September. 

You can find my job talk slides here.

Quick Introduction to Natural Language Processing with Neural Networks

posted Apr 25, 2015, 11:20 AM by KyungHyun Cho   [ updated Apr 25, 2015, 11:20 AM ]

I recently gave a lecture on how to use neural nets for natural language processing in the course <INF8225 - Intelligence artificielle : techniques probabilistes et d’apprentissage> at the École Polytechnique de Montréal. It was a very fun experience, with extremely lively discussion from engaged students. 

Deep Learning Workshop at ICML 2015

posted Apr 14, 2015, 12:59 PM by KyungHyun Cho

I will co-organize the Deep Learning Workshop at ICML 2015 in Lille, France together with Geoff Hinton, Yann LeCun, Yoshua Bengio, Max Welling and Durk Kingma. This is my first time organizing a workshop (or, as a matter of fact, any large academic meeting), and I have made a few blunders so far, but I believe this workshop will be pretty awesome, with a great lineup of invited speakers:

First day (July 10)
Tara Sainath, Google
Yann Ollivier, Paris-Sud University
Oriol Vinyals, Google
Jason Weston, Facebook
Jorge Nocedal, Northwestern University

Second day (July 11)
Neil Lawrence, Sheffield University
Roland Memisevic, University of Montreal
Rajesh Ranganath, Princeton University
Ian Goodfellow, Google
Karol Gregor, Google DeepMind

Unlike the previous workshops on deep learning, it will be a two-day workshop this year. This was decided based on the number of attendees at the NIPS workshop on deep learning last year, where attendance went over 600. 

We will receive extended abstracts (2-8 pages, plus up to 2 pages of references only), which will be reviewed by yet another set of awesome reviewers. The accepted abstracts will not be archived as proceedings but will simply be made available. Also, note that these are abstracts, meaning the full version can be submitted later to any other publication venue without violating the submission policy (just make sure to change the title, substantially shorten or rewrite the content, and clearly state that it is an abstract.)

For more information, visit https://sites.google.com/site/deeplearning2015/, or contact me by email!

