My research proposal on <A Trainable Decoding Algorithm for Neural Machine Translation> has been selected for a Google Research Award 2016 (it's a bit confusing whether it's 2016 or 2017: the deadline was in 2016, but the decision came in 2017.) I'd like to thank Google for this award, which will greatly help my research. Gotta go buy a few more GPUs!
For more info, see https://research.googleblog.com/2017/02/google-research-awards-2016.html.
A paper by Orhan Firat, Yoshua Bengio, and me on multi-way, multilingual neural machine translation is sadly, but also happily, a best-paper runner-up at NAACL'16. You can find the paper at
Orhan has also recently made the code public at
Update on March 15, 2016
Thanks for sending me your CV! I have screened the applications and have made an offer.
Prof. Kyunghyun Cho (http://www.kyunghyuncho.me/) at the Computational Intelligence, Learning, Vision, and Robotics (CILVR) Group (http://cilvr.cs.nyu.edu/), Department of Computer Science (https://cs.nyu.edu/), New York University invites applications for a postdoctoral position on deep learning for medical image analysis.
Applicants are expected to have a strong background and experience in developing and investigating deep neural networks for computer vision, in addition to good knowledge of machine learning and excellent programming skills. Applicants should be able to implement deep neural networks, including multilayered convolutional networks and recurrent networks, for large-scale data consisting of many high-resolution images and associated textual descriptions.
The appointment will be for one year, with the option of renewal for a further year, dependent on satisfactory performance. The candidate will be expected to interact with other students and faculty in CILVR.
To be considered for the position, send your CV, a list of publications, and the contact details of two references to email@example.com.
I have been awarded a Google Faculty Award (Fall 2015) in the field of machine translation. I am honoured to be a recipient of this award and will use it toward further advancing my machine translation research.
For more details, see http://googleresearch.blogspot.com/2016/02/google-research-awards-fall-2015.html.
One major issue with research in Q&A is that there is no controlled, large-scale standard benchmark dataset available. There are a number of open-ended Q&A datasets, but they often require a system to have access to external resources. This makes it difficult for researchers to compare different models in a unified way.
Recently, one such large-scale standard Q&A dataset was proposed by Hermann et al. (2015). In this dataset, a question comes along with a context, in which one of the words is the answer to the question. And.. wait.. I just realized that I don't have to explain the dataset or the task here at all. After all, it's not my data/task nor my paper. I will just leave a link to the original paper:
So, what is the issue with this dataset? The dataset itself was not published online. I can understand why, even without asking them (though I neither confirm nor deny any interaction between me and DeepMind or anyone there,) and you can probably guess as well (though, of the two reasons you guessed, it's probably the less evil one.) They instead released a script to generate the dataset, and I am grateful for their effort.
It is unfortunately never fun to spend a few hours generating a dataset, is it? No need to worry about your laziness anymore, because I generated the dataset and am making it available for you to download at
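If you want to work with the generated files, here is a minimal sketch of a parser. It assumes the layout produced by the original generation script: blank-line-separated blocks containing the source URL, the context, the question (with an @placeholder token), the answer entity, and the anonymized entity mapping; the exact field names and helper below are my own, not part of the released dataset.

```python
def parse_question_file(text):
    """Split one question file into its fields.

    Assumes five blank-line-separated blocks: URL, context,
    question (containing an @placeholder token), answer entity,
    and the anonymized entity mapping (one "@entityN:Name" per line).
    """
    blocks = text.strip().split("\n\n")
    url, context, question, answer = blocks[:4]
    # Remaining lines map anonymized tokens back to entity strings.
    entities = dict(
        line.split(":", 1)
        for block in blocks[4:]
        for line in block.split("\n")
    )
    return {"url": url, "context": context,
            "question": question, "answer": answer,
            "entities": entities}
```

With a file in this layout, `parse_question_file(open(path).read())["answer"]` gives the anonymized answer token, which can be looked up in the `entities` dictionary to recover the original string.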
The Center for Data Science (CDS) at NYU has a weekly lunch seminar series. Each Monday, one speaker gives an (informal) presentation on any topic she/he wants to talk about, or at least so I thought. Anyway, I thought it would be a good chance to discuss with people (students and research fellows at CDS, as well as faculty members from various departments all over NYU) what the interpretability of machine learning models means. I prepared a set of slides based on an excellent article <Statistical Modeling: The Two Cultures> by Leo Breiman.
Instead of trying to write what I've talked about here, I'll put a link to my slides:
Oh, right, there were a lot of wonderful discussions on this topic during and after the talk.
I was invited to give lectures on natural language processing with deep learning at the DENIS Summer School held in Espoo, Finland this year. It was really good to be back in Finland after 1.5 years (or more like 2 years, since I was travelling in France and Italy for half of the month I stayed there 1.5 years ago.) The weather was amazing, the sauna was pretty cool (a shame I didn't have enough time to enjoy it fully!), and it was great to meet friends and former colleagues there.
The Summer School itself was also great, with awesome talks by Razvan Pascanu of Google DeepMind and Tapani Raiko of Aalto University, and an enthusiastic audience. Razvan talked to us about Theano and reinforcement learning, and Tapani about his latest work on semi-supervised learning with a ladder network (congrats to the authors of this paper on the NIPS acceptance!)
This autumn (2015), I am joining NYU as an assistant professor, jointly appointed at the Department of Computer Science (Courant Institute of Mathematical Sciences) and the Center for Data Science. Assuming all immigration-related matters are resolved without trouble, I will start on 1 September.
You can find my job talk slides here.
I recently gave a lecture on how to use neural nets for natural language processing in the course <INF8225 - Intelligence artificielle : techniques probabilistes et d’apprentissage> (Artificial Intelligence: Probabilistic and Learning Techniques) at the École Polytechnique de Montréal. It was a very fun experience, with extremely lively discussion with engaged students.
The slides can be found at https://drive.google.com/file/d/0B16RwCMQqrtdNEhwbHN2bXJzdXM/view?usp=sharing.
I will co-organize the Deep Learning Workshop at ICML 2015 in Lille, France, together with Geoff Hinton, Yann LeCun, Yoshua Bengio, Max Welling and Durk Kingma. This is my first time organizing a workshop (or, as a matter of fact, any large academic meeting), and I have made a few blunders so far, but I believe this workshop will be pretty awesome with an awesome lineup of invited speakers:
First day (July 10)
Tara Sainath, Google
Yann Ollivier, Paris-Sud University
Oriol Vinyals, Google
Jason Weston, Facebook
Jorge Nocedal, Northwestern University
Second day (July 11)
Neil Lawrence, Sheffield University
Roland Memisevic, University of Montreal
Rajesh Ranganath, Princeton University
Ian Goodfellow, Google
Karol Gregor, Google DeepMind
Unlike the previous workshops on deep learning, it will be a two-day workshop this year. This was decided based on attendance at last year's NIPS workshop on deep learning, where the number of attendees went over 600.
We will accept extended abstracts (2-8 pages, plus up to 2 pages of references) which will be reviewed by yet another set of awesome reviewers. The accepted abstracts will not be archived as proceedings but will only be made available online. Also, note that these are abstracts, meaning the full version can be submitted later to any other publication venue without violating the submission policy (just make sure to change the title, substantially shorten or rewrite the content, and clearly state that it's an abstract.)
For more information, visit https://sites.google.com/site/deeplearning2015/, or contact me by email!