Deep Learning: Recent Advances and New Challenges: Ruslan Salakhutdinov
In the first part of the talk, Ruslan introduces XLNet, a generalized autoregressive pretraining method that enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and, thanks to its autoregressive formulation, overcomes some limitations of BERT. He further shows how XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining.
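For readers unfamiliar with the objective mentioned above, here is a minimal sketch of the permutation language modeling idea: a Monte Carlo estimate of the expected log-likelihood over sampled factorization orders. The helper `log_prob_fn` is a hypothetical stand-in for the model; this is an illustration of the general idea, not the talk's or the paper's implementation.

```python
import math
import random

def permutation_lm_loss(tokens, log_prob_fn, num_samples=4):
    """Monte Carlo estimate of a permutation LM objective:
    E_{z ~ Z_T}[ sum_t log p(x_{z_t} | x_{z_<t}) ], where Z_T is the set
    of permutations of the factorization order. `log_prob_fn(target, context)`
    is a hypothetical model callable returning a log-probability."""
    T = len(tokens)
    total = 0.0
    for _ in range(num_samples):
        order = random.sample(range(T), T)            # one factorization order z
        for t, pos in enumerate(order):
            context = [tokens[p] for p in order[:t]]  # tokens preceding z_t in this order
            total += log_prob_fn(tokens[pos], context)
    # negative mean log-likelihood over the sampled permutations
    return -total / num_samples

if __name__ == "__main__":
    # toy usage: a uniform "model" over a 10-token vocabulary
    uniform = lambda tok, ctx: math.log(1.0 / 10)
    print(permutation_lm_loss([3, 1, 4, 1, 5], uniform))
```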
In the second part of the talk, the speaker shows how to design modular hierarchical reinforcement learning agents for visual navigation that can perform tasks specified by natural language instructions, carry out efficient exploration and long-term planning, and learn to build a map of the environment while generalizing across domains and tasks.
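To make the modular structure concrete, the following is a minimal sketch of a two-level agent, assuming a high-level policy that maps the instruction and current observation to a subgoal and a low-level policy that emits primitive navigation actions. The class names, the toy environment, and the subgoal format are illustrative assumptions, not details from the talk.

```python
import random

ACTIONS = ["forward", "turn_left", "turn_right", "stop"]

class HighLevelPolicy:
    """Hypothetical high-level module: grounds the instruction into a subgoal."""
    def select_subgoal(self, instruction, observation):
        # placeholder: a real module would ground the instruction in a learned map
        return {"type": "reach", "target": instruction.split()[-1]}

class LowLevelPolicy:
    """Hypothetical low-level module: outputs primitive actions toward the subgoal."""
    def act(self, subgoal, observation):
        # placeholder: a real module would plan a path toward the subgoal
        return random.choice(ACTIONS)

class ToyEnv:
    """Stand-in environment used only to make the sketch runnable."""
    def reset(self):
        return {"position": 0}
    def step(self, action):
        done = action == "stop"
        return {"position": 0}, done

def run_episode(env, instruction, high, low, steps_per_subgoal=5, max_steps=50):
    obs = env.reset()
    subgoal = None
    for step in range(max_steps):
        if step % steps_per_subgoal == 0:
            subgoal = high.select_subgoal(instruction, obs)  # re-plan periodically
        action = low.act(subgoal, obs)
        obs, done = env.step(action)
        if done:
            break
    return obs

if __name__ == "__main__":
    run_episode(ToyEnv(), "go to the kitchen", HighLevelPolicy(), LowLevelPolicy())
```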
Ruslan Salakhutdinov is a professor of computer science at Carnegie Mellon University. Since 2009, he’s published at least 42 papers on machine learning. His research has been funded by Google, Microsoft, and Samsung. In 2016, Ruslan joined Apple as its director of AI research.
Source: ML in PL