While deep learning has achieved remarkable success in supervised and reinforcement learning problems such as image classification, speech recognition, and game playing, these models remain largely specialized for the single task they are trained on.
This course will cover the setting where there are multiple tasks to be solved, and study how the structure arising from multiple tasks can be leveraged to learn more efficiently or effectively. This includes:
- goal-conditioned reinforcement learning techniques that leverage the structure of the provided goal space to learn many tasks significantly faster
- meta-learning methods that aim to learn efficient learning algorithms that can learn new tasks quickly
- curriculum and lifelong learning, where the problem requires learning a sequence of tasks, leveraging their shared structure to enable knowledge transfer
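To make the meta-learning idea above concrete, here is a minimal sketch of first-order MAML (a common optimization-based meta-learning method) on toy 1-D linear-regression tasks. The task distribution, model, and all hyperparameters are illustrative assumptions, not material from the course; a full treatment appears in the lectures on optimization-based meta-learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Illustrative task distribution: 1-D regression y = a * x with a random slope a.
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=20)
    return x, a * x

def loss_and_grad(w, x, y):
    # Mean squared error of the scalar model y_hat = w * x, and its gradient in w.
    err = w * x - y
    return np.mean(err ** 2), np.mean(2 * err * x)

def fomaml(meta_steps=300, tasks_per_batch=8, inner_lr=0.1, outer_lr=0.05):
    # First-order MAML: for each sampled task, adapt w with one inner gradient
    # step, then update the meta-parameter using the post-adaptation gradient
    # (ignoring second-order terms, hence "first-order").
    w = 0.0
    for _ in range(meta_steps):
        outer_grad = 0.0
        for _ in range(tasks_per_batch):
            x, y = make_task()
            _, g = loss_and_grad(w, x, y)
            w_adapted = w - inner_lr * g            # inner adaptation step
            _, g_adapted = loss_and_grad(w_adapted, x, y)
            outer_grad += g_adapted                 # first-order outer gradient
        w -= outer_lr * outer_grad / tasks_per_batch
    return w
```

The meta-learned initialization `w` is chosen so that a single gradient step on a new task's data already reduces that task's loss substantially; that "fast adaptation from a shared starting point" is the essence of the methods covered in the course.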
Stanford CS330: Multi-Task and Meta-Learning, 2019
- Lecture 1 – Introduction & Overview
- Lecture 2 – Multi-Task & Meta-Learning Basics
- Lecture 3 – Optimization-Based Meta-Learning
- Lecture 4 – Non-Parametric Meta-Learners
- Lecture 5 – Bayesian Meta-Learning
- Lecture 6 – Reinforcement Learning Primer
- Lecture 7 – Kate Rakelly (UC Berkeley)
- Lecture 8 – Model-Based Reinforcement Learning
- Lecture 9 – Lifelong Learning
- Lecture 10 – Jeff Clune (Uber AI Labs)
- Lecture 11 – Sergey Levine (UC Berkeley)
- Lecture 12 – Frontiers and Open Challenges
Source: http://cs330.stanford.edu/