Reality Lab Lecture: Andrew Rabinovich
Description
The Reality Lab Lectures - Tuesday, April 23, 2019
TALK TITLE: Multi-Task Learning for Computer Vision
SPEAKER: Andrew Rabinovich (Director of Deep Learning, Head of AI / Magic Leap)
TALK ABSTRACT: Deep multitask networks, in which one neural network produces multiple predictive outputs, are more scalable and often better regularized than their single-task counterparts. Such advantages can potentially lead to gains in both speed and performance, but multitask networks are also difficult to train without finding the right balance between tasks.
In this talk, I will present novel gradient-based methods that automatically balance the multitask loss function by directly tuning the gradients to equalize task training rates. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, these techniques improve accuracy and reduce overfitting compared to single-task networks, static baselines, and other adaptive multitask loss balancing techniques. They match or surpass the performance of exhaustive grid search methods. Thus, what was once a tedious search process that incurred exponentially more compute for each added task can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we hope to demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.
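For a concrete picture of the kind of gradient manipulation described above, the sketch below is a minimal, illustrative PyTorch training loop in the spirit of gradient-norm-based loss balancing. It is not code from the talk; the two-task toy model, the learnable task weights, and the hyperparameter alpha are assumptions made for the example. Each step measures the gradient norm that each weighted task loss induces on the shared layer, compares it with a target derived from how quickly each task has been training, and nudges the task weights so that the training rates equalize.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy two-task network: a shared trunk with a regression head and a
# classification head. Entirely illustrative; not the architecture from the talk.
class TwoTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Linear(10, 32)        # shared representation
        self.head_reg = nn.Linear(32, 1)      # task 1: regression
        self.head_cls = nn.Linear(32, 3)      # task 2: 3-way classification

    def forward(self, x):
        z = torch.relu(self.trunk(x))
        return self.head_reg(z), self.head_cls(z)

model = TwoTaskNet()
n_tasks = 2
task_weights = nn.Parameter(torch.ones(n_tasks))   # learnable loss weights
opt_model = torch.optim.Adam(model.parameters(), lr=1e-3)
opt_weights = torch.optim.Adam([task_weights], lr=1e-2)
alpha = 1.5   # assumed hyperparameter: how strongly to pull training rates together

x = torch.randn(64, 10)
y_reg = torch.randn(64, 1)
y_cls = torch.randint(0, 3, (64,))
initial_losses = None

for step in range(200):
    out_reg, out_cls = model(x)
    losses = torch.stack([F.mse_loss(out_reg, y_reg),
                          F.cross_entropy(out_cls, y_cls)])
    if initial_losses is None:
        initial_losses = losses.detach()

    weighted = task_weights * losses
    total_loss = weighted.sum()

    # Norm of each weighted task gradient at the shared trunk parameters.
    shared = list(model.trunk.parameters())
    grad_norms = torch.stack([
        torch.cat([g.flatten() for g in torch.autograd.grad(
            weighted[i], shared, retain_graph=True, create_graph=True)]).norm()
        for i in range(n_tasks)])

    # Inverse training rate: tasks whose loss has dropped less get a larger target norm.
    loss_ratio = losses.detach() / initial_losses
    inverse_rate = loss_ratio / loss_ratio.mean()
    target = (grad_norms.mean() * inverse_rate ** alpha).detach()

    # Balancing objective: push each task's gradient norm toward its target.
    balance_loss = (grad_norms - target).abs().sum()

    opt_model.zero_grad()
    total_loss.backward(retain_graph=True)
    # Task weights are updated only from the balancing objective.
    task_weights.grad = torch.autograd.grad(balance_loss, task_weights)[0]
    opt_model.step()
    opt_weights.step()

    # Renormalize so the weights keep summing to the number of tasks.
    with torch.no_grad():
        task_weights.data.mul_(n_tasks / task_weights.data.sum())
```

The design choice worth noting is that the task weights receive gradients only from the balancing objective, while the network parameters receive gradients only from the weighted sum of task losses; this separation is what lets the weights adapt to equalize training rates without the tasks simply driving their own weights to zero.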
Event held on the UW-Seattle Campus and recorded by UW CSE Production Team
© UW Reality Lab, 2019
http://realitylab.uw.edu