Learning Transferable Knowledge through Embedding Spaces in Machine Learning

Friday, September 13, 2019, 11:00 am - 12:00 pm PDT
10th floor conference room (1014)
This event is open to the public.
AI Seminar
Mohammad Rostami, HRL
The unprecedented processing demand posed by the explosion of big data challenges researchers to design efficient, adaptive machine learning algorithms that do not require persistent retraining and avoid learning redundant information. Inspired by the learning techniques of intelligent biological agents, identifying transferable knowledge across learning problems has become a significant research focus for improving machine learning algorithms. In this talk, we explain how the challenges of knowledge transfer can be addressed through embedding spaces that capture and store hierarchical knowledge.
We first focus on cross-domain knowledge transfer. We explore zero-shot image classification, where the goal is to identify images from unseen classes using semantic descriptions of those classes. We train two coupled dictionaries that align the visual and semantic domains via an intermediate embedding space. We then extend this idea by training deep networks that match the data distributions of two visual domains in a shared cross-domain embedding space. Our approach addresses both semi-supervised and unsupervised domain adaptation settings.
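The coupled-dictionary idea can be pictured with a minimal NumPy sketch. This is not the talk's implementation: the dimensions, the ridge-regularized alternating updates (a simplification of sparse coding, with no sparsity penalty), and all variable names are illustrative assumptions. The key point it shows is that both modalities share one latent code matrix, so a semantic description of an unseen class can be embedded and decoded into a visual prototype.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: visual features (d_v-dim) and semantic descriptions (d_s-dim)
# generated from a shared latent code; all sizes are illustrative.
d_v, d_s, k, n = 20, 10, 5, 100
A_true = rng.normal(size=(k, n))
D_v_true = rng.normal(size=(d_v, k))
D_s_true = rng.normal(size=(d_s, k))
X_v = D_v_true @ A_true          # visual training features
X_s = D_s_true @ A_true          # matching semantic descriptions

# Coupled dictionaries: both modalities share one code matrix A.
# Alternating ridge-regularized least squares (a simplification of
# sparse coding -- no sparsity penalty here).
D_v = rng.normal(size=(d_v, k))
D_s = rng.normal(size=(d_s, k))
lam = 1e-3
for _ in range(100):
    D = np.vstack([D_v, D_s])    # stack modalities to update shared codes
    X = np.vstack([X_v, X_s])
    A = np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ X)
    G = A @ A.T + lam * np.eye(k)
    D_v = np.linalg.solve(G, A @ X_v.T).T
    D_s = np.linalg.solve(G, A @ X_s.T).T

# Zero-shot step: embed an unseen class's semantic description in the
# shared code space, then decode a visual prototype with D_v (used for
# nearest-neighbor classification in practice).
a_new = rng.normal(size=(k, 1))
s_new = D_s_true @ a_new                                   # unseen class
code = np.linalg.solve(D_s.T @ D_s + lam * np.eye(k), D_s.T @ s_new)
v_proto = D_v @ code

target = D_v_true @ a_new
err = float(np.linalg.norm(v_proto - target) / np.linalg.norm(target))
```

On this noise-free toy data the synthesized visual prototype closely matches the true one, because the two dictionaries agree up to an invertible change of basis in the shared code space.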
We then investigate cross-task knowledge transfer. Here, the goal is to identify relations and similarities among multiple machine learning tasks to improve performance across them. We first address zero-shot learning in a lifelong machine learning setting, where the goal is to learn tasks with no data using high-level task descriptions. Our idea is to relate high-level task descriptors to the optimal task parameters through an embedding space. We then develop a method that overcomes catastrophic forgetting in a continual learning setting for deep neural networks by enforcing the tasks to share the same distribution in the embedding space. We further demonstrate that our model can address the challenges of domain adaptation in the continual learning setting.
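The distribution-matching idea can be illustrated with a toy moment-matching sketch. This is an assumption-laden stand-in, not the talk's method: it matches only the first and second moments of the embeddings against statistics stored from an earlier task, and uses a closed-form whitening map where a real continual learner would train the encoder with such a penalty as a regularizer alongside the task loss.

```python
import numpy as np

rng = np.random.default_rng(1)

def moment_penalty(Z, mu_old, cov_old):
    """Illustrative penalty: distance between the empirical mean/covariance
    of embeddings Z and stored statistics of earlier tasks."""
    mu = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False)
    return float(np.sum((mu - mu_old) ** 2) + np.sum((cov - cov_old) ** 2))

# Stored embedding statistics from an earlier task (assumed Gaussian).
mu_old = np.zeros(2)
cov_old = np.eye(2)

# New-task embeddings, initially far from the stored distribution.
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.3], [0.3, 1.5]]) + 4.0

# Closed-form linear map (whitening) that aligns the new embeddings with
# the stored moments, shrinking the penalty to (numerically) zero.
mu_x = X.mean(axis=0)
cov_x = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(cov_x)
W = vecs @ np.diag(vals ** -0.5) @ vecs.T     # cov_x^{-1/2}
Z = (X - mu_x) @ W + mu_old

before = moment_penalty(X, mu_old, cov_old)
after = moment_penalty(Z, mu_old, cov_old)
```

Because old and new tasks occupy the same region of the embedding space after the alignment, a classifier trained on top of the embeddings is less prone to forgetting the earlier task.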
Finally, we consider the problem of cross-agent knowledge transfer. We demonstrate that multiple lifelong machine learning agents can collaborate to increase individual performance by sharing learned knowledge through a shared embedding space, without sharing private data.
Throughout this talk, we demonstrate that, despite major differences, the problems in the above learning scenarios can all be tackled by learning an intermediate embedding space that enables effective knowledge transfer.
Bio: Mohammad Rostami is a research scientist at HRL Labs. He received his Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania, where he also received an M.S. in Robotics and an M.A. in Philosophy. Prior to Penn, he obtained an M.Sc. in Electrical and Computer Engineering from the University of Waterloo, and a B.Sc. in Electrical Engineering and a B.Sc. in Mathematics from Sharif University of Technology. His current research area is transfer and lifelong learning within machine learning.