The TensorFlow Dev Summit was hosted at Google’s headquarters in Mountain View, California, on February 15th, 2017. The TensorFlow team and machine learning experts from around the world met for a full day of technical talks, demos, and conversations.
Wide models are great for memorization, deep models are great for generalization — why not combine them to create even better models? In this talk, Heng-Tze Cheng, currently a software engineer and researcher in the Large-Scale Machine Learning Team at Google Research, explains Wide and Deep networks and gives examples of how they can be used.
Wide & Deep Learning
Memorization + Generalization with TensorFlow
“Combine the power of memorization and generalization on one unified machine learning platform for everyone”
Memorization + Generalization
Wide (memorization): “Seagulls can fly.” “Pigeons can fly.”
Deep (generalization): “Animals with wings can fly.”
Wide + Deep (generalization plus memorized exceptions):
“Animals with wings can fly, but penguins cannot fly.”
Wider... Deeper... Together...
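The idea above can be sketched numerically: the wide part is a linear model over sparse (often crossed) indicator features, which memorizes specific co-occurrences, while the deep part is a small neural network over dense embeddings, which generalizes. The two logits are simply summed and passed through one sigmoid, and both parts are trained jointly. A minimal NumPy sketch, where every feature and weight value is a made-up illustration (this is not the TensorFlow API itself):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Wide part: a linear model over sparse binary indicator features,
# e.g. one-hot crossed features like "has_wings AND species=penguin".
wide_features = np.array([1.0, 0.0, 1.0])   # which indicators are active
wide_weights  = np.array([0.8, -1.2, 2.5])  # learned linear weights

# Deep part: a tiny two-layer network over a dense embedding.
embedding = np.array([0.3, -0.7, 0.1, 0.9])  # learned dense representation
W1 = np.full((4, 3), 0.1)                    # hidden-layer weights
W2 = np.full(3, 0.5)                         # output-layer weights

def wide_logit(x, w):
    # Memorization: a weighted sum over the active sparse features.
    return float(x @ w)

def deep_logit(e, w1, w2):
    # Generalization: a ReLU hidden layer over the dense embedding.
    hidden = np.maximum(e @ w1, 0.0)
    return float(hidden @ w2)

# Wide & Deep: the two logits are summed, then squashed by one sigmoid;
# in training, both parts receive gradients from the same label.
logit = wide_logit(wide_features, wide_weights) + deep_logit(embedding, W1, W2)
prob = sigmoid(logit)
print(prob)
```

In the TensorFlow of that era this combination was packaged as the `DNNLinearCombinedClassifier` estimator, which takes separate wide (linear) and deep (DNN) feature columns and trains them jointly.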
Who is Heng-Tze Cheng?
Heng-Tze Cheng is currently a software engineer and researcher on the Large-Scale Machine Learning Team at Google Research. His top mission is to give computers the power to learn, and to understand and appreciate the meaning behind signals as humans do (or even beyond what humans can do).
Heng-Tze founded the Wide & Deep Learning project in TensorFlow, and has worked on large-scale machine learning platforms that are widely used for retrieval, ranking, and recommender systems. Prior to joining Google, Heng-Tze received his Ph.D. from Carnegie Mellon University in 2013 and B.S. from National Taiwan University in 2008. His research interests range across machine learning, information retrieval, user behavior modeling, and human activity recognition.