|Date||February 1, 2018 (Thursday)|
|Speaker||Prof. Winston Hsu (National Taiwan University)|
|Profile||Prof. Winston Hsu is an active researcher dedicated to large-scale image/video retrieval and mining, visual recognition, and machine intelligence. He is keen on translating advanced research into business deliverables through academia-industry collaborations and co-founded startups. He is a Professor in the Department of Computer Science and Information Engineering, National Taiwan University, where he founded the MiRA (Multimedia indexing, Retrieval, and Analysis) research group and co-leads the Communication and Multimedia Lab (CMLab). Working closely with industry, he was a Visiting Scientist at Microsoft Research Redmond (2014) and spent his sabbatical (2016-2017) at the IBM TJ Watson Research Center, New York, enhancing Watson's visual cognition, where he contributed the first AI-produced movie trailer. He is the founding Director of the NVIDIA AI Lab at NTU, the first in Asia. He received his Ph.D. (2007) from Columbia University, New York. Before that, he was a founding engineer and R&D manager at CyberLink Corp. He serves as an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) and IEEE Transactions on Multimedia (TMM), two premier journals, and was on the Editorial Board of IEEE Multimedia Magazine (2010-2017).|
|Title||Learning from Unstructured and Multimodal Data Streams|
Images, videos, audio, user-contributed logs, and other unstructured data types are now essential to disruptive opportunities in social media, entertainment, education, healthcare, IoT, and beyond. However, existing techniques fall far short of these needs. To leverage multimodal data streams with effective deep neural networks, we will present advanced and novel methods that jointly consider spatial and sequential neural networks and their variations, which effectively model multimodal data streams. We will show the importance of, and the challenges in, cross-domain learning, i.e., mapping data of different types into the same semantic space, and present several solutions. We will demonstrate how to utilize such diverse modalities to improve challenging learning tasks in end-to-end neural networks. We will showcase recent works published in top venues (e.g., CVPR, AAAI, ICCV) and cognitive applications deployed in online services in collaboration with international partners (e.g., Microsoft Research, IBM).
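To make the cross-domain learning idea concrete, here is a minimal, hypothetical sketch of a two-tower embedding: modality-specific features (e.g., CNN image features and RNN text features) are projected into one shared semantic space where they can be compared directly. All dimensions and the random projections are illustrative stand-ins; in a real system the towers would be trained end-to-end with a contrastive or ranking loss, as in the cross-modal retrieval literature the talk draws on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions: image features (e.g., from a CNN)
# and text features (e.g., from an RNN) live in different spaces.
IMG_DIM, TXT_DIM, SHARED_DIM = 512, 300, 128

# Random matrices stand in for the learned linear "towers";
# in practice these would be optimized with a contrastive loss.
W_img = rng.standard_normal((IMG_DIM, SHARED_DIM)) / np.sqrt(IMG_DIM)
W_txt = rng.standard_normal((TXT_DIM, SHARED_DIM)) / np.sqrt(TXT_DIM)

def embed(features, W):
    """Project modality-specific features into the shared space
    and L2-normalize so dot products are cosine similarities."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy batch: 4 image/caption pairs.
img_feats = rng.standard_normal((4, IMG_DIM))
txt_feats = rng.standard_normal((4, TXT_DIM))

z_img = embed(img_feats, W_img)
z_txt = embed(txt_feats, W_txt)

# Cross-modal similarity matrix: entry (i, j) scores image i
# against caption j, enabling retrieval across modalities.
sim = z_img @ z_txt.T
print(sim.shape)  # (4, 4)
```

Once both modalities live in the same space, retrieval reduces to a nearest-neighbor search over the similarity matrix.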
|Site||Hokkaido U. IST 6-03 seminar room|