[2017 Academic Lecture 01] Machine Learning with Connectionist Models – A Developmental Perspective
Title: Machine Learning with Connectionist Models – A Developmental Perspective
Speaker: B. H. Juang, Professor, Georgia Institute of Technology
Biing-Hwang (Fred) Juang is a Chaired Professor at the Georgia Institute of Technology and a Georgia Research Alliance Eminent Scholar. He is a member of the U.S. National Academy of Engineering, a Fellow of the National Academy of Inventors, an Academician of Academia Sinica, an IEEE Fellow, and an honorary chair professor at several renowned universities. Before joining Georgia Tech, he was a department head in information research at Bell Labs, responsible for research on acoustics, speech, and the modeling of human information. Professor Juang's scholarly contributions are extensive, spanning vector quantization, hidden Markov models, speech coding and recognition, and multichannel echo control; his book Fundamentals of Speech Recognition is widely regarded as a classic in the field. He has received numerous honors, including the Bell Labs Gold Award, the IEEE Signal Processing Society Technical Achievement Award, and the IEEE J. L. Flanagan Field Medal.
There has been a recent surge in research activity around the idea of so-called "deep neural networks" (DNN). As a technical subject, DNN is without a doubt an important classroom topic, and several tutorial articles and related learning resources are available. Nevertheless, streams of questions about DNN from students and researchers never subside, and there appears to be a frustrating tendency among learners to treat DNN simply as a black box. In this talk, a pedagogy is attempted that aims to present DNN within the well-established traditional pattern recognition framework, so that a deeper understanding of DNN can be reached through proper contrast with conventional techniques. In particular, we review the developmental aspect of DNN, in terms of how advances in connectionist models have evolved into this powerful technique. Time permitting, we will discuss the application of DNN in the area of automatic speech recognition, so as to ascertain its efficacy compared to traditional statistical modeling, and to bring to the surface possibly unrealized potentials of both DNN and conventional techniques.