[2013 Academic Talk No. 7] Facing the NLP Emergency
Title: Facing the NLP Emergency
Speaker: Dr. Erik Cambria
Time: 2:00 PM, October 18, 2013
Place: Room 1-515, FIT Building
Organizer: Research Institute of Information Technology (RIIT), Tsinghua University
Erik Cambria received his BEng and MEng with honors in Electronic Engineering from the University of Genoa in 2005 and 2008, respectively. In 2011, he was awarded his PhD in Computing Science and Mathematics upon completing an industrial Cooperative Awards in Science and Engineering (CASE) research project, born from a collaboration between the University of Stirling, Sitekit Solutions Ltd., and the MIT Media Laboratory, where he currently works as an associate researcher in the Synthetic Intelligence Project. Today, Dr. Cambria is also a lead investigator of the Cognitive Science Programme at NUS Temasek Laboratories, where he carries out research in fields such as AI, KR, NLP, big social data analysis, affective and cognitive modeling, intention awareness, HCI, and e-health. Dr. Cambria is an invited speaker/tutor at many international venues, e.g., WWW, IEEE SSCI, and MICAI; an associate editor of Springer Cognitive Computation; and a lead guest editor of top AI journals, e.g., IEEE Computational Intelligence Magazine, Elsevier Knowledge-Based Systems, and IEEE Intelligent Systems. He is also chair of several international conferences, e.g., Extreme Learning Machines (ELM), and workshop series, e.g., ICDM SENTIRE, KDD WISDOM, and WWW MABSDA.
In a Web where user-generated content has already hit critical mass, the need for automated systems that filter out noise and aggregate meaningful information is growing exponentially. The democratization of online content creation has, in fact, led to an increase in Web debris, which inevitably and negatively affects information retrieval, aggregation, and processing. Machine-learning techniques have achieved high classification efficiency on such tasks in recent years, with considerable success. However, some of the most effective machine-learning algorithms produce no human-understandable results: although they may achieve improved accuracy, little is known about how and why, apart from some superficial insight gained during manual feature engineering. We are facing an NLP crisis caused by the fact that machine-learning techniques cannot go beyond the syntactic structure of text and, hence, lack domain adaptivity and implicit semantic feature inference. Before such techniques reach saturation, NLP researchers need to 'jump the curve' toward concept-level text analysis. Despite still being limited by the richness of available knowledge bases and ontologies, semantic-based approaches are already making inroads into competing with traditional algorithms, owing to their nature of more closely emulating the way the human mind processes natural language.