Academic Talk -- Compositional Deep Learning

Title: Compositional Deep Learning

Time: Monday, August 3, 2015

Venue: Room 326, New Student Activity Center

Abstract: Distributed representations of human language content and structure had a brief boom in the 1980s, but it quickly faded, and the past 20 years have been dominated by the continued use of categorical representations of language, despite the use of probabilities or weights over elements of these categorical representations. However, the last five years have seen a resurgence, with highly successful use of distributed vector space representations, often in the context of "neural" or "deep learning" models. One great success has been distributed word representations, and I will look at some of our recent work and that of others on better understanding word representations and how they can be thought of as global matrix factorizations, bringing them much closer to the traditional literature. But we need more than just word representations: we need to understand the larger linguistic units that are made out of words, a problem which has been much less addressed. I will discuss the use of distributed representations in tree-structured recursive neural network models, showing how they can provide sophisticated linguistic models of semantic similarity, sentiment, syntactic parse structure, and logical entailment.
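The abstract's observation that distributed word representations can be viewed as global matrix factorizations can be sketched in a few lines. The following is a minimal illustration only, assuming a toy two-sentence corpus and a simple SVD of log-smoothed co-occurrence counts; it is not the speaker's actual formulation.

```python
# Hedged sketch: word vectors as a global matrix factorization.
# Toy corpus, 1-word co-occurrence window, and log-count SVD are
# illustrative assumptions, not the method discussed in the talk.
import numpy as np

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a 1-word window.
C = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 1), min(len(words), i + 2)):
            if j != i:
                C[idx[w], idx[words[j]]] += 1

# Low-rank factorization of the log-smoothed counts: the left
# singular vectors, scaled by the square roots of the singular
# values, serve as k-dimensional word embeddings.
U, s, Vt = np.linalg.svd(np.log1p(C))
k = 2
vectors = U[:, :k] * np.sqrt(s[:k])
print(vectors.shape)  # → (7, 2): one k-dimensional vector per word
```

Words that occur in interchangeable contexts (here, "cat"/"dog" and "mat"/"rug") end up with similar rows in `vectors`, which is the intuition behind treating embedding learning as factorizing a global co-occurrence statistic.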


Speaker bio: Christopher Manning is a professor of computer science and linguistics at Stanford University. His research goal is computers that can intelligently process, understand, and generate human language material. Manning concentrates on machine learning approaches to computational linguistic problems, including syntactic parsing, computational semantics and pragmatics, textual inference, machine translation, and hierarchical deep learning for NLP. He is an ACM Fellow, an AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval.


