Computational Linguistics and Information Processing

Colloquia

{{#widget:Google Calendar |id=j68vmmdfnnq8khdrd9f93djv84@group.calendar.google.com |color=B1440E |title=CLIP Events }}

Past Speakers

  • Roger Levy

September 22: Earl Wagner

Presenting the Context of News Events with Brussell

Using content-specific models to guide information retrieval and extraction can give end-users richer interfaces both for understanding the context of news events and for navigating related news articles. This talk presents Brussell, a system that uses semantic models to organize retrieval and extraction results, generating storylines that explain how news event situations unfold as well as biographical sketches of the situation participants. A survey of business news suggests that such news event situations are broadly prevalent, indicating Brussell's potential utility, and the system's performance in finding kidnapping situations is characterized.

Earl J. Wagner is a Postdoctoral Research Associate at the University of Maryland, College Park. He works with Jimmy Lin and Doug Oard on software to help users find documents relevant to their tasks. In particular, he is contributing to Ivory, a toolkit for information retrieval running on Apache's Hadoop, an open-source, MapReduce-based framework for cloud computing. He previously worked with Bank of America as a Research Affiliate with the Center for Future Banking at the MIT Media Lab, where he applied MIT's common sense computing technologies to text analysis tasks in banking. In December 2009, he completed a Ph.D. in Computer Science at Northwestern University for his work designing and developing Brussell, an intelligent news-situation analysis and presentation tool. Before joining Northwestern, Earl earned an M.S. degree at the MIT Media Lab for his work on Woodstein, a prototype tool for consumers to diagnose problems with e-commerce transactions. He earned his bachelor's degree at the University of California, Berkeley, studying computer science and philosophy. He has presented and published his work on Brussell and Woodstein at several conferences and workshops, including the Intelligent User Interfaces conference and the AAAI Spring Symposium. He has also spoken about this work at corporations such as IBM, Intel, Microsoft, and MasterCard, and at universities including MIT, NYU, and Berkeley.

September 29: Eugene Charniak

Top-Down Nearly-Context-Sensitive Parsing

We present a new syntactic parser that works left-to-right and top-down, maintaining fully connected parse trees for a few alternative parse hypotheses. All of the commonly used statistical parsers use context-free dynamic programming algorithms and as such work bottom-up on the entire sentence, so they only find a complete, fully connected parse at the very end. In contrast, both subjective and experimental evidence shows that people understand a sentence word-by-word as they go along, or close to it. The constraint that the parser keeps one or more fully connected syntactic trees is intended to operationalize this cognitive fact. Our parser achieves a new best result for top-down parsers of 89.4%, a 20% error reduction over the previous single-parser best result for parsers of this type, 86.8% (Roark, 2001). The improved performance comes from embracing the very large feature set available in exchange for giving up dynamic programming.
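The core idea can be illustrated in a few lines: treat parsing as a left-to-right search over fully connected partial trees, expanding nonterminals top-down and matching words as they arrive. The sketch below is illustrative only; the toy grammar, lexicon, and best-first search are our own assumptions, not Charniak's parser, which prunes to a small beam of hypotheses and scores them with a much richer feature set.

<pre>
import heapq
from math import log
from itertools import count

# Toy PCFG with no left recursion, so top-down expansion terminates.
RULES = {  # nonterminal -> [(right-hand side, probability), ...]
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("DT", "N"), 0.6), (("N",), 0.4)],
    "VP": [(("V", "NP"), 0.7), (("V",), 0.3)],
}
LEXICON = {  # preterminal -> {word: probability}
    "DT": {"the": 1.0},
    "N":  {"dog": 0.5, "cat": 0.5},
    "V":  {"saw": 1.0},
}

def parse(words, max_pops=10000):
    tie = count()  # tie-breaker so the heap never compares symbol tuples
    # A hypothesis is (neg log prob, tie, symbols still to derive,
    # index of next word, rules used so far); the rule sequence always
    # corresponds to a single fully connected partial tree.
    heap = [(0.0, next(tie), ("S",), 0, ())]
    for _ in range(max_pops):
        if not heap:
            return None
        nlp, _, stack, i, deriv = heapq.heappop(heap)
        if not stack:
            if i == len(words):      # complete, fully connected parse
                return deriv, nlp
            continue                 # tree finished too early; discard
        sym, rest = stack[0], stack[1:]
        if sym in RULES:             # expand the leftmost nonterminal top-down
            for rhs, p in RULES[sym]:
                heapq.heappush(heap, (nlp - log(p), next(tie),
                                      rhs + rest, i, deriv + ((sym, rhs),)))
        elif i < len(words):         # match the next input word left-to-right
            p = LEXICON.get(sym, {}).get(words[i], 0.0)
            if p > 0.0:
                heapq.heappush(heap, (nlp - log(p), next(tie),
                                      rest, i + 1, deriv + ((sym, words[i]),)))
    return None

derivation, neg_logprob = parse("the dog saw the cat".split())
for lhs, rhs in derivation:
    print(lhs, "->", rhs)
</pre>

Note that every hypothesis on the heap is a connected tree covering the words consumed so far, which is exactly the property the abstract contrasts with bottom-up chart parsers; capping the heap to the k cheapest hypotheses at each word position would turn this best-first search into the beam search the talk describes.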


Eugene Charniak is University Professor of Computer Science and Cognitive Science at Brown University and past chair of the Department of Computer Science. He received his A.B. degree in Physics from the University of Chicago and a Ph.D. in Computer Science from M.I.T. He has published four books, the most recent being Statistical Language Learning. He is a Fellow of the American Association for Artificial Intelligence and was previously a Councilor of the organization. His research has always been in the area of language understanding or technologies which relate to it. Over the last 20 years he has been interested in statistical techniques for many areas of language processing, including parsing and discourse.

October 4: Dave Newman

Topic modeling: Are we there yet?

Topic models -- such as Latent Dirichlet Allocation (LDA) -- have been heralded by many as a revolutionary method for extracting semantic content from document collections. The machine learning community has been busy extending the original LDA model in dozens of ways, but this creation of new models has far outpaced broader applications of topic modeling. Why this gap? I will share some insights as to why topic models are not quite ready for prime time, including results from studies of end-users using topics to find and access online resources. I will present a pointwise mutual information (PMI)-based measure that is useful for evaluating topic models, as an alternative to perplexity or log-likelihood of test data. I will then show how one can leverage PMI data to structure Dirichlet priors that regularize the learning of topic models -- particularly for small or noisy document collections -- to learn topics that are more coherent and interpretable.
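To make the PMI idea concrete, the sketch below scores a topic's top words by their average pairwise PMI under document co-occurrence. This is a minimal, assumption-laden illustration: the function name and the use of the modeled collection itself are our own choices, whereas the measure discussed in the talk estimates word co-occurrence from a large external corpus.

<pre>
from math import log
from itertools import combinations

def pmi_coherence(top_words, docs, eps=1e-12):
    """Average pairwise PMI of a topic's top words, with probabilities
    estimated as document-frequency fractions (a toy estimator)."""
    doc_sets = [set(d) for d in docs]
    n = len(doc_sets)

    def p(*ws):  # fraction of documents containing all words in ws
        return sum(all(w in d for w in ws) for d in doc_sets) / n

    pairs = list(combinations(top_words, 2))
    # eps smooths the zero-co-occurrence case, where PMI is undefined.
    return sum(log((p(a, b) + eps) / (p(a) * p(b) + eps))
               for a, b in pairs) / len(pairs)

# Toy usage: a coherent topic scores higher than a mixed one.
docs = [["gene", "dna", "cell"], ["dna", "protein", "cell"],
        ["stock", "market", "trade"], ["market", "price", "stock"]]
print(pmi_coherence(["gene", "dna", "cell"], docs))    # high: words co-occur
print(pmi_coherence(["gene", "stock", "cell"], docs))  # low: words do not
</pre>

The appeal of a measure like this over perplexity is that it directly rewards topics whose top words actually occur together in text that people read, which is what "coherent and interpretable" means in practice.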

David Newman is a Research Scientist in the Department of Computer Science at the University of California, Irvine, currently visiting NICTA in Australia. His research focuses on the theory and application of topic models and related text mining and machine learning techniques. Newman's work combines theoretical advances with practical applications to improve the way people find and discover information. Newman received his PhD from Princeton University.

October 6: EMNLP Practice Talks

Some subset of:

  • Jordan Boyd-Graber
  • Eric Hardisty
  • Hendra Setiawan
  • Amit Goyal