
CLIP Colloquium (Fall 2012)

Computational Linguistics and Information Processing


Revision as of 22:48, 26 September 2012

08/20/2012: TopSig – Signature Files Revisited

Speaker: Shlomo Geva, Queensland University of Technology, Australia
Time: Monday, August 20, 2012, 11:00 AM
Venue: AVW 2120

Abstract: Performance comparisons between File Signatures and Inverted Files for text retrieval have previously shown several significant shortcomings of file signatures relative to inverted files. The inverted file approach underpins most state-of-the-art search engine algorithms, such as Language and Probabilistic models. It has been widely accepted that traditional file signatures are inferior alternatives to inverted files. This paper describes TopSig, a modern approach to the construction of file signatures. Many advances in semantic hashing and dimensionality reduction have been made in recent times, but these have not so far been linked to general-purpose, signature-file-based search engines. This paper introduces a different signature file approach that builds upon and extends these recent advances. We are able to demonstrate significant improvements in the performance of signature-file-based indexing and retrieval, performance that is comparable to that of state-of-the-art inverted-file-based systems, including Language models and BM25. These findings suggest that file signatures offer a viable alternative to inverted files in suitable settings, and they position the file signature model in the class of Vector Space retrieval models. TopSig is an open-source search engine from QUT, and it can also be discussed if there is interest.
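To make the signature-file idea concrete, here is a minimal sketch of one common construction in this family (random-projection signatures compared by Hamming distance). This is an illustration of the general technique only, not TopSig's actual algorithm; the parameters and function names are hypothetical.

```python
import hashlib

SIG_BITS = 64  # illustrative signature width; real systems use much larger widths


def term_signature(term):
    """Derive a deterministic pseudo-random +/-1 vector for a term from its hash."""
    digest = hashlib.sha256(term.encode()).digest()
    bits = int.from_bytes(digest[:8], "big")
    return [1 if (bits >> i) & 1 else -1 for i in range(SIG_BITS)]


def document_signature(terms):
    """Superimpose the term vectors, then threshold each dimension to a bit."""
    acc = [0] * SIG_BITS
    for term in terms:
        for i, v in enumerate(term_signature(term)):
            acc[i] += v
    return tuple(1 if x > 0 else 0 for x in acc)


def hamming(sig_a, sig_b):
    """Distance between two signatures: the number of differing bits."""
    return sum(a != b for a, b in zip(sig_a, sig_b))
```

Retrieval then amounts to ranking stored signatures by Hamming distance to the query signature, which is why such indexes are compact and fast to scan.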

About the Speaker: Associate Professor Shlomo Geva is the discipline leader for Computational Intelligence and Signal Processing in the Computer Science Department at the Queensland University of Technology in Brisbane, Australia. His research interests include clustering, cross-language information retrieval, focused information retrieval, link discovery, and XML indexing.

Host: Doug Oard, oard@umd.edu

09/05/2012: 5 Minute Madness (Part I)

09/12/2012: 5 Minute Madness (Part II)

09/19/2012: CoB: Pairwise Similarity on Large Text Collections with MapReduce

Speaker: Earl Wagner, University of Maryland
Time: Wednesday, September 19, 2012, 11:00 AM
Venue: AVW 3258

Faced with high-volume information streams, intelligence analysts often rely on standing queries to retrieve materials that they need to see. Results of these queries are currently extended by effective and efficient probabilistic techniques that find similar, non-matching content. We discuss research looking further afield to find additional useful documents via MapReduce techniques performing rapid clustering of documents. This approach is intended to provide an improved “peripheral vision” to overcome some blind spots, yielding both immediate utility (detection of documents that otherwise would not have been found) and the potential for improvements to specific standing queries.
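The MapReduce-style pairwise similarity computation mentioned above can be illustrated with a toy in-memory sketch (not the actual system described in the talk): the map phase walks each term's postings list and emits a partial dot product for every pair of documents that share the term, and the reduce phase sums the partial scores per pair. All names here are hypothetical.

```python
from collections import defaultdict
from itertools import combinations


def build_index(docs):
    """Invert a {doc_id: [terms]} corpus into {term: {doc_id: term_frequency}}."""
    index = defaultdict(dict)
    for doc_id, terms in docs.items():
        for term in terms:
            index[term][doc_id] = index[term].get(doc_id, 0) + 1
    return index


def map_phase(inverted_index):
    """For each term's postings, emit a partial score for every co-occurring pair."""
    for term, postings in inverted_index.items():
        for (d1, w1), (d2, w2) in combinations(sorted(postings.items()), 2):
            yield (d1, d2), w1 * w2


def reduce_phase(pairs):
    """Sum partial scores per document pair, giving dot-product similarities."""
    scores = defaultdict(int)
    for key, value in pairs:
        scores[key] += value
    return dict(scores)
```

Because each term's postings list is processed independently, the map phase shards naturally across machines, which is what makes the approach attractive for the large collections the abstract has in mind.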

About the Speaker: Earl J. Wagner is a Postdoctoral Research Associate at the University of Maryland, College Park in the College of Information Studies (Maryland's iSchool). He was previously a Research Assistant at Northwestern University where he earned his Ph.D. in Computer Science.


09/26/2012: Better! Faster! Stronger (theorems)! Learning to Balance Accuracy and Efficiency when Predicting Linguistic Structures

Speaker: Hal Daume III, University of Maryland
Time: Wednesday, September 26, 2012, 11:00 AM
Venue: AVW 3258

Viewed abstractly, many classic problems in natural language processing can be cast as trying to map a complex input (e.g., a sequence of words) to a complex output (e.g., a syntax tree or semantic graph). This task is challenging both because language is ambiguous (learning difficulties) and because it is represented with discrete combinatorial structures (computational difficulties). I will describe my multi-pronged research effort to develop learning algorithms that explicitly learn to trade off accuracy and efficiency, applied to a variety of language processing phenomena. Moreover, I will show that in some cases, we can actually obtain a model that is faster and more accurate by exploiting smarter learning algorithms. And yes, those algorithms come with stronger theoretical guarantees too.

The key insight that makes this possible is a connection between the task of predicting structured objects (what I care about) and imitation learning (a subfield in robotics). This insight came about as a result of my work a few years ago, and has formed the backbone of much of my work since then. These connections have led other NLP and robotics researchers to make their own independent advances using many of these ideas.
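The structured-prediction-as-imitation-learning connection can be sketched in deliberately toy form: predict a sequence left to right with the current policy (so the policy visits its own states, including its own mistakes), query an expert for the correct action at each visited state, aggregate those examples, and retrain. This is only a schematic illustration of the general learning-to-search recipe, with a memorizing count-based "classifier" standing in for a real learner; all names are hypothetical.

```python
def train_by_imitation(sentences, gold_tags, rounds=3):
    """Toy imitation-learning loop for sequence tagging.

    The gold tags play the role of the expert. Each state is summarized by a
    tiny feature: (current word, previously *predicted* tag), so later rounds
    see states reached by the learned policy itself.
    """
    dataset = {}  # feature -> {expert tag: count}; the aggregated dataset

    def policy(feat):
        votes = dataset.get(feat)
        return max(votes, key=votes.get) if votes else "O"

    for _ in range(rounds):
        for words, tags in zip(sentences, gold_tags):
            prev = "<s>"
            for i, word in enumerate(words):
                feat = (word, prev)
                # Record the expert's action for the state the policy visited.
                dataset.setdefault(feat, {}).setdefault(tags[i], 0)
                dataset[feat][tags[i]] += 1
                # Roll in with the current policy to reach the next state.
                prev = policy(feat)
    return policy
```

The point of rolling in with the learned policy rather than the gold sequence is that the classifier gets training data for the states it will actually encounter at test time, which is the source of the stronger guarantees the abstract alludes to.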

At the end of the talk, I'll briefly survey some of my other contributions in the areas of domain adaptation and multilingual modeling, both of which also fall under the general rubric of "what goes wrong when I try to apply off-the-shelf machine learning models to real language processing problems?"