<center>[[Image:colloq.jpg|center|504px|x]]</center>
  
== CLIP Colloquium ==
  
The CLIP Colloquium is a weekly speaker series organized and hosted by CLIP Lab. The talks are open to everyone. Most talks are held on Wednesday at 11AM online unless otherwise noted. Typically, external speakers have slots for one-on-one meetings with Maryland researchers.
  
If you would like to get on the clip-talks@umiacs.umd.edu list, or if you have other questions about the colloquium series, e-mail [mailto:aiwei@umiacs.umd.edu Wei Ai], the current organizer.
 
  
__NOTOC__
For up-to-date information, see the [https://talks.cs.umd.edu/lists/7 UMD CS Talks page].  (You can also subscribe to the calendar there.)
== 09/19/2012: CoB: Pairwise Similarity on Large Text Collections with MapReduce ==
 
'''Speaker:''' Earl Wagner, University of Maryland<br/>
'''Time:''' Wednesday, September 19, 2012, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>
 
  
Faced with high-volume information streams, intelligence analysts often rely on standing queries to retrieve materials they need to see. Results of these queries are currently extended by effective and efficient probabilistic techniques that find similar, non-matching content. We discuss research that looks further afield for additional useful documents, using MapReduce techniques to rapidly cluster documents. This approach is intended to provide improved “peripheral vision” that overcomes some blind spots, yielding both immediate utility (detection of documents that would otherwise not have been found) and the potential for improvements to specific standing queries.
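For readers curious how pairwise similarity scales under MapReduce, here is a minimal single-machine sketch of the usual inverted-index decomposition: the map phase emits term postings, and the reduce phase sums partial dot products for every document pair that shares a term. It illustrates the general technique only, not the speaker's system; the mini-corpus and weights are invented.

<syntaxhighlight lang="python">
# Toy sketch of MapReduce-style pairwise similarity (invented data):
# map emits term -> (doc, weight) postings; reduce sums partial products
# over each term's postings, yielding dot-product similarities per pair.
from collections import defaultdict
from itertools import combinations

def map_phase(docs):
    """Emit an inverted index: term -> list of (doc_id, term_count)."""
    postings = defaultdict(list)
    for doc_id, terms in docs.items():
        counts = defaultdict(int)
        for t in terms:
            counts[t] += 1
        for term, weight in counts.items():
            postings[term].append((doc_id, weight))
    return postings

def reduce_phase(postings):
    """For each term, emit a partial score for every pair of documents
    containing it; the per-pair sums are the final similarity scores."""
    sims = defaultdict(float)
    for plist in postings.values():
        for (d1, w1), (d2, w2) in combinations(sorted(plist), 2):
            sims[(d1, d2)] += w1 * w2
    return sims

docs = {  # invented mini-corpus
    "d1": ["standing", "query", "stream"],
    "d2": ["query", "stream", "cluster"],
    "d3": ["cluster", "stream", "vision"],
}
for pair, score in sorted(reduce_phase(map_phase(docs)).items()):
    print(pair, score)
</syntaxhighlight>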
  
'''About the Speaker:''' Earl J. Wagner is a Postdoctoral Research Associate at the University of Maryland, College Park in the College of Information Studies (Maryland's iSchool). He was previously a Research Assistant at Northwestern University where he earned his Ph.D. in Computer Science.
  
== 09/26/2012: Better! Faster! Stronger (theorems)! Learning to Balance Accuracy and Efficiency when Predicting Linguistic Structures ==
'''Speaker:''' Hal Daume III, University of Maryland<br/>
'''Time:''' Wednesday, September 26, 2012, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>
 
 
 
Viewed abstractly, many classic problems in natural language processing can be cast as trying to map a complex input (e.g., a sequence of words) to a complex output (e.g., a syntax tree or semantic graph). This task is challenging both because language is ambiguous (learning difficulties) and because it is represented with discrete combinatorial structures (computational difficulties). I will describe my multi-pronged research effort to develop learning algorithms that explicitly learn to trade off accuracy and efficiency, applied to a variety of language processing phenomena. Moreover, I will show that in some cases we can actually obtain a model that is faster and more accurate by exploiting smarter learning algorithms. And yes, those algorithms come with stronger theoretical guarantees too.

The key insight that makes this possible is a connection between the task of predicting structured objects (what I care about) and imitation learning (a subfield of robotics). This insight came about as a result of my work a few years ago and has formed the backbone of much of my work since then. These connections have led other NLP and robotics researchers to make their own independent advances using many of these ideas.

At the end of the talk, I'll briefly survey some of my other contributions in the areas of domain adaptation and multilingual modeling, both of which also fall under the general rubric of "what goes wrong when I try to apply off-the-shelf machine learning models to real language processing problems?"
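To make the structured-prediction-as-imitation-learning connection concrete, here is a deliberately tiny sketch in the spirit of SEARN/DAgger-style methods (all data invented; this is not the speaker's algorithm): a greedy sequence labeler is treated as a policy that picks one tag at a time and is trained on states reached by its own predictions, querying the gold tag as the expert at each state.

<syntaxhighlight lang="python">
# Toy sketch: a greedy tagger as a "policy" trained by imitation.
# Hypothetical task: tag each token "X" or "Y" (gold tag = first letter).
sentences = [["xray", "yam", "xenon"], ["yurt", "xylophone"]]
gold = [["X", "Y", "X"], ["Y", "X"]]

def features(tokens, i, prev_tag):
    return {("first", tokens[i][0]), ("prev", prev_tag)}

weights = {}  # feature -> score for tag "X" (binary task)

def predict(feats):
    return "X" if sum(weights.get(f, 0.0) for f in feats) >= 0 else "Y"

for epoch in range(5):
    for tokens, tags in zip(sentences, gold):
        prev = "<s>"
        for i in range(len(tokens)):
            feats = features(tokens, i, prev)
            guess = predict(feats)
            if guess != tags[i]:              # perceptron update toward expert
                delta = 1.0 if tags[i] == "X" else -1.0
                for f in feats:
                    weights[f] = weights.get(f, 0.0) + delta
            prev = guess  # condition on our OWN prediction, not the gold tag

print(predict(features(["xenia"], 0, "<s>")))  # learned to tag x-words "X"
</syntaxhighlight>

Conditioning on the policy's own previous predictions, rather than the gold history, is what distinguishes the imitation-learning view from ordinary supervised training of each decision.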
 
 
 
== 10/03/2012: Shay Cohen ==
 
 
 
In the past few years, there has been increased interest in the machine learning community in spectral algorithms for estimating models with latent variables. Examples include algorithms for estimating mixtures of Gaussians or for estimating the parameters of a hidden Markov model.

The EM algorithm has been the mainstay for estimation with latent variables, but because it is not guaranteed to converge to a global maximum of the likelihood, it is not a consistent estimator. Spectral algorithms, on the other hand, are often shown to be consistent.

In this talk, I present a spectral algorithm for latent-variable PCFGs, a model widely used in the NLP community for parsing. This model, originally introduced by Matsuzaki et al. (2005), augments the nonterminals in an underlying PCFG with latent states. These latent states refine the nonterminal categories in order to capture subtle syntactic nuances in the data. The model has been successfully implemented in state-of-the-art parsers such as the Berkeley parser (Petrov et al., 2006).

Our spectral algorithm for latent-variable PCFGs is based on a novel tensor formulation designed for inference with PCFGs. This tensor formulation yields an "observable operator model" for PCFGs which can be readily used for spectral estimation.

The algorithm we developed is considerably faster than EM and makes only one pass over the data. Statistics are collected from the data in this pass, and singular value decomposition is performed on matrices containing these statistics. Our algorithm is also provably consistent, in the sense that, given enough samples, it will estimate probabilities for test trees that are close to their true probabilities under the latent-variable PCFG model.

If time permits, I will also present a method to improve the efficiency of parsing with latent-variable PCFGs. This method relies on a tensor decomposition of the latent-variable PCFG. The tensor decomposition is approximate, and therefore the new parser is an approximate parser as well. Still, the quality of the approximation can be guaranteed theoretically by inspecting how errors from the approximation propagate through the parse trees.
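The full latent-variable PCFG estimator is involved, but the computational skeleton shared by spectral methods is simple: one pass over the data to collect observable statistics, then an SVD of the resulting matrix to recover a low-rank latent space. The sketch below shows only that shared skeleton on an invented corpus; it is not the algorithm from the talk.

<syntaxhighlight lang="python">
# Minimal sketch of the spectral-estimation primitive: one pass collecting
# co-occurrence statistics, then a truncated SVD. Corpus and rank are toy
# assumptions; this is the first step of such algorithms, not an L-PCFG.
import numpy as np

corpus = [["the", "dog", "barks"], ["the", "cat", "sleeps"],
          ["a", "dog", "sleeps"], ["a", "cat", "barks"]]
vocab = sorted({w for s in corpus for w in s})
idx = {w: i for i, w in enumerate(vocab)}

# One pass: counts of adjacent word pairs (an "observable" statistic).
C = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for w, v in zip(sent, sent[1:]):
        C[idx[w], idx[v]] += 1.0
C /= C.sum()  # joint probability estimates

# Truncated SVD: the top-k singular vectors span a latent-state subspace.
U, S, Vt = np.linalg.svd(C)
k = 2
print("top singular values:", S[:k])
print("latent embedding of 'dog':", U[idx["dog"], :k])
</syntaxhighlight>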
 
 
 
== 10/10/2012: Beyond MaltParser - Advances in Transition-Based Dependency Parsing ==
 
'''Speaker:''' [http://stp.lingfil.uu.se/~nivre/ Joakim Nivre], Uppsala University / Google<br/>
'''Time:''' Wednesday, October 10, 2012, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>
 
 
 
The transition-based approach to dependency parsing has become popular thanks to its simplicity and efficiency. Systems like MaltParser achieve linear-time parsing with projective dependency trees, using locally trained classifiers to predict the next parsing action and greedy best-first search to retrieve the optimal parse tree, assuming that the input sentence has been morphologically disambiguated using a part-of-speech tagger. In this talk, I survey recent developments in transition-based dependency parsing that address some of the limitations of the basic transition-based approach. First, I show how globally trained classifiers and beam search can be used to mitigate error propagation and enable richer feature representations. Secondly, I discuss different methods for extending the coverage to non-projective trees, which are required for linguistic adequacy in many languages. Finally, I present a model for joint tagging and parsing that leads to improvements in both tagging and parsing accuracy as compared to the standard pipeline approach.
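As background for readers new to the approach, here is a minimal sketch of an arc-standard transition system of the kind underlying MaltParser-style greedy parsing. The classifier is stubbed out with a fixed action sequence, and the sentence and tree are invented; a real parser predicts each action from stack/buffer features.

<syntaxhighlight lang="python">
# Minimal arc-standard transition system: a stack, a buffer, three actions.

def parse(n_tokens, actions):
    stack, buf, arcs = [0], list(range(1, n_tokens + 1)), []
    for a in actions:
        if a == "SHIFT":                 # move next buffer word onto stack
            stack.append(buf.pop(0))
        elif a == "LEFT-ARC":            # second-from-top depends on top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        elif a == "RIGHT-ARC":           # top depends on the word below it
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs  # (head, dependent) pairs; 0 is the artificial root

# "the dog barks": the <- dog <- barks <- ROOT
print(parse(3, ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "LEFT-ARC", "RIGHT-ARC"]))
# -> [(2, 1), (3, 2), (0, 3)]
</syntaxhighlight>

Each token is shifted exactly once and reduced exactly once, which is where the linear-time guarantee mentioned in the abstract comes from.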
 
 
 
'''About the Speaker:''' Joakim Nivre is Professor of Computational Linguistics at Uppsala University and currently a visiting scientist at Google, New York. He holds a Ph.D. in General Linguistics from the University of Gothenburg and a Ph.D. in Computer Science from Växjö University. Joakim's research focuses on data-driven methods for natural language processing, in particular for syntactic and semantic analysis. He is one of the main developers of the transition-based approach to syntactic dependency parsing, described in his 2006 book ''Inductive Dependency Parsing'' and implemented in the MaltParser system. Joakim's current research interests include the analysis of mildly non-projective dependency structures, the integration of morphological and syntactic processing for richly inflected languages, and methods for cross-framework parser evaluation. He has produced over 150 scientific publications, including 3 books, and has given nearly 70 invited talks at conferences and institutions around the world. He is the current secretary of the European Chapter of the Association for Computational Linguistics.
 
 
 
'''Host:''' Hal Daume III, hal@umd.edu
 
 
 
== 10/23/2012: Bootstrapping via Graph Propagation ==
 
 
 
'''Speaker:''' [http://www.cs.sfu.ca/~anoop/ Anoop Sarkar], Simon Fraser University<br/>
'''Time:''' Tuesday, October 23, 2012, 2:00 PM<br/>
'''Venue:''' AVW 4172<br/>

'''Note special time and place!!!'''
 
 
 
In natural language processing, the bootstrapping algorithm introduced by David Yarowsky (15 years ago) is a discriminative unsupervised learning algorithm that uses some seed rules to bootstrap a classifier (this is the ordinary sense of bootstrapping, which is distinct from the bootstrap in statistics). The Yarowsky algorithm works remarkably well on a wide variety of NLP classification tasks, such as distinguishing between word senses and deciding whether a noun phrase is an organization, location, or person.

Extending previous attempts at providing an objective-function optimization view of Yarowsky, we show that bootstrapping a classifier from a small set of seed rules can be viewed as the propagation of labels between examples via features shared between them. This work introduces a novel variant of the Yarowsky algorithm based on this view. It is a bootstrapping learning method which uses a graph propagation algorithm with a well-defined per-iteration objective function that incorporates the cautious behaviour of the original Yarowsky algorithm.

The experimental results show that our proposed bootstrapping algorithm achieves state-of-the-art performance or better on several different natural language data sets, outperforming other unsupervised methods such as the EM algorithm. We show that cautious learning is an important principle in unsupervised learning, although we do not yet understand it well, and we show that the Yarowsky algorithm can outperform or match co-training without any reliance on multiple views.
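The label-propagation view can be illustrated on a toy word-sense example. In the sketch below (invented data; a simplification, not the per-iteration objective from the talk), seed-labeled examples and their features form a bipartite graph: examples vote for their features, features vote back for unlabeled examples, and an example is relabeled only when a clear winner emerges, mimicking the cautious behaviour described above.

<syntaxhighlight lang="python">
# Toy sketch of Yarowsky-style bootstrapping as bipartite label propagation.
from collections import defaultdict

examples = {                               # example -> features it contains
    "e1": {"bank", "river"},
    "e2": {"bank", "money", "loan"},
    "e3": {"river", "water"},
    "e4": {"money", "loan"},
}
labels = {"e1": "RIVER", "e4": "FINANCE"}  # seed rules

for _ in range(5):                         # a few propagation iterations
    votes = defaultdict(lambda: defaultdict(int))
    for ex, lab in labels.items():         # examples vote for their features
        for f in examples[ex]:
            votes[f][lab] += 1
    for ex, feats in examples.items():     # features vote for their examples
        if ex in {"e1", "e4"}:             # never overwrite the seeds
            continue
        score = defaultdict(int)
        for f in feats:
            for lab, c in votes[f].items():
                score[lab] += c
        ranked = sorted(score.items(), key=lambda kv: -kv[1])
        if ranked and (len(ranked) == 1 or ranked[0][1] > ranked[1][1]):
            labels[ex] = ranked[0][0]      # cautious: only clear winners

print(labels)  # e2 joins FINANCE, e3 joins RIVER
</syntaxhighlight>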
 
 
 
'''About the Speaker:''' Anoop Sarkar is an Associate Professor at Simon Fraser University in British Columbia, Canada, where he co-directs the [http://natlang.cs.sfu.ca Natural Language Laboratory]. He received his Ph.D. from the Department of Computer and Information Sciences at the University of Pennsylvania under Prof. Aravind Joshi for his work on semi-supervised statistical parsing using tree-adjoining grammars.

His research is focused on statistical parsing and machine translation (exploiting syntax or morphology, semi-supervised learning, and domain adaptation). His interests also include formal language theory and stochastic grammars, in particular tree automata and tree-adjoining grammars.
 
 
 
== 10/31/2012: Kilian Weinberger ==
 
 
 
== Colloquium Recordings ==
* [[Colloqium Recording (Fall 2020)|Fall 2020]]
* [[Colloqium Recording (Spring 2021)|Spring 2021]]
* [[Colloqium Recording (Fall 2021)|Fall 2021]]
* [[Colloqium Recording (Spring 2022)|Spring 2022]]

== Previous Talks ==
* [https://talks.cs.umd.edu/lists/7?range=past Past talks, 2013 - present]
* [[CLIP Colloquium (Fall 2012)|Fall 2012]]
* [[CLIP Colloquium (Spring 2012)|Spring 2012]]
* [[CLIP Colloquium (Fall 2011)|Fall 2011]]
* [[CLIP Colloquium (Spring 2011)|Spring 2011]]
* [[CLIP Colloquium (Fall 2010)|Fall 2010]]

== CLIP NEWS ==
* [http://www.umiacs.umd.edu/about-us/news News about CLIP researchers on the UMIACS website]
* Please follow us on Twitter: [https://twitter.com/ClipUmd?lang=en @ClipUmd]
 
