__NOTOC__
<center>[[Image:colloq.jpg|center|504px|x]]</center>

== CLIP Colloquium ==

The CLIP Colloquium is a weekly speaker series organized and hosted by the CLIP Lab. The talks are open to everyone. Most talks are held on Wednesdays at 11 AM online unless otherwise noted. Typically, external speakers have slots for one-on-one meetings with Maryland researchers; contact the host if you would like to set up a meeting.
If you would like to get on the clip-talks@umiacs.umd.edu list, or if you have other questions about the colloquium series, e-mail [mailto:rudinger@umd.edu Rachel Rudinger], the current organizer.

For up-to-date information, see the [https://talks.cs.umd.edu/lists/7 UMD CS Talks page]. (You can also subscribe to the calendar there.)
=== Colloquium Recordings ===
* [[Colloqium Recording (Fall 2020)|Fall 2020]]
* [[Colloqium Recording (Spring 2021)|Spring 2021]]
* [[Colloqium Recording (Fall 2021)|Fall 2021]]
* [[Colloqium Recording (Spring 2022)|Spring 2022]]
== CLIP News ==
* News about CLIP researchers on the [http://www.umiacs.umd.edu/about-us/news UMIACS website]
* Please follow us on Twitter: [https://twitter.com/ClipUmd?lang=en @ClipUmd]

== 10/10/2012: Beyond MaltParser - Advances in Transition-Based Dependency Parsing ==
'''Speaker:''' [http://stp.lingfil.uu.se/~nivre/ Joakim Nivre], Uppsala University / Google<br/>
'''Time:''' Wednesday, October 10, 2012, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>

The transition-based approach to dependency parsing has become popular thanks to its simplicity and efficiency. Systems like MaltParser achieve linear-time parsing with projective dependency trees using locally trained classifiers to predict the next parsing action and greedy best-first search to retrieve the optimal parse tree, assuming that the input sentence has been morphologically disambiguated using a part-of-speech tagger. In this talk, I survey recent developments in transition-based dependency parsing that address some of the limitations of the basic transition-based approach. First, I show how globally trained classifiers and beam search can be used to mitigate error propagation and enable richer feature representations. Secondly, I discuss different methods for extending the coverage to non-projective trees, which are required for linguistic adequacy in many languages. Finally, I present a model for joint tagging and parsing that leads to improvements in both tagging and parsing accuracy as compared to the standard pipeline approach.
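''Background illustration:'' for readers less familiar with the approach, the sketch below shows the kind of greedy, locally guided parsing loop the abstract takes as its starting point, using one common transition system (arc-standard). It is an illustration under simplifying assumptions, not MaltParser itself; the <code>predict</code> function is a hypothetical placeholder for the locally trained classifier.

<syntaxhighlight lang="python">
# Illustration only: a greedy arc-standard parsing loop in the spirit of the
# transition-based approach described above (this is not MaltParser's code).
# `predict` is a hypothetical stand-in for the locally trained classifier that
# chooses the next parsing action from the current configuration.

SHIFT, LEFT_ARC, RIGHT_ARC = "SHIFT", "LEFT-ARC", "RIGHT-ARC"

def predict(stack, buffer):
    """Placeholder policy; a real parser scores actions with a trained model."""
    if len(stack) < 2:
        return SHIFT
    return SHIFT if buffer else RIGHT_ARC

def parse(words, predict=predict):
    """Greedy parse over one sentence; returns (head, dependent) index pairs.
    Index 0 is an artificial ROOT token; words are indexed from 1."""
    stack, buffer, arcs = [0], list(range(1, len(words) + 1)), []
    while buffer or len(stack) > 1:
        action = predict(stack, buffer)
        if action == SHIFT and buffer:
            stack.append(buffer.pop(0))      # push the next input word
        elif action == LEFT_ARC and len(stack) > 2:
            dep = stack.pop(-2)              # second-topmost word becomes a dependent
            arcs.append((stack[-1], dep))
        elif action == RIGHT_ARC and len(stack) > 1:
            dep = stack.pop()                # topmost word becomes a dependent
            arcs.append((stack[-1], dep))
        elif buffer:                         # fall back to a legal action
            stack.append(buffer.pop(0))
        else:
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

print(parse(["Economic", "news", "had", "little", "effect"]))
# -> [(4, 5), (3, 4), (2, 3), (1, 2), (0, 1)] with the placeholder policy
</syntaxhighlight>

The talk's contributions target exactly the weak points of this loop: beam search with globally trained models in place of greedy local decisions, transition systems that cover non-projective trees, and folding the part-of-speech tagger into the same transition system.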
'''About the Speaker:''' Joakim Nivre is Professor of Computational Linguistics at Uppsala University and currently a visiting scientist at Google, New York. He holds a Ph.D. in General Linguistics from the University of Gothenburg and a Ph.D. in Computer Science from Växjö University. Joakim's research focuses on data-driven methods for natural language processing, in particular for syntactic and semantic analysis. He is one of the main developers of the transition-based approach to syntactic dependency parsing, described in his 2006 book Inductive Dependency Parsing and implemented in the MaltParser system. His current research interests include the analysis of mildly non-projective dependency structures, the integration of morphological and syntactic processing for richly inflected languages, and methods for cross-framework parser evaluation. He has produced over 150 scientific publications, including 3 books, and has given nearly 70 invited talks at conferences and institutions around the world. He is the current secretary of the European Chapter of the Association for Computational Linguistics.

'''Host:''' Hal Daume III, hal@umd.edu
| == 10/17/2012: Using Syntactic Head Information in Hierarchical Phrase-Based Translation ==
'''Speaker:''' Junhui Li<br/>
'''Time:''' Wednesday, October 17, 2012, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>

The traditional hierarchical phrase-based (HPB) model is prone to overgeneration due to a lack of linguistic knowledge: the grammar may suggest more derivations than appropriate, many of which may lead to ungrammatical translations. On the other hand, limitations of the glue grammar rules in the HPB model may actually prevent systems from considering some reasonable derivations. This talk presents a simple but effective translation model, called the Head-Driven HPB (HD-HPB) model, which incorporates head information in translation rules to better capture syntax-driven information in a derivation. In addition, unlike the original glue rules, the HD-HPB model allows improved reordering between any two neighboring non-terminals to explore a larger reordering search space. In experiments, we examined different head label sets to refine the non-terminal X, including part-of-speech (POS) tags, coarse POS tags, and dependency labels.
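''Background illustration:'' refining the generic non-terminal X can be pictured as attaching a label derived from the head word of the phrase the non-terminal covers. The sketch below is a hypothetical toy version of that labeling (the token fields, coarse-POS mapping, and label format are made up for illustration), not the labeling procedure of the HD-HPB system itself.

<syntaxhighlight lang="python">
# Toy illustration of refining the generic HPB non-terminal X with a label
# derived from a phrase's head word. The Token fields, the coarse-POS mapping,
# and the label format are hypothetical, not the HD-HPB system's actual scheme.

from dataclasses import dataclass

COARSE = {"NN": "N", "NNS": "N", "VBD": "V", "VB": "V", "JJ": "ADJ"}  # hypothetical mapping

@dataclass
class Token:
    word: str
    pos: str      # fine-grained POS tag of the head word
    deprel: str   # dependency label of the head word

def refined_label(head: Token, scheme: str) -> str:
    """Label the non-terminal spanning a phrase according to its head token."""
    if scheme == "pos":
        return f"X:{head.pos}"
    if scheme == "coarse":
        return f"X:{COARSE.get(head.pos, head.pos)}"
    if scheme == "dep":
        return f"X:{head.deprel}"
    return "X"    # plain HPB: one undifferentiated non-terminal

head = Token("effect", "NN", "dobj")
for scheme in ("plain", "pos", "coarse", "dep"):
    print(scheme, "->", refined_label(head, scheme))
</syntaxhighlight>

Labels of this kind let the grammar distinguish, for example, noun-headed from verb-headed phrases when matching rules and reordering, which is the intuition behind the refinements compared in the experiments.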
'''About the Speaker:''' Junhui Li joined the CLIP Lab as a post-doctoral researcher in August 2012. He was previously a post-doctoral researcher in the Centre for Next Generation Localisation (CNGL) at Dublin City University from February 2011 to July 2012. Before that, he was a student in the NLP Lab at Soochow University, China.
| == 10/23/2012: Bootstrapping via Graph Propagation ==
'''Speaker:''' [http://www.cs.sfu.ca/~anoop/ Anoop Sarkar], Simon Fraser University<br/>
'''Time:''' Tuesday, October 23, 2012, 2:00 PM<br/>
'''Venue:''' AVW 4172<br/>

'''Note special time and place!'''

In natural language processing, the bootstrapping algorithm introduced by David Yarowsky (15 years ago) is a discriminative unsupervised learning algorithm that uses some seed rules to bootstrap a classifier (this is the ordinary sense of bootstrapping, which is distinct from the Bootstrap in statistics). The Yarowsky algorithm works remarkably well on a wide variety of NLP classification tasks such as distinguishing between word senses and deciding if a noun phrase is an organization, location, or person.
Extending previous attempts at providing an objective function optimization view of Yarowsky, we show that bootstrapping a classifier from a small set of seed rules can be viewed as the propagation of labels between examples via features shared between them. This paper introduces a novel variant of the Yarowsky algorithm based on this view. It is a bootstrapping learning method which uses a graph propagation algorithm with a well-defined per-iteration objective function that incorporates the cautious behaviour of the original Yarowsky algorithm.
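''Background illustration:'' the label-propagation view can be caricatured as labels flowing from seeded examples to the features they contain and back to unlabeled examples, keeping only sufficiently confident decisions. The sketch below is a simplified, hypothetical version of that intuition (the threshold, data, and update rule are made up); it is not the algorithm or the per-iteration objective presented in the talk.

<syntaxhighlight lang="python">
# Simplified caricature of Yarowsky-style bootstrapping as label propagation
# between examples and the features they share. Seed rules label a few
# examples; labels then flow example -> feature -> example, keeping only
# confident ("cautious") decisions. Threshold and data are hypothetical.

from collections import Counter, defaultdict

def bootstrap(examples, seeds, threshold=0.6, iterations=5):
    """examples: list of feature sets; seeds: {example_index: label}."""
    labels = dict(seeds)
    for _ in range(iterations):
        # Step 1: each feature collects votes from currently labeled examples.
        feature_votes = defaultdict(Counter)
        for i, feats in enumerate(examples):
            if i in labels:
                for f in feats:
                    feature_votes[f][labels[i]] += 1

        # Step 2: cautiously label features whose vote distribution is skewed.
        feature_label = {}
        for f, votes in feature_votes.items():
            label, count = votes.most_common(1)[0]
            if count / sum(votes.values()) >= threshold:
                feature_label[f] = label

        # Step 3: propagate feature labels back to unlabeled examples.
        for i, feats in enumerate(examples):
            if i in labels:
                continue
            votes = Counter(feature_label[f] for f in feats if f in feature_label)
            if votes:
                label, count = votes.most_common(1)[0]
                if count / sum(votes.values()) >= threshold:
                    labels[i] = label
    return labels

examples = [{"bank", "river"}, {"bank", "loan"}, {"river", "water"}, {"loan", "interest"}]
print(bootstrap(examples, seeds={0: "NATURE", 1: "FINANCE"}))
# -> {0: 'NATURE', 1: 'FINANCE', 2: 'NATURE', 3: 'FINANCE'}
</syntaxhighlight>

In the talk's formulation, this cautious filtering is captured by a well-defined per-iteration objective rather than the ad hoc thresholding used here.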
The experimental results show that our proposed bootstrapping algorithm achieves state-of-the-art performance or better on several different natural language data sets, outperforming other unsupervised methods such as the EM algorithm. We show that cautious learning is an important principle in unsupervised learning, although we do not yet understand it well, and that the Yarowsky algorithm can outperform or match co-training without any reliance on multiple views.
'''About the Speaker:''' Anoop Sarkar is an Associate Professor at Simon Fraser University in British Columbia, Canada, where he co-directs the [http://natlang.cs.sfu.ca Natural Language Laboratory]. He received his Ph.D. from the Department of Computer and Information Sciences at the University of Pennsylvania under Prof. Aravind Joshi for his work on semi-supervised statistical parsing using tree-adjoining grammars.

His research is focused on statistical parsing and machine translation (exploiting syntax or morphology, semi-supervised learning, and domain adaptation). His interests also include formal language theory and stochastic grammars, in particular tree automata and tree-adjoining grammars.
| == 10/31/2012: Kilian Weinberger ==
== Previous Talks ==
* [https://talks.cs.umd.edu/lists/7?range=past Past talks, 2013 - present]
* [[CLIP Colloquium (Fall 2012)|Fall 2012]]
* [[CLIP Colloquium (Spring 2012)|Spring 2012]]
* [[CLIP Colloquium (Fall 2011)|Fall 2011]]
* [[CLIP Colloquium (Spring 2011)|Spring 2011]]
* [[CLIP Colloquium (Fall 2010)|Fall 2010]]