<center>[[Image:colloq.jpg|center|504px|x]]</center>

== CLIP Colloquium ==

The CLIP Colloquium is a weekly speaker series organized and hosted by the CLIP Lab. The talks are open to everyone. Most talks are held on Wednesdays at 11AM online unless otherwise noted. Typically, external speakers have slots for one-on-one meetings with Maryland researchers; contact the host if you would like to arrange a meeting.

If you would like to get on the clip-talks@umiacs.umd.edu list, or have other questions about the colloquium series, e-mail [mailto:rudinger@umd.edu Rachel Rudinger], the current organizer.

For up-to-date information, see the [https://talks.cs.umd.edu/lists/7 UMD CS Talks page]. (You can also subscribe to the calendar there.)

=== Colloquium Recordings ===
* [[Colloqium Recording (Fall 2020)|Fall 2020]]
* [[Colloqium Recording (Spring 2021)|Spring 2021]]
* [[Colloqium Recording (Fall 2021)|Fall 2021]]
* [[Colloqium Recording (Spring 2022)|Spring 2022]]

== 11/28/2012: New Machine Learning Tools for Structured Prediction ==

'''Speaker:''' [http://old-site.clsp.jhu.edu/~ves/ Veselin Stoyanov], Johns Hopkins University<br/>
'''Time:''' Wednesday, November 28, 2012, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>
I am motivated by structured prediction problems in NLP and social network analysis. Markov Random Fields (MRFs) and other Probabilistic Graphical Models (PGMs) are suitable for representing structured prediction: they can model joint distributions and utilize standard inference procedures. MRFs also provide a principled way of incorporating background knowledge and combining multiple systems.

Two properties of structured prediction problems make learning challenging. First, structured prediction almost inevitably requires approximations to inference, decoding, or model structure. Second, unlike the traditional ML setting that assumes i.i.d. training and test data, structured learning problems often consist of a single example used both for training and prediction.

We address the two issues above. First, we argue that the presence of approximations in MRF-based systems requires a novel perspective on training. Instead of maximizing data likelihood, one should seek the parameters that minimize the empirical risk of the entire imperfect system. We show how to locally optimize this risk using error back-propagation and local optimization. On four NLP problems our approach significantly reduces loss on test data compared to choosing approximate MAP parameters.
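
To make the "minimize the empirical risk of the entire imperfect system" idea concrete, here is a minimal sketch (not the speaker's implementation): a toy chain MRF with unrolled mean-field inference, where a task loss on the approximate marginals is back-propagated into the MRF parameters. The model, sizes, and the use of mean-field updates (rather than the loopy belief propagation used in this line of work) are illustrative assumptions.

<syntaxhighlight lang="python">
# Toy example: back-propagate empirical risk through approximate inference.
# All names, sizes, and the choice of mean-field updates are illustrative.
import torch

n, T = 6, 5                                     # chain length, inference iterations
unary = torch.randn(n, requires_grad=True)      # unary score for label 1 at each node
pairwise = torch.randn(1, requires_grad=True)   # shared neighbor-agreement score
gold = torch.tensor([1., 0., 1., 1., 0., 1.])   # gold labels for this one example

def mean_field(unary, pairwise, iters):
    q = torch.sigmoid(unary)                    # q[i] ~ P(y_i = 1)
    for _ in range(iters):                      # unrolled, hence differentiable
        left = torch.cat([q.new_zeros(1), q[:-1]])
        right = torch.cat([q[1:], q.new_zeros(1)])
        q = torch.sigmoid(unary + pairwise * (left + right))
    return q

opt = torch.optim.SGD([unary, pairwise], lr=0.1)
for step in range(200):
    opt.zero_grad()
    q = mean_field(unary, pairwise, T)
    loss = ((q - gold) ** 2).mean()             # risk of the whole approximate system
    loss.backward()                             # error back-propagation through inference
    opt.step()
</syntaxhighlight>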

Second, we utilize data imputation in the limited data setting. At test time we use sampling to impute data that is a more accurate approximation of the data distribution. We use our risk minimization techniques to train fast discriminative models on the imputed data. Thus we can: (i) train discriminative models given a single training and test example; (ii) train generative/discriminative hybrids that can incorporate useful priors and learn from semi-supervised data.

'''About the Speaker:''' Veselin Stoyanov is currently a postdoctoral researcher at the Human Language Technology Center of Excellence (HLT-COE) at Johns Hopkins University (JHU). He will be joining Facebook as a Research Scientist starting in January 2013. Previously he spent two years working with Prof. Jason Eisner at JHU's Center for Language and Speech Processing, supported by a Computing Innovation Postdoctoral Fellowship. He received his Ph.D. from Cornell University under the supervision of Prof. Claire Cardie in 2009 and his Honors B.Sc. from the University of Delaware in 2002. His research interests lie at the intersection of Machine Learning and Computational Linguistics. More precisely, he is interested in using probabilistic models for complex structured problems, with applications to knowledge base population, modeling social networks, extracting information from text, and coreference resolution. In addition to the CI Fellowship, Ves Stoyanov is the recipient of an NSF Graduate Research Fellowship and other academic honors.

== 12/05/2012: Combining Statistical Translation Techniques for Cross-Language Information Retrieval ==

'''Speaker:''' [http://www.cs.umd.edu/~fture/Home.html Ferhan Ture], University of Maryland<br/>
'''Time:''' Wednesday, December 5, 2012, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>
Cross-language information retrieval today is dominated by techniques that rely principally on context-independent token-to-token mappings, despite the fact that state-of-the-art statistical machine translation systems now have far richer translation models available in their internal representations. This paper explores combination-of-evidence techniques using three types of statistical translation models: context-independent token translation, token translation using phrase-dependent contexts, and token translation using sentence-dependent contexts. Context-independent translation is performed using statistically aligned tokens in parallel text, phrase-dependent translation is performed using aligned statistical phrases, and sentence-dependent translation is performed using those same aligned phrases together with an n-gram language model. Experiments on retrieval of Arabic, Chinese, and French documents using English queries show that no one technique is optimal for all queries, but that statistically significant improvements in mean average precision over strong baselines can be achieved by combining translation evidence from all three techniques. The optimal combination is, however, found to be resource-dependent, indicating a need for future work on robust tuning to the characteristics of individual collections.

This is a practice talk for COLING 2012.
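
To illustrate the combination-of-evidence idea, here is a minimal sketch (not the system described in the paper): per-term translation distributions from three hypothetical models are linearly interpolated, with weights assumed to be tuned per collection as the abstract suggests. All terms, probabilities, and weights below are invented.

<syntaxhighlight lang="python">
# Illustrative only: combine P(f|e) estimates from three translation models.
from collections import defaultdict

def combine_evidence(models, weights):
    """Linearly interpolate per-term translation distributions."""
    combined = defaultdict(float)
    for dist, w in zip(models, weights):
        for target, prob in dist.items():
            combined[target] += w * prob
    return dict(combined)

# Hypothetical distributions for one English query term, e.g. "bank" -> French
token_level    = {"banque": 0.60, "rive": 0.30, "banc": 0.10}  # word alignments
phrase_level   = {"banque": 0.80, "rive": 0.20}                # phrase-table context
sentence_level = {"banque": 0.70, "rive": 0.25, "banc": 0.05}  # full-sentence n-best

weights = (0.3, 0.4, 0.3)  # resource-dependent; would be tuned per collection
print(combine_evidence([token_level, phrase_level, sentence_level], weights))
</syntaxhighlight>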

== 01/30/2013: Human Translation and Machine Translation ==

'''Speaker:''' [http://homepages.inf.ed.ac.uk/pkoehn/ Philipp Koehn], University of Edinburgh<br/>
'''Time:''' Wednesday, January 30, 2013, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>
Despite all the recent successes of machine translation, when it comes to high-quality publishable translation, human translators are still unchallenged. Since we can't beat them, can we help them become more productive? I will talk about some recent work on developing assistance tools for human translators. You can also check out a prototype [http://www.caitra.org/ here] and learn about our ongoing European projects [http://www.casmacat.eu/ CASMACAT] and [http://www.matecat.com/ MATECAT].

'''About the Speaker:''' Philipp Koehn is Professor of Machine Translation at the School of Informatics at the University of Edinburgh, Scotland. He received his PhD at the University of Southern California and spent a year as a postdoctoral researcher at MIT. He is well known in the field of statistical machine translation for the leading open source toolkit Moses, and for organizing the annual Workshop on Statistical Machine Translation and its evaluation campaign as well as the Machine Translation Marathon. He is founding president of the ACL SIG MT and currently serves as vice president-elect of the ACL SIG DAT. He has published over 80 papers and authored the textbook in the field. He manages a number of EU- and DARPA-funded research projects aimed at morpho-syntactic models, machine learning methods, and computer-assisted translation tools.

== 02/13/2013: Mona Diab ==

'''Speaker:''' [http://www1.ccls.columbia.edu/~mdiab/ Mona Diab], Columbia University<br/>
'''Time:''' Wednesday, February 13, 2013, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>

== 03/13/2013: Dan Hopkins ==

== 04/10/2013: Learning with Marginalized Corrupted Features ==

'''Speaker:''' [http://www.cse.wustl.edu/~kilian/ Kilian Weinberger], Washington University in St. Louis<br/>
'''Time:''' Wednesday, April 10, 2013, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>
If infinite amounts of labeled data are provided, many machine learning algorithms become perfect. With finite amounts of data, regularization or priors have to be used to introduce bias into a classifier. We propose a third option: learning with marginalized corrupted features. We corrupt existing data as a means to generate infinitely many additional training samples from a slightly different data distribution, explicitly in a way that the corruption can be marginalized out in closed form. This leads to machine learning algorithms that are fast, effective, and naturally scale to very large data sets. We showcase this technology in two settings: (1) to learn text document representations from unlabeled data, and (2) to perform supervised learning with closed-form gradient updates for empirical risk minimization.

Text documents (and often images) are traditionally expressed as bag-of-words feature vectors (e.g. as tf-idf). By training linear denoisers that recover unlabeled data from partial corruption, we can learn new data-specific representations. With these, we can match the world-record accuracy on the Amazon transfer learning benchmark with a simple linear classifier. In comparison with the record holder (stacked denoising autoencoders), our approach shrinks the training time from several days to a few minutes.
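
The linear-denoiser step can be sketched as below, in the spirit of marginalized denoising autoencoders; this is not the record-setting stacked configuration, and the dimensions, corruption level, and variable names are invented. Under blankout (feature-dropout) corruption, the expected least-squares reconstruction has a closed form, so no corrupted copies ever need to be materialized.

<syntaxhighlight lang="python">
# Sketch of a single marginalized linear denoiser; all settings are illustrative.
import numpy as np

def marginalized_denoiser(X, p):
    """X: d x n data matrix (e.g. tf-idf columns), p: blankout probability."""
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])        # append a bias row (never corrupted)
    S = Xb @ Xb.T                               # scatter matrix
    q = np.full(d + 1, 1.0 - p); q[-1] = 1.0    # survival probability per feature
    EQ = S * np.outer(q, q)                     # E[corrupted scatter], off-diagonal
    np.fill_diagonal(EQ, q * np.diag(S))        # diagonal scales by q_i, not q_i^2
    EP = S[:d, :] * q                           # E[X * corrupted X^T], clean outputs
    return EP @ np.linalg.pinv(EQ)              # closed-form least-squares denoiser

rng = np.random.default_rng(0)
X = rng.random((50, 200))                       # 50 features, 200 unlabeled documents
W = marginalized_denoiser(X, p=0.5)
H = np.tanh(W @ np.vstack([X, np.ones((1, X.shape[1]))]))  # new representation
print(H.shape)                                  # (50, 200)
</syntaxhighlight>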

Finally, we present a variety of loss functions and corrupting distributions which can be applied out-of-the-box with empirical risk minimization. We show that our formulation leads to significant improvements in document classification tasks over the typically used l_p-norm regularization. The new learning framework is extremely versatile, generalizes better, is more stable at test time (under distribution drift), and adds only a few lines of code to typical risk minimization.

'''About the Speaker:''' Kilian Q. Weinberger is an Assistant Professor in the Department of Computer Science & Engineering at Washington University in St. Louis. He received his Ph.D. in Machine Learning from the University of Pennsylvania under the supervision of Lawrence Saul. Prior to this, he obtained his undergraduate degree in Mathematics and Computer Science at the University of Oxford. During his career he has won several best paper awards at ICML, CVPR, and AISTATS. In 2011 he was awarded the AAAI senior program chair award, and in 2012 he received the NSF CAREER award. Kilian Weinberger's research is in Machine Learning and its applications. In particular, he focuses on high-dimensional data analysis, metric learning, machine-learned web-search ranking, transfer and multi-task learning, as well as biomedical applications.

== Previous Talks ==
* [https://talks.cs.umd.edu/lists/7?range=past Past talks, 2013 - present]
* [[CLIP Colloquium (Fall 2012)|Fall 2012]]
* [[CLIP Colloquium (Spring 2012)|Spring 2012]]
* [[CLIP Colloquium (Fall 2011)|Fall 2011]]
* [[CLIP Colloquium (Spring 2011)|Spring 2011]]
* [[CLIP Colloquium (Fall 2010)|Fall 2010]]

== CLIP NEWS ==
* News about CLIP researchers on the [http://www.umiacs.umd.edu/about-us/news UMIACS website]
* Please follow us on Twitter: [https://twitter.com/ClipUmd?lang=en @ClipUmd]