<center>[[Image:colloq.jpg|center|504px|x]]</center>

== CLIP Colloquium ==
The CLIP Colloquium is a weekly speaker series organized and hosted by the CLIP Lab. The talks are open to everyone. Most talks are held on Wednesdays at 11 AM online unless otherwise noted. Typically, external speakers have slots for one-on-one meetings with Maryland researchers; contact the host if you'd like to arrange a meeting.

If you would like to get on the clip-talks@umiacs.umd.edu list or have other questions about the colloquium series, e-mail [mailto:rudinger@umd.edu Rachel Rudinger], the current organizer.

For up-to-date information, see the [https://talks.cs.umd.edu/lists/7 UMD CS Talks page]. (You can also subscribe to the calendar there.)
=== Colloquium Recordings ===
* [[Colloqium Recording (Fall 2020)|Fall 2020]]
* [[Colloqium Recording (Spring 2021)|Spring 2021]]
* [[Colloqium Recording (Fall 2021)|Fall 2021]]
* [[Colloqium Recording (Spring 2022)|Spring 2022]]

=== Previous Talks ===
* [https://talks.cs.umd.edu/lists/7?range=past Past talks, 2013 - present]
* [[CLIP Colloquium (Fall 2012)|Fall 2012]] [[CLIP Colloquium (Spring 2012)|Spring 2012]] [[CLIP Colloquium (Fall 2011)|Fall 2011]] [[CLIP Colloquium (Spring 2011)|Spring 2011]] [[CLIP Colloquium (Fall 2010)|Fall 2010]]
|
== CLIP NEWS ==
* News about CLIP researchers on the [http://www.umiacs.umd.edu/about-us/news UMIACS website]
* Please follow us on Twitter: [https://twitter.com/ClipUmd?lang=en @ClipUmd]

== 01/30/2013: Human Translation and Machine Translation ==
'''Speaker:''' [http://homepages.inf.ed.ac.uk/pkoehn/ Philipp Koehn], University of Edinburgh<br/>
'''Time:''' Wednesday, January 30, 2013, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>

Despite all the recent successes of machine translation, when it comes to high-quality publishable translation, human translators are still unchallenged. Since we can't beat them, can we help them become more productive? I will talk about some recent work on developing assistance tools for human translators. You can also check out a prototype [http://www.caitra.org/ here] and learn about our ongoing European projects [http://www.casmacat.eu/ CASMACAT] and [http://www.matecat.com/ MATECAT].

'''About the Speaker:''' Philipp Koehn is Professor of Machine Translation at the School of Informatics at the University of Edinburgh, Scotland. He received his PhD from the University of Southern California and spent a year as a postdoctoral researcher at MIT. He is well known in the field of statistical machine translation for the leading open-source toolkit Moses, the organization of the annual Workshop on Statistical Machine Translation and its evaluation campaign, and the Machine Translation Marathon. He is founding president of the ACL SIG MT and currently serves as vice president-elect of the ACL SIG DAT. He has published over 80 papers and the textbook in the field. He manages a number of EU- and DARPA-funded research projects on morpho-syntactic models, machine learning methods, and computer-assisted translation tools.
== 02/06/2013: Chong Wang: A New Recommender System for Large-scale Document Exploration ==

How can we help people quickly navigate the vast amount of data and acquire useful knowledge from it? Recommender systems provide a promising solution to this problem. They narrow down the search space by providing a few recommendations that are tailored to users' personal preferences. However, these systems usually work like a black box, limiting further opportunities to provide more exploratory experiences to their users.

In this talk, I will describe how we build a new recommender system for document exploration. Specifically, I will talk about two building blocks of the system in detail. The first is a new probabilistic model for document recommendation that is both predictive and interpretable. It not only gives better predictive performance, but also provides better transparency than traditional approaches. This transparency creates many new opportunities for exploratory analysis: for example, a user can manually adjust her preferences, and the system responds to this by changing its recommendations. Second, building a recommender system like this requires learning the probabilistic model from large-scale empirical data. I will describe a scalable approach for learning a wide class of probabilistic models that includes our recommendation model as a special case.

'''About the Speaker:''' Chong is a Project Scientist in Eric Xing's group in the Machine Learning Department at Carnegie Mellon University. His PhD advisor was David M. Blei of Princeton University.
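
The abstract does not spell out the model, but the interaction it describes (interpretable preferences that a user can edit by hand, with recommendations that respond) can be illustrated with a toy topic-space recommender. This is a minimal sketch; the topic names, the dot-product scoring rule, and all identifiers below are illustrative assumptions, not the talk's actual system:

<syntaxhighlight lang="python">
import numpy as np

# Three interpretable topics; each document is a mixture over them.
topics = ["sports", "politics", "ml"]
doc_topics = np.array([
    [0.8, 0.1, 0.1],   # doc 0: mostly sports
    [0.1, 0.7, 0.2],   # doc 1: mostly politics
    [0.1, 0.2, 0.7],   # doc 2: mostly machine learning
])
# The user's preferences live in the same topic space, so they are
# directly inspectable and editable -- this is the "transparency" part.
user_prefs = np.array([0.2, 0.3, 0.5])

def recommend(prefs, k=2):
    """Rank documents by how well their topic mixture matches the prefs."""
    scores = doc_topics @ prefs
    return np.argsort(-scores)[:k]

print(recommend(user_prefs))              # [2 1]: the ML doc ranks first
user_prefs[topics.index("sports")] = 0.9  # user manually boosts "sports"
print(recommend(user_prefs))              # ranking responds: [0 2]
</syntaxhighlight>
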
== 02/13/2013: Mona Diab ==

'''Speaker:''' [http://www1.ccls.columbia.edu/~mdiab/ Mona Diab], Columbia University<br/>
'''Time:''' Wednesday, February 13, 2013, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>
== 02/14/2013: Efficient Probabilistic Models for Rankings and Orderings ==

'''Speaker:''' [http://stanford.edu/~jhuang11/ Jon Huang], Stanford University<br/>
'''Time:''' Thursday, February 14, 2013, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>

The need to reason probabilistically with rankings and orderings arises in a number of real-world problems. Probability distributions over rankings and orderings arise naturally, for example, in preference data and political election data, as well as in a number of less obvious settings such as topic analysis and neurodegenerative disease progression modeling. Representing distributions over the space of all rankings is challenging, however, due to the factorial number of ways to rank a collection of items. The focus of my talk is on methods for combating this factorial explosion in probabilistic representation and inference.

A typical machine learning method for dealing with combinatorial complexity is to exploit conditional independence relations in order to decompose a distribution into the compact factors of a graphical model. For ranked data, however, a far more natural and useful probabilistic relation is that of "riffled independence". I will introduce the concept of riffled independence and discuss how riffle-independent relations can be used to decompose a distribution over rankings into a product of compactly represented factors. These so-called hierarchical riffle-independent distributions are particularly amenable to efficient inference and learning algorithms and in many cases lead to intuitively interpretable probabilistic models. To illustrate the power of exploiting riffled independence, I will discuss a few applications, including Irish political election analysis, visualizing Japanese preferences over sushi types, and modeling the progression of Alzheimer's disease, showing results on real datasets for each problem.

This is joint work with Carlos Guestrin (University of Washington), Ashish Kapoor (Microsoft Research) and Daniel Alexander (University College London).
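
For a rough intuition of riffled independence, here is a minimal sketch of sampling a joint ranking: rank two item sets independently, then interleave them like a riffle shuffle of two decks of cards. The uniform interleaving below is an illustrative assumption; the hierarchical models in the talk use learned interleaving distributions:

<syntaxhighlight lang="python">
import random

def riffle_sample(items_a, items_b):
    """Draw one ranking of A ∪ B: rank each set independently, then riffle."""
    ranking_a = random.sample(items_a, len(items_a))      # independent ranking of A
    ranking_b = random.sample(items_b, len(items_b))      # independent ranking of B
    n = len(items_a) + len(items_b)
    a_slots = set(random.sample(range(n), len(items_a)))  # uniform interleaving
    ia, ib = iter(ranking_a), iter(ranking_b)
    # A's items drop into their slots in order; B's items fill the rest.
    return [next(ia) if pos in a_slots else next(ib) for pos in range(n)]

# e.g. candidates from two parties, ranked independently and then riffled:
print(riffle_sample(["a1", "a2"], ["b1", "b2", "b3"]))
</syntaxhighlight>
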
== 02/27/2013: David Mimno ==

== 03/13/2013: Dan Hopkins ==

== 03/27/2013: Richard Sproat ==
== 04/10/2013: Learning with Marginalized Corrupted Features ==

'''Speaker:''' [http://www.cse.wustl.edu/~kilian/ Kilian Weinberger], Washington University in St. Louis<br/>
'''Time:''' Wednesday, April 10, 2013, 11:00 AM<br/>
'''Venue:''' AVW 3258<br/>
If infinite amounts of labeled data are provided, many machine learning algorithms become perfect. With finite amounts of data, regularization or priors have to be used to introduce bias into a classifier. We propose a third option: learning with marginalized corrupted features. We corrupt existing data as a means to generate infinitely many additional training samples from a slightly different data distribution, explicitly in a way that allows the corruption to be marginalized out in closed form. This leads to machine learning algorithms that are fast, effective, and naturally scale to very large data sets. We showcase this technology in two settings: (1) to learn text document representations from unlabeled data, and (2) to perform supervised learning with closed-form gradient updates for empirical risk minimization.
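
To make the closed-form marginalization concrete, here is a minimal sketch of one instantiation: squared loss with blankout (dropout-style) corruption. The function names and this particular loss/corruption pairing are illustrative assumptions; the talk covers a variety of such pairs:

<syntaxhighlight lang="python">
import numpy as np

def mcf_squared_loss(w, X, y, q):
    """Expected squared loss under blankout corruption, in closed form.

    Corruption: each feature is zeroed w.p. q, else rescaled by 1/(1-q),
    so E[x_corrupt] = x and Var[x_corrupt_d] = x_d**2 * q / (1 - q).
    E[(w.x_corrupt - y)**2] then splits into the clean loss plus a
    data-dependent, ridge-like variance penalty; no sampling is needed.
    """
    resid = X @ w - y
    var_penalty = (q / (1 - q)) * (X ** 2) @ (w ** 2)
    return np.mean(resid ** 2 + var_penalty)

def mcf_gradient(w, X, y, q):
    """Closed-form gradient, usable directly in empirical risk minimization."""
    n = X.shape[0]
    resid = X @ w - y
    return (2 / n) * (X.T @ resid) + 2 * (q / (1 - q)) * (X ** 2).mean(axis=0) * w

# Tiny usage example with hypothetical data:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=100)
w = np.zeros(5)
for _ in range(500):                  # plain gradient descent
    w -= 0.05 * mcf_gradient(w, X, y, q=0.2)
print(np.round(w, 2))                 # weights shrunk toward the truth
</syntaxhighlight>

In effect, the corruption term acts like a data-dependent quadratic penalty, which is why training stays as cheap as ordinary regularized risk minimization.
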
Text documents (and often images) are traditionally expressed as bag-of-words feature vectors (e.g., as tf-idf). By training linear denoisers that recover unlabeled data from partial corruption, we can learn new data-specific representations. With these, we can match the world-record accuracy on the Amazon transfer learning benchmark with a simple linear classifier. In comparison with the record holder (stacked denoising autoencoders), our approach shrinks the training time from several days to a few minutes.
Finally, we present a variety of loss functions and corrupting distributions, which can be applied out of the box with empirical risk minimization. We show that our formulation leads to significant improvements in document classification tasks over the typically used l_p-norm regularization. The new learning framework is extremely versatile, generalizes better, is more stable at test time (under distribution drift), and only adds a few lines of code to typical risk minimization.
'''About the Speaker:''' Kilian Q. Weinberger is an Assistant Professor in the Department of Computer Science & Engineering at Washington University in St. Louis. He received his Ph.D. in Machine Learning from the University of Pennsylvania under the supervision of Lawrence Saul. Prior to this, he obtained his undergraduate degree in Mathematics and Computer Science at the University of Oxford. During his career he has won several best paper awards at ICML, CVPR and AISTATS. In 2011 he was awarded the AAAI senior program chair award, and in 2012 he received the NSF CAREER award. Kilian Weinberger's research is in machine learning and its applications. In particular, he focuses on high-dimensional data analysis, metric learning, machine-learned web-search ranking, transfer and multi-task learning, and biomedical applications.