Events

Computational Linguistics and Information Processing

Revision as of 13:38, 5 September 2013


The CLIP Colloquium is a weekly speaker series organized and hosted by the CLIP Lab. The talks are open to everyone. Most talks are held at 11 AM in AV Williams 3258 unless otherwise noted. External speakers typically have slots for one-on-one meetings with Maryland researchers before and after their talks; contact the host if you'd like to arrange a meeting.

To join the cl-colloquium@umiacs.umd.edu mailing list, or for other questions about the colloquium series, e-mail Jimmy Lin, the current organizer.

{{#widget:Google Calendar |id=lqah25nfftkqi2msv25trab8pk@group.calendar.google.com |color=B1440E |title=Upcoming Talks |view=AGENDA |height=300 }}

9/4/2013 and 9/11/2013: N-Minute Madness

The people of CLIP talk about what's going on in N minutes.

Special location note: on 9/4/2013, we'll be in AVW 4172.


9/18/2013: Spatio-Temporal Crime Prediction using GPS- and Time-Tagged Tweets

Speaker: Matthew Gerber, University of Virginia
Time: Wednesday, September 18, 2013, 11:00 AM
Venue: AVW 3258

Recent research has shown that social media messages (e.g., tweets) can be used to predict various large-scale events like elections (Bermingham and Smeaton, 2011), infectious disease outbreaks (St. Louis and Zorlu, 2012), and even national revolutions (Howard et al., 2011). The essential hypothesis is that the timing, location, and content of these messages are informative with regard to such future events. For many years, the Predictive Technology Laboratory at the University of Virginia has been constructing statistical prediction models of criminal incidents (e.g., robberies and assaults), and we have recently found preliminary evidence of Twitter’s predictive power in this domain (Wang, Brown, and Gerber, 2012). In my talk, I will present an overview of our crime prediction research with a specific focus on current Twitter-based approaches. I will discuss (1) how precise locations and times of tweets have been integrated into the crime prediction model, and (2) how the textual content of tweets has been integrated into the model via latent Dirichlet allocation. I will present current results of our research in this area and discuss future areas of investigation.
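The abstract mentions folding the textual content of tweets into the crime prediction model via latent Dirichlet allocation. A minimal sketch of that general idea — not the authors' actual pipeline — is to infer per-tweet topic proportions with LDA and feed them as features to a binary incident classifier. All tweet texts, labels, and parameter choices below are invented for illustration:

```python
# Hypothetical sketch (not the authors' code): LDA topic proportions
# over tweets used as features for a binary crime-incident classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

tweets = [
    "traffic jam downtown near the stadium tonight",
    "great concert at the stadium crowd going wild",
    "police sirens again on 5th street stay safe",
    "free food trucks by the river this afternoon",
    "heard gunshots near the park avoid the area",
    "sunny day perfect for a run along the river",
]
# Hypothetical labels: 1 if a crime incident was later reported nearby.
incident = [0, 0, 1, 0, 1, 0]

# Bag-of-words counts, then per-tweet topic proportions via LDA.
counts = CountVectorizer().fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(counts)  # each row is a topic mixture summing to 1

# The topic proportions become features for the incident classifier.
clf = LogisticRegression().fit(theta, incident)
print(clf.predict_proba(theta)[:, 1])  # predicted incident probabilities
```

In the actual research, such topic features would be combined with the spatial and temporal covariates the talk describes, rather than used alone.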

About the Speaker: Matthew Gerber joined the University of Virginia faculty in 2011 and is currently a Research Assistant Professor in the Department of Systems and Information Engineering. Prior to joining the University of Virginia, Matthew was a Ph.D. candidate in the Department of Computer Science and Engineering at Michigan State University and a Visiting Instructor in the School of Computing and Information Systems at Grand Valley State University. In 2010, he received (jointly with Joyce Chai) the ACL Best Long Paper Award for his work on recovering null-instantiated arguments for semantic role labeling. His current research focuses on the semantic analysis of natural language text and its application to various prediction and informatics problems.


9/25/2013: CLIP Lab Meeting

Phillip will set the agenda.


10/2/2013: Title TBA

Speaker: Miles Osborne, University of Edinburgh
Time: Wednesday, October 2, 2013, 11:00 AM
Venue: AVW 3258


10/9/2013: Semantics and Social Science: Learning to Extract International Relations from Political Context

Speaker: Brendan O'Connor, Carnegie Mellon University
Time: Wednesday, October 9, 2013, 11:00 AM
Venue: AVW 3258


10/23/2013: Towards Minimizing the Annotation Cost of Certified Text Classification

Speaker: Mossaab Bagdouri, University of Maryland
Time: Wednesday, October 23, 2013, 11:00 AM
Venue: AVW 3258

The common practice of testing a sequence of text classifiers learned on a growing training set, and stopping when a target value of estimated effectiveness is first met, introduces a sequential testing bias. In settings where the effectiveness of a text classifier must be certified (perhaps to a court of law), this bias may be unacceptable. The choice of when to stop training is made even more complex when, as is common, the annotation of training and test data must be paid for from a common budget: each new labeled training example is a lost test example. Drawing on ideas from statistical power analysis, we present a framework for joint minimization of training and test annotation that maintains the statistical validity of effectiveness estimates, and yields a natural definition of an optimal allocation of annotations to training and test data. We identify the development of allocation policies that can approximate this optimum as a central question for research. We then develop simulation-based power analysis methods for van Rijsbergen's F-measure, and incorporate them in four baseline allocation policies which we study empirically. In support of our studies, we develop a new analytic approximation of confidence intervals for the F-measure that is of independent interest.
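The abstract turns on attaching a statistically valid confidence interval to an F-measure estimate from a finite test set. The talk's own contribution is an analytic approximation; as a purely illustrative stand-in, the sketch below computes a bootstrap confidence interval for F1 on synthetic labels and predictions (all data and the 80%-accuracy setting are invented):

```python
# Illustrative sketch only (not the paper's analytic approximation):
# a bootstrap confidence interval for the F-measure on a test set.
import numpy as np

rng = np.random.default_rng(0)

def f1(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Hypothetical test-set labels and classifier predictions (~80% accurate).
y_true = rng.integers(0, 2, size=500)
y_pred = np.where(rng.random(500) < 0.8, y_true, 1 - y_true)

# Bootstrap: resample test items with replacement, recompute F1 each time.
scores = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    scores.append(f1(y_true[idx], y_pred[idx]))
lo, hi = np.percentile(scores, [2.5, 97.5])
print(f"F1 = {f1(y_true, y_pred):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

Shrinking the test set widens this interval, which makes concrete the budget tension the abstract describes: every annotation moved into training loosens the certification one can offer.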


10/30/2013: Title TBA

Speaker: Gary Kazantsev, Bloomberg LP
Time: Wednesday, October 30, 2013, 11:00 AM
Venue: AVW 3258


Previous Talks