- May 11, Dave Blei: Scalable Topic Modeling
- May 4, Sinead Williamson: Nonparametric Bayesian models for dependent data
- April 27, Michele Gelfand
- April 22, Eugene Agichtein: Mining Rich User Interaction Data to Improve Web Search
- April 20, Lillian Lee: Language as Influence(d)
- April 13, Leora Morgenstern: Knowledge Representation in the DARPA Machine Reading Program
- April 11, Giacomo Inches: Investigating the statistical properties of user-generated documents
- April 6, Rachel Pottinger
- March 30, Sujith Ravi: Deciphering Natural Language
- March 16, Mark Liberman: Problems and opportunities in corpus phonetics
- March 9, Asad Sayeed: Finding Target-Relevant Sentiment Words
- March 2, Ned Talley: An Unsupervised View of NIH Grants - Latent Categories and Clusters in an Interactive Format
- February 16, Ophir Frieder: Humane Computing
- February 9, Naomi Feldman: Using a developing lexicon to constrain phonetic category acquisition
- February 2, Ahn Jae-wook: Exploratory user interfaces for personalized information access
May 11, Dave Blei: Scalable Topic Modeling
Probabilistic topic modeling provides a suite of tools for the unsupervised analysis of large collections of documents. Topic modeling algorithms can uncover the underlying themes of a collection and decompose its documents according to those themes. This analysis can be used for corpus exploration, document search, and a variety of prediction problems.
In this talk, I will review the state-of-the-art in probabilistic topic models. I will describe the basic ideas behind latent Dirichlet allocation, and discuss a few of the recent topic modeling algorithms that we have developed in my research group.
I will then describe an online strategy for fitting topic models. This approach lets us analyze massive document collections and document collections arriving in a stream. Specifically, we use variational inference to approximate the posterior of the topic model, and we develop a stochastic optimization algorithm for the corresponding objective function. I will describe online algorithms for finite-dimensional topic models and for the Bayesian nonparametric variant based on the hierarchical Dirichlet process.
Our algorithms can fit models to millions of articles in a matter of hours, and I will present a study of 3.3M articles from Wikipedia. These results show that the online approach finds topic models that are as good as or better than those found with traditional inference algorithms.
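The stochastic-optimization step behind such online fitting can be sketched in a few lines. This is a toy Robbins-Monro update with the (tau0 + t)^(-kappa) step-size schedule commonly used in online variational inference; the function name and constants are illustrative, not the speaker's implementation:

```python
import random

def online_average(stream, tau0=1.0, kappa=0.7):
    """Robbins-Monro averaging, the stochastic-optimization core of
    online variational inference: each noisy per-minibatch estimate is
    blended into the global parameter with a decaying step size."""
    lam = 0.0
    for t, noisy_estimate in enumerate(stream):
        rho = (tau0 + t) ** (-kappa)  # kappa in (0.5, 1] ensures convergence
        lam = (1.0 - rho) * lam + rho * noisy_estimate
    return lam

# toy stream: noisy observations of an underlying value (4.2)
random.seed(0)
estimate = online_average(4.2 + random.gauss(0.0, 1.0) for _ in range(20000))
```

In online LDA the same schedule updates the variational topic parameters from minibatches of documents, which is what lets a single pass scale to millions of articles.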
Bio: David Blei is an assistant professor of Computer Science at Princeton University. He received his PhD in 2004 at U.C. Berkeley and was a postdoctoral fellow at Carnegie Mellon University. His research focuses on probabilistic models, Bayesian nonparametric methods, and approximate posterior inference. He works on a variety of applications, including text, images, music, social networks, and scientific data.
May 4, Sinead Williamson: Nonparametric Bayesian models for dependent data
A priori assumptions about the number of parameters required to model our data are often unrealistic. Bayesian nonparametric models circumvent this problem by assigning prior mass to a countably infinite set of parameters, only a finite (but random) number of which will contribute to a given data set. Over recent years, a number of authors have presented dependent nonparametric models -- distributions over collections of random measures associated with values in some covariate space. While the properties of these random measures are allowed to vary across the covariate space, the marginal distribution at each covariate value is given by a known nonparametric distribution. Such distributions are useful for modelling data that vary with some covariate: in image segmentation, proximal pixels are likely to be assigned to the same segment; in modelling documents, topics are likely to increase and decrease in popularity over time.
Most dependent nonparametric models in the literature have Dirichlet process-distributed marginals. While the Dirichlet process is undeniably the most commonly used discrete nonparametric Bayesian prior, this ignores a wide range of interesting models. In my PhD, I have focused on dependent nonparametric models beyond the Dirichlet process -- in particular, on dependent nonparametric models based on the Indian buffet process, a distribution over binary matrices with an infinite number of columns. In this talk, I will give a general introduction to dependent nonparametric models, and describe some of the work I have done in this area.
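To make the Indian buffet process mentioned above concrete, here is a minimal sampler from the IBP prior over binary matrices. This is the standard "restaurant" construction written from scratch; the function names are ours:

```python
import math
import random

def _poisson(rng, lam):
    # Knuth's inversion method for sampling Poisson(lam), small lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def sample_ibp(num_customers, alpha, seed=0):
    """Draw one binary feature matrix from the Indian buffet process
    prior.  Customer n takes each previously sampled dish k with
    probability m_k / n (m_k = times dish k was taken so far), then
    samples Poisson(alpha / n) brand-new dishes."""
    rng = random.Random(seed)
    dish_counts = []                       # m_k for each dish (column)
    rows = []
    for n in range(1, num_customers + 1):
        row = [0] * len(dish_counts)
        for k in range(len(dish_counts)):  # revisit existing dishes
            if rng.random() < dish_counts[k] / n:
                row[k] = 1
                dish_counts[k] += 1
        for _ in range(_poisson(rng, alpha / n)):  # try new dishes
            row.append(1)
            dish_counts.append(1)
        rows.append(row)
    width = len(dish_counts)               # pad earlier rows to full width
    return [r + [0] * (width - len(r)) for r in rows]

Z = sample_ibp(num_customers=10, alpha=2.0)
```

Each row of `Z` is an object and each column a latent feature; the number of columns is finite but random, exactly the "countably infinite set of parameters, only a finite number of which contribute" described above.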
Bio: Sinead Williamson is a PhD student working with Zoubin Ghahramani at the University of Cambridge, UK. Her main research interests are dependent nonparametric processes and nonparametric latent variable models. She will be visiting the University of Maryland for six months before starting a postdoc at Carnegie Mellon University in the Fall.
April 27, Michele Gelfand
In this presentation, I will describe a perspective on metaphor and negotiation that can help to understand, predict, and manage cultural differences in negotiation. The metaphor approach has its roots in linguistics, cognitive science, and cultural psychology. Metaphors are conceptual systems in which different domains of experience are put into the same category so that knowledge from one domain can be used to make sense of the other. Although they have traditionally been conceived of as linguistic devices, metaphors are a basic mechanism through which humans conceptualize experience (Gibbs, 1990; Lakoff, 1987). In the context of negotiation, metaphors serve a number of critical functions. First, they function to create negotiators' subjective intentional realities (Bruner, 1980; Miller, 1997), guiding both thought and action in negotiation. Specifically, metaphors provide a basis for answering the question, "What kind of situation is this? Is it a battle? A game? A dance? A family gathering? A seduction? A visit to the dentist?" I will show how metaphoric mappings provide information about what the task is about and dictate specific entailments or scripts that are derived from their source domains. A "Negotiation as Sports" metaphor, for example, suggests a very different task and scripts than a "Negotiation as Dental Work" or a "Negotiation as Marriage" metaphor. Second, shared metaphors function to organize social action in negotiation (Weick, 1979). Through ongoing communicative exchange, negotiators who develop a shared metaphor for negotiation will come to inhabit the same intentional world, will be more organized and "in sync" in their interactions (Blount & Janicik, 2003), and will be in a better position to negotiate effectively.
I also discuss how metaphors for negotiation are selectively developed, activated, and perpetuated through participation in cultural institutions, helping to explain cross-cultural variation in negotiation dynamics, and problems that arise in intercultural negotiations. In the presentation, I will present a number of recent empirical studies, from the lab and the field, and with samples from a number of countries, which provide support for aspects of the theory. I will conclude with a discussion of the role of metaphor in helping create shared reality in intercultural negotiations.
Bio: Dr. Gelfand is a professor in Maryland's psychology department.
April 22, Eugene Agichtein: Mining Rich User Interaction Data to Improve Web Search
Abstract: Web search engines have advanced greatly over the last decade. In particular, query and click logs have been invaluable to understanding and improving searcher experience. Yet, even the immense logs amassed by the major search engines provide only a narrow glimpse into the searcher behavior and goals. I will present novel techniques for acquiring, analyzing, and exploiting a much richer array of searcher interactions including cursor movements, scrolling, and clicks. As a result, we can more accurately infer searcher intent, enabling dramatic improvements for some search tasks. I will also briefly describe a promising medical application of these techniques.
Biosketch: Eugene Agichtein is an Assistant Professor in the Math & CS department at Emory University, where he leads the Intelligent Information Access Lab. Eugene's research centers on Web search and information retrieval, primarily focusing on modeling user interactions in web search and social media to improve access to information on the web. Increasingly, Eugene is collaborating with medical researchers on applications to medical informatics and clinical diagnosis. This work has been supported by NSF, Microsoft Research, HP Labs, Yahoo! Research, and others. More information about Eugene is available at http://www.mathcs.emory.edu/~eugene/.
April 20, Lillian Lee: Language as Influence(d)
What effect does language have on people, and what effect do people have on language?
You might say in response, "Who are you to discuss these problems?" and you would be right to do so; these are Major Questions that science has been tackling for many years. But as a field, I think natural language processing and computational linguistics have much to contribute to the conversation, and I hope to encourage the community to further address these issues. To this end, I'll describe two efforts I've been involved in.
The first project uncovers previously unexamined contextual biases that people may have when determining which opinions to focus on, using Amazon.com helpfulness votes on reviews as a case study to evaluate competing theories from sociology and social psychology. The second project considers linguistic style matching between conversational participants, using a novel setting to study factors that affect the degree to which people tend to instantly adapt to each others' conversational styles.
Joint work with Cristian Danescu-Niculescu-Mizil, Jon Kleinberg, and Gueorgi Kossinets.
Lillian Lee is a professor of computer science at Cornell University. She is the recipient of the inaugural Best Paper Award at HLT-NAACL 2004 (joint with Regina Barzilay), a citation in "Top Picks: Technology Research Advances of 2004" by Technology Research News (also joint with Regina Barzilay), and an Alfred P. Sloan Research Fellowship, and her group's work has been featured in the New York Times.
April 13, Leora Morgenstern: Knowledge Representation in the DARPA Machine Reading Program
The DARPA Machine Reading Program (MRP) is focused on developing reading systems that serve as a bridge between the informal information found in natural language texts and the powerful AI systems that use formal knowledge. Central to this effort is the integration of knowledge representation and reasoning techniques into standard information retrieval technology.
In this talk, I discuss the knowledge representation components, including the core ontologies and the domain-specific reasoning system, for the MRP reading systems. I focus on the spatiotemporal reasoning that serves as the cornerstone for the central challenge of Phase 3 of the Machine Reading Program: building geographical timelines from news reports.
Leora Morgenstern is currently PI of the DARPA Machine Reading evaluation and knowledge infrastructure team at SAIC. Prior to joining SAIC, she spent most of her career at the IBM T.J. Watson Research Center, where she combined foundational AI research with the development of cutting-edge and highly profitable applications for Fortune 500 companies. She is noted in particular for her contributions in applying her research in semantic networks, nonmonotonic inheritance networks, and business rules to applications in knowledge management, customer relationship management, and decision support.
Dr. Morgenstern is the author of over forty scholarly publications and holds three patents, which have won several IBM awards due to their value to industry. She has served on the editorial boards of JAIR, AMAI, and ETAI. She has edited several special issues of journals, the most recent of which was a volume of Artificial Intelligence (January 2011) dedicated to John McCarthy's leadership in the field of knowledge representation. Together with John McCarthy and Vladimir Lifschitz, she founded the biennial symposium on Logical Formalizations of Commonsense Reasoning, and has served several times as program co-chair of this symposium. She developed and continues to maintain the Commonsense Problem Page, a website devoted to the pursuit of research in formal commonsense knowledge and reasoning.
April 11, Giacomo Inches: Investigating the statistical properties of user-generated documents
The importance of the Internet as a communication medium is reflected in the large number of documents generated every day by users of the different services available online. We analyzed the properties of some established Internet services (Kongregate, Twitter, Myspace, and Slashdot) and compared them with consolidated collections of standard information retrieval documents (from the Wall Street Journal, Associated Press, and Financial Times, as part of the TREC ad-hoc collection). We investigated features such as document similarity, term burstiness, emoticons, and part-of-speech analysis, highlighting their similarities and differences.
Giacomo Inches is a Ph.D. student in the Information Retrieval group of the Informatics Faculty at the University of Lugano (Università della Svizzera italiana, USI), Switzerland. His research focuses on the analysis of short user-generated documents, such as Twitter messages, chat logs, SMS, and police report archives, using IR, text mining, and machine learning techniques. He is currently working on the SNF ChatMiner project ("Mining of conversational content for topic identification and author identification"). In earlier work he investigated image classification and database systems (RIA, web engineering). Giacomo received his B.Sc. and M.Sc. from the Politecnico di Milano, Italy, and holds a Diplom in Informatik from the University of Erlangen-Nuremberg, Germany.
April 6, Rachel Pottinger
When heterogeneous databases are combined, they typically have different schemas, i.e., descriptions of how the data is stored. For information to be shared between these databases, there must be some way to resolve differences in representation. Combining these heterogeneous sources so that they can be queried uniformly is known as semantic integration. There are many aspects to semantic integration, ranging from creating the underlying system that allows queries to be processed to helping the user understand the overwhelming amount of data available. In this talk, I describe some of the research that my students and I have been doing to increase data utility through semantic integration, particularly when motivated by real-world applications.
Rachel Pottinger is an assistant professor in Computer Science at the University of British Columbia. She received her Ph.D. in computer science from the University of Washington in 2004. Her main research interest is data management, particularly semantic data integration, how to manage metadata (data about data), and how to manage data that is currently not well supported by databases.
March 30, Sujith Ravi: Deciphering Natural Language
Current research in natural language processing (NLP) relies heavily on supervised techniques, which require labeled training data. But such data does not exist for all languages and domains. Using human annotation to create new resources is not a scalable solution, which raises a key research challenge: How can we circumvent the problem of limited labeled resources for NLP applications?
Interestingly, cryptanalysts and archaeologists have tackled similar challenges in the past for solving decipherment problems. Our work draws inspiration from these successes and we present a novel, unified decipherment-based approach for solving natural language problems without labeled (parallel) data. In this talk, we show how NLP problems can be modeled as decipherment tasks. For example, in statistical language translation one can view the foreign-language text as a cipher for English.
Combining techniques from classical cryptography and statistical NLP, we then develop novel decipherment methods to tackle a wide variety of problems ranging from letter substitution decipherment to sequence labeling tasks (such as part-of-speech tagging) to language translation. We also introduce novel unsupervised algorithms that explicitly search for minimized models during decipherment and outperform existing state-of-the-art systems on several NLP tasks.
Along the way, we show experimental results on several tasks and finally, we demonstrate the first successful attempt at automatic language translation without the use of bilingual resources. Unlike conventional approaches, these decipherment methods can be easily extended to multiple domains and languages (especially resource-poor languages), thereby helping to spread the impact and benefits of NLP research.
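To make the "foreign text as a cipher for English" framing concrete, here is the classic frequency-rank baseline for a 1:1 letter-substitution cipher. This is a deliberately simple illustration of decipherment without parallel data; the EM-based methods described in the talk are far stronger, and the function names here are ours:

```python
from collections import Counter
import string

def rank_order(text):
    """Letters of `text` sorted by descending frequency, with unseen
    letters appended alphabetically so the key always covers a-z."""
    counts = Counter(c for c in text if c in string.ascii_lowercase)
    seen = [c for c, _ in counts.most_common()]
    return seen + [c for c in string.ascii_lowercase if c not in seen]

def frequency_decipher(ciphertext, reference):
    """Baseline decipherment for a 1:1 letter-substitution cipher:
    align cipher letters to reference-corpus letters of equal
    frequency rank (no parallel data needed)."""
    mapping = dict(zip(rank_order(ciphertext), rank_order(reference)))
    return "".join(mapping.get(c, c) for c in ciphertext)

# demo: a text with strictly decreasing letter frequencies,
# enciphered with a shift-by-3 substitution key
reference = "".join(ch * (26 - i) for i, ch in enumerate(string.ascii_lowercase))
key = str.maketrans(string.ascii_lowercase,
                    string.ascii_lowercase[3:] + string.ascii_lowercase[:3])
ciphertext = reference.translate(key)
recovered = frequency_decipher(ciphertext, reference)
```

On real text, tied and near-tied frequencies make this baseline brittle, which is precisely the gap that statistical models of the plaintext language close.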
Sujith Ravi is a Ph.D. candidate in Computer Science at the University of Southern California/Information Sciences Institute, working with Kevin Knight. He received his M.S. (2006) degree in Computer Science from USC, and a B.Tech. (2004) degree in Computer Science from the National Institute of Technology, Trichy in India. He has also held summer research positions at Google Research and Yahoo Research. His research interests lie in natural language processing, machine learning, computational decipherment and artificial intelligence. His current research focuses on unsupervised and semi-supervised methods with applications in machine translation, transliteration, sequence labeling, large-scale information extraction, syntactic parsing, and information retrieval in discourse. Beyond that, his research experience also includes work on cross-disciplinary areas such as theoretical computer science, computational advertising and computer-aided education. During his graduate student career at USC, he received several awards including an Outstanding Research Assistant Award, an Outstanding Teaching Assistant Award, and an Outstanding Academic Achievement Award.
March 16, Mark Liberman: Problems and opportunities in corpus phonetics
Techniques developed for speech and language technology can now be applied as research tools in an increasing number of areas, some of them perhaps unexpected: sociolinguistics, psycholinguistics, language teaching, clinical diagnosis and treatment, political science -- and even theoretical phonetics and phonology. Some applications are straightforward, and the short-term prospects for work in this field are excellent, but there are many interesting problems for which satisfactory solutions are not yet available. In contrast to traditional speech-technology applications areas, in many of these cases the obvious solutions have not been tried.
Bio (from Wikipedia): Mark has a dual appointment at the University of Pennsylvania, as Trustee Professor of Phonetics in the Department of Linguistics, and as a professor in the Department of Computer and Information Science. He is the founder and director of the Linguistic Data Consortium. His main research interests lie in phonetics, prosody, and other aspects of speech communication. Liberman is also the founder of (and frequent contributor to) Language Log, a blog with a broad cast of dozens of professional linguists. The concept of the eggcorn was first proposed in one of his posts there.
March 9, Asad Sayeed: Finding Target-Relevant Sentiment Words
A major indicator of the presence of an opinion and its polarity is the set of words immediately surrounding a potential opinion "target". But not all words near the target are likely to be relevant to finding an opinion. Furthermore, prior polarity lexica are of only limited value for corpora in specialized domains such as the information technology (IT) business press. There is no ready-made labeled data for this genre, and there are no existing lexica of domain-specific polarity words.
This implementation-level talk describes work in progress on identifying polarity words in an IT business corpus through crowdsourcing, and discusses some of the challenges uncovered in multiple failed attempts. We found that annotating at a fine-grained level with trained individuals is slow, costly, and unreliable given articles that are sometimes quite long. In order to crowdsource the task, however, we had to find ways to ask the question that do not require the user to think too hard about exactly what an opinion is, and to reduce the propensity to cheat on a difficult question.
We built a CrowdFlower-based interface that uses a drag-and-drop process to classify words in context. We will demonstrate the interface during the talk and show samples of the results, which we are still in the process of gathering. We will also show some of the implementation-level challenges of adapting the CrowdFlower interface to a non-standard UI paradigm.
If there is time, we will also discuss one of the ways in which we plan to use the data through a CRF-based model of the syntactic relationship between sentiment words and target mentions which we developed in FACTORIE and Scala.
Bio: Asad Sayeed is a PhD candidate in computer science and a member of the University of Maryland CLIP lab. He is working on his dissertation on syntactically fine-grained sentiment analysis.
March 2, Ned Talley: An Unsupervised View of NIH Grants - Latent Categories and Clusters in an Interactive Format
The U.S. National Institutes of Health (NIH) consists of twenty-five Institutes and Centers that award ~80,000 grants each year. The Institutes have distinct missions and research priorities, but there is substantial overlap in the types of research they support, which creates a funding landscape that can be difficult for researchers and research policy professionals to navigate. We have created a publicly accessible database (https://app.nihmaps.org) in which NIH grants are topic modeled using Latent Dirichlet Allocation, and are clustered using a force-directed algorithm for placing grants as nodes in two-dimensional space, where they can be accessed in an online, map-like format.
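The map-like placement of grants can be sketched with a minimal force-directed layout in the Fruchterman-Reingold style: similar documents (edges) attract, all pairs repel, and a cooling "temperature" caps each step. This is a generic sketch of the algorithm family, not the nihmaps implementation:

```python
import math
import random

def spring_layout(nodes, edges, iters=200, seed=0):
    """Minimal Fruchterman-Reingold force-directed layout.
    `nodes` is a list of hashable ids; `edges` a list of (u, v) pairs."""
    rng = random.Random(seed)
    pos = {v: [rng.random(), rng.random()] for v in nodes}
    k = 1.0 / math.sqrt(len(nodes))   # ideal edge length
    t = 0.1                           # temperature: max step per iteration
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for i, u in enumerate(nodes):         # repulsion: all pairs
            for v in nodes[i + 1:]:
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[u][0] += dx / d * f; disp[u][1] += dy / d * f
                disp[v][0] -= dx / d * f; disp[v][1] -= dy / d * f
        for u, v in edges:                    # attraction: along edges
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[u][0] -= dx / d * f; disp[u][1] -= dy / d * f
            disp[v][0] += dx / d * f; disp[v][1] += dy / d * f
        for v in nodes:                       # move, capped by temperature
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            pos[v][0] += dx / d * min(d, t)
            pos[v][1] += dy / d * min(d, t)
        t *= 0.97                             # cool down
    return pos

# demo: two tight clusters (triangles) with no cross-cluster edges
cluster_a, cluster_b = ["a", "b", "c"], ["d", "e", "f"]
nodes = cluster_a + cluster_b
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f")]
pos = spring_layout(nodes, edges)

def _dist(u, v):
    return math.hypot(pos[u][0] - pos[v][0], pos[u][1] - pos[v][1])

avg_intra = sum(_dist(u, v) for u, v in edges) / len(edges)
avg_inter = sum(_dist(u, v) for u in cluster_a for v in cluster_b) / 9
```

In the NIH setting, edges could connect grants whose LDA topic distributions are similar, so that topically related grants end up near each other on the map.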
Ned Talley is an NIH Program Director who manages grants on synaptic transmission, synaptic plasticity, and advanced microscopy and imaging. For the past two years he has also been focused on NIH grants informatics, in order to address unmet needs at NIH, and to match these needs with burgeoning technologies in artificial intelligence, information retrieval, and information visualization. He has directed this project through collaborations with investigators from University of Southern California, UC Irvine, Indiana University, and University of Massachusetts.
February 16, Ophir Frieder: Humane Computing
Humane Computing is the design, development, and implementation of computing systems that directly focus on improving the human condition or experience. In that light, three efforts are presented: improving foreign name search technology, detecting spam in peer-to-peer file sharing systems, and developing novel techniques for urinary tract infection treatment.
The first effort is in support of the Yizkor Books project of the Archives Section of the United States Holocaust Memorial Museum. Yizkor Books are commemorative, firsthand accounts of communities that perished before, during, and after the Holocaust. Users of such volumes include historians, archivists, educators, and survivors. Since Yizkor collections are written in 13 different languages, searching them is difficult. In this effort, novel foreign name search approaches are developed that compare favorably against the state of the art. By segmenting names, fusing individual results, and filtering via a threshold, our approach yields statistically significant improvements over the traditional Soundex and n-gram based techniques used to search such texts. Thus, previously unsuccessful searches are now supported.
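One ingredient of such n-gram based name matching can be sketched as a character-bigram Dice similarity. This is an illustrative baseline for fuzzy name search, not the project's segmentation-and-fusion pipeline:

```python
def char_ngrams(name, n=2):
    """Set of character n-grams of a lowercased name."""
    name = name.lower()
    return {name[i:i + n] for i in range(len(name) - n + 1)}

def dice_similarity(a, b, n=2):
    """Dice coefficient over character n-grams: tolerant of the spelling
    variants (e.g. Meyer/Meier/Maier) that defeat exact lookup, and a
    common alternative to phonetic codes like Soundex."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))
```

A threshold on this score turns it into the kind of filter the abstract describes; the approach presented in the talk additionally segments multi-part names and fuses the per-segment results before filtering.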
In the second effort, spam characteristics in peer-to-peer file sharing systems are determined. Using these characteristics, an approach that does not rely on external information or user feedback is developed. Cost reduction techniques are employed resulting in a statistically significant reduction of spam. Thus, the user search experience is improved.
Finally, a novel "self start", patient-specific approach for the treatment of recurrent urinary tract infections is presented. Using conventional data mining techniques, an approach that improves patient care, reduces bacterial mutation, and lowers treatment cost is presented. The result is care that is better (in terms of patient comfort), quicker (in terms of outbreak duration), and more economical for female patients who suffer from recurrent urinary tract infections.
Biography Ophir Frieder is the Robert L. McDevitt, K.S.G., K.C.H.S. and Catherine H. McDevitt L.C.H.S. Chair in Computer Science and Information Processing and is Chair of the Department of Computer Science at Georgetown University. His research interests focus on scalable information retrieval systems spanning search and retrieval and communications issues. He is a Fellow of the AAAS, ACM, and IEEE.
February 9, Naomi Feldman: Using a developing lexicon to constrain phonetic category acquisition
Variability in the acoustic signal makes speech sound category learning a difficult problem. Despite this difficulty, human learners are able to acquire phonetic categories at a young age, between six and twelve months. Learners at this age also show evidence of attending to larger units of speech, particularly in word segmentation tasks. This work investigates how word-level information can help make the phonetic category learning problem easier. A hierarchical Bayesian model is constructed that learns to categorize speech sounds and words simultaneously from a corpus of segmented acoustic tokens. No lexical information is given to the model a priori; it is simply allowed to begin learning a set of word types at the same time that it learns to categorize speech sounds. Simulations compare this model to a purely distributional learner that does not have feedback from a developing lexicon. Results show that whereas a distributional learner mistakenly merges several sets of overlapping categories, an interactive model successfully disambiguates these categories. An artificial language learning experiment with human learners demonstrates that people can make use of the type of word-level cues required for interactive learning. Together, these results suggest that phonetic category learning can be better understood in conjunction with other contemporaneous learning processes and that simultaneous learning of multiple layers of linguistic structure can potentially make the language acquisition problem more tractable.
Bio: Naomi was a graduate student in the Department of Cognitive and Linguistic Sciences at Brown University working with Jim Morgan and Tom Griffiths. She's interested in speech perception and language acquisition, especially the relationship between phonetic category learning, phonological development, and perceptual changes during infancy. In January 2011, she became an assistant professor in the Department of Linguistics at the University of Maryland.
February 2, Ahn Jae-wook: Exploratory user interfaces for personalized information access
Personalized information access systems aim to provide tailored information to users according to their various tasks, interests, or contexts. Such systems have long relied on algorithms to estimate user interests and generate personalized information. They observe user behaviors, build mental models of the users, and apply the user model to customize the information. This process can proceed without any explicit user intervention. However, we can add users into the loop of the personalization process, so that the systems can capture user interests even more precisely and users can flexibly control the systems' behavior.
In order to exploit the benefits of user interfaces for personalized information access, we have investigated various aspects of exploratory information access systems, which can combine the strengths of algorithms and user interfaces. Users can learn about and investigate their information need beyond the simple lookup search strategy. By adding exploration to personalized information access, we can devise advanced user interfaces for personalization. Specifically, we have tried to understand how to let users learn, manipulate, and control the core component of many personalized systems: the user model. In this presentation, I will introduce several ideas for presenting and controlling user models through different user interfaces. The example studies include an open, editable user model; tab-based user model and query control; a reference-point-based visualization that incorporates the user model and the query spaces; and a named-entity-based searching/browsing interface. The results and lessons of the user studies are discussed.
Bio: Jae-wook Ahn defended his Ph.D. dissertation at the School of Information Sciences, University of Pittsburgh, in September 2010, where he worked with his mentor Dr. Peter Brusilovsky and with Dr. Daqing He. He is currently a research associate in the Department of Computer Science and the Human-Computer Interaction Lab, working with Dr. Ben Shneiderman.