Date | Leader | Topic

09/04/2014 | Niklas Elmqvist, New iSchool Professor in Infovis (link) | Ubiquitous Analytics: Interacting with Big Data Anywhere, Anytime
Abstract: Computing is becoming increasingly embedded in our everyday lives: mobile devices are growing smaller yet more powerful, large displays are getting cheaper, and our physical environments are turning intelligent, integrating an ever greater number of digital processors. Meanwhile, data is everywhere, and people need to leverage all of this digital infrastructure to turn that data into actionable information about their hobbies, health, and personal interests. In this talk, I will present the concept of ubiquitous analytics, which stakes out a new digital future of ever-present, always-on computing; one that supports manipulating, thinking about, and interacting with data anytime, anywhere.
Bio: Niklas Elmqvist is an associate professor in the College of Information Studies at the University of Maryland, College Park, MD, USA. He is also a member of the University of Maryland Institute for Advanced Computer Studies. He received his Ph.D. in 2006 from Chalmers University of Technology in Gothenburg, Sweden. Prior to joining UMD in 2014, he was a faculty member in the School of Electrical and Computer Engineering at Purdue University from 2008, a postdoctoral researcher at INRIA in France from 2007, and a visiting scholar at Georgia Institute of Technology in 2006. His research areas are information visualization, human-computer interaction, and visual analytics. Prof. Elmqvist received an NSF CAREER award in 2013, the Purdue ECE Chicago Alumni New Faculty Award in 2010, Google research awards in 2009 and 2010, the Ruth and Joel Spira Outstanding Teacher Award in 2012, and three best paper awards at premier venues in his field. His work has been sponsored by the National Science Foundation and the U.S. Department of Homeland Security, as well as by Google, Microsoft, and NVIDIA. He is a senior member of the ACM, the IEEE, and the IEEE Computer Society.

09/11/2014 | All new students! | New student introductions!
Much like last year, this BBL is for new students to introduce themselves, talk briefly about their projects and interests, and bounce their ideas off HCIL members. The purpose of these informal, participatory talks is to help connect new students with professors and other students who share the same interests.
The students presenting are: Chris Musialek, Deok Gun Park, Seokbin Kang, Jonggi Hong, Sriram Karthik Badam and Majeed Kazemitabaar.

09/18/2014 | Moving the cubes! | Resisting the cookies is futile.

09/25/2014 | Kotaro Hara, CS PhD Student (link) | UIST 2014 Practice Talk: Tohme: Detecting Curb Ramps in Google Street View Using Crowdsourcing, Computer Vision, and Machine Learning
Building on recent prior work that combines Google Street View (GSV) and crowdsourcing to remotely collect information on physical world accessibility, we present the first “smart” system, Tohme, that combines machine learning, computer vision (CV), and custom crowd interfaces to find curb ramps remotely in GSV scenes. Tohme consists of two workflows, a human labeling pipeline and a CV pipeline with human verification, which are scheduled dynamically based on predicted performance. Using 1,086 GSV scenes (street intersections) from four North American cities and data from 403 crowd workers, we show that Tohme performs similarly in detecting curb ramps compared to a manual labeling approach alone (F-measure: 84% vs. 86% baseline) but at a 13% reduction in time cost. Our work contributes the first CV-based curb ramp detection system, a custom machine-learning based workflow controller, a validation of GSV as a viable curb ramp data source, and a detailed examination of why curb ramp detection is a hard problem along with steps forward.

10/02/2014 | Michelle Mazurek, Assistant Professor, Department of Computer Science (link) | Measuring Password Guessability for an Entire University
Despite considerable research on passwords, empirical studies of password strength have been limited by lack of access to plaintext passwords, small data sets, and password sets specifically collected for a research study or from low-value accounts. Properties of passwords used for high-value accounts thus remain poorly understood.
We fill this gap by studying the single-sign-on passwords used by over 25,000 faculty, staff, and students at a research university with a complex password policy. Key aspects of our contributions rest on our (indirect) access to plaintext passwords. We describe our data collection methodology, particularly the many precautions we took to minimize risks to users. We then analyze how guessable the collected passwords would be during an offline attack by subjecting them to a state-of-the-art password cracking algorithm. We discover significant correlations between a number of demographic and behavioral factors and password strength.
We also compare the guessability and other characteristics of the passwords we analyzed to sets previously collected in controlled experiments or leaked from low-value accounts. We find more consistent similarities between the university passwords and passwords collected for research studies under similar composition policies than we do between the university passwords and subsets of passwords leaked from low-value accounts that happen to comply with the same policies.

10/09/2014 (room 2119) | m.c. schraefel, Professor, University of Southampton (link) | Exploring the role of HCI as an agent of cultural change: from health as a medical condition to health as shared, social aspiration
Abstract: What is the role of HCI in supporting a better normal for our health, creativity, and quality of life, especially if we think about health outside a medical context? I have been thinking about the concept of "make better normal," and Ben Shneiderman has challenged me to ask: isn't that the role of design in general? Most of us would agree, so what is different when we talk about health not as a medical condition but as a paradigm shift, where health is a shared and supported social aspiration? In such a discussion, HCI becomes an agent not necessarily for change but for cultural shift, assuming we can agree on what proactive health looks like in practice, so that we can design to support it. As part of this discussion I'll offer in5 as a design model for proactive health, and I look forward to your feedback.
We might also consider how the role of HCI would change in this dynamic over time. Initially, proactive health design is likely design against the status quo. For example, if the status quo is sedentary knowledge work, and the research shows that more movement during the day is better for us cognitively, physiologically, and socially, then what does HCI do to help support this transition individually and culturally? What is the role, and perhaps responsibility, of our collaborative work with, for instance, visualisation and big data? Likewise, what is the map of this territory for us? Where are the important research questions, and how would we recognize them? Do we ourselves need to develop new disciplinary expertise, from nutrition to neurology, for proactive health tech design? I have some thoughts and experiences in this space that I'd like to share in order to hear your insights. In particular, I would also like to present the related outcomes from a Dagstuhl workshop held in June on Grand Challenges for Interactive Technology Design for Proactive Health, and to invite you to participate in and contribute to shaping these Challenges. This exchange, I hope, will act as both that invitation and a call to action: if we see opportunities to make a real and credible difference for proactive health, do we not need to find, fundamentally, ways to better support each other's work so that it has effects at scale, and to model a path for others to trust and follow?
Bio: m.c. schraefel, ph.d, f.bcs, c.eng, cscs, @mcphoo holds the post of Professor of Computer Science and Human Performance in the Agents, Interaction and Complexity Group of Electronics and Computer Science, University of Southampton, UK (http://www.ecs.soton.ac.uk/~mc). mc also holds a Research Chair sponsored by the Royal Academy of Engineering and Microsoft Research to investigate how to design interactive technology to better support creativity, innovation, and discovery. As part of that research, schraefel draws on her work with athletes as a professional strength and conditioning, movement, and nutrition coach for design insights into real people's longitudinal experience of, and challenges with, wellbeing practice (http://begin2dig.com). mc directs the Human Systems Interaction Lab at Southampton, where the vision is to make better normal and make normal better, and the mission is to explore how ICT can support the brain/body connexion to enhance innovation, creativity, and quality of life for all.

10/16/2014 | Leah Findlater, Assistant Professor, iSchool (link) & Uran Oh, CS PhD Student | ASSETS 2014 Practice Talks
Accessibility in Context: Understanding the Truly Mobile Experience of Smartphone Users with Motor Impairments
Lab-based studies on touchscreen use by people with motor impairments have identified both positive and negative impacts on accessibility. Little work, however, has moved beyond the lab to investigate the truly mobile experiences of users with motor impairments. We conducted two studies to investigate how smartphones are being used on a daily basis, what activities they enable, and what contextual challenges users are encountering. The first study was a small online survey with 16 respondents. The second study was much more in depth, including an initial interview, two weeks of diary entries, and a 3-hour contextual session that included neighborhood activities. Four expert smartphone users participated in the second study, and we used a case study approach for analysis. Our findings highlight the ways in which smartphones are enabling everyday activities for people with motor impairments, particularly in overcoming physical accessibility challenges in the real world and supporting writing and reading. We also identified important situational impairments, such as the inability to retrieve the phone while in transit, and confirmed many lab-based findings in the real-world setting. We present design implications and directions for future work.
Design of and Subjective Response to On-body Input for People With Visual Impairments
For users with visual impairments, who do not necessarily need the visual display of a mobile device, non-visual on-body interaction (e.g., Imaginary Interfaces) could provide accessible input in a mobile context. Such interaction provides the potential advantages of an always-available input surface, and increased tactile and proprioceptive feedback compared to a smooth touchscreen. To investigate preferences for and design of accessible on-body interaction, we conducted a study with 12 visually impaired participants. Participants evaluated five locations for on-body input and compared on-phone to on-hand interaction with one versus two hands. Our findings show that the least preferred areas were the face/neck and the forearm, while locations on the hands were considered to be more discreet and natural. The findings also suggest that participants may prioritize social acceptability over ease of use and physical comfort when assessing the feasibility of input at different locations of the body. Finally, tradeoffs were seen in preferences for touchscreen versus on-body input, with on-body input considered useful for contexts where one hand is busy (e.g., holding a cane or dog leash). We provide implications for the design of accessible on-body input.

10/23/2014 | Andrea Wiggins, Assistant Professor, iSchool (link) | Citizen Science at Scale: Human Computation for Science, Education, and Sustainability

10/30/2014 | Nicholas Diakopoulos, Assistant Professor, UMD College of Journalism (link) | Computational Journalism: From Tools to Algorithmic Accountability
Abstract: Computational Journalism was initially conceived of as an application of computing technologies to enable journalism across information tasks such as information gathering, organization and sensemaking, storytelling, and dissemination. But computing and algorithms can also become the object of journalism. Algorithms adjudicate a large array of decisions in our lives: not just search engines and personalized online news systems, but educational evaluations, markets and political campaigns, and the management of social services like welfare and public safety. A new form of computational journalism that I call "Algorithmic Accountability Reporting" is emerging to apply the core journalistic functions of watchdogging and accountability reporting to algorithms. In this talk I will provide some perspective on the tool-oriented roots of computational journalism, and then discuss how algorithmic accountability reporting is emerging as a mechanism for elucidating and articulating the power structures, biases, and influence that computational artifacts exert in society.
Bio: Nicholas Diakopoulos is an Assistant Professor at the University of Maryland College of Journalism. His research is in computational and data journalism with an emphasis on algorithmic accountability, narrative data visualization, and social computing in the news. He received his Ph.D. in Computer Science from the School of Interactive Computing at Georgia Tech where he co-founded the program in Computational Journalism. Before UMD he worked as a researcher at Columbia University, Rutgers University, and CUNY studying the intersections of information science, innovation, and journalism. Nick can be contacted via email at nad@umd.edu, and is online at @ndiakopoulos and http://www.nickdiakopoulos.com.

11/06/2014 | Susan Winter & Diane Travis | BS HCI specialization: Emerging undergraduate program

11/13/2014 | |

11/20/2014 | Beverly Harrison, Principal Scientist & Director of Mobile Research, Yahoo! | Yahoo Labs – Mobile Research
In this talk, Beverly will describe key areas that the Yahoo Labs Mobile Research team is actively working on (and hiring for!). Several recent research projects will be presented, including a study of teens' use of smartphones and mobile apps, a study of people's understanding of what "personalized ads" means, prototypes of speech-related mobile interfaces, and highlights of watch, button, and hardware prototyping efforts.
Bio: Beverly Harrison is currently a Principal Scientist and Director of Mobile Research at Yahoo Labs. Her expertise and passion over the last 20 years has been creating, building, and evaluating innovative user interface technologies and inferring user behavior patterns from various types of sensor data. She has previously worked at Xerox PARC, IBM Research, Intel Research, and Amazon/Lab126, as well as at startups. Beverly has 80+ publications, holds over 50 patents, and has held three affiliate faculty positions (CSE, iSchool, and Design) at the University of Washington. She has a B.Math (Waterloo) and an M.Sc. and Ph.D. in Human Factors Engineering (Toronto).

11/27/2014 | No Brown Bag for Thanksgiving break. |

12/04/2014 | |

12/11/2014 | |

12/18/2014 | |