UMass Amherst CrowdLogger as a Community Platform for Searcher Behavior Experiments

Henry Feild
Center for Intelligent Information Retrieval
University of Massachusetts Amherst

Searcher behavior can be mined for many ends. One common task is to use it to understand user habits, such as the tendency to re-find information or to switch search engines. Another is to evaluate multiple retrieval algorithms or interfaces. In both situations, researchers would ideally be able to request feedback from users, e.g., "Why did you switch search engines?" or "Which system do you prefer and why?" User studies are a great way to conduct such tasks, but they are also time consuming. Whether in a controlled lab setting or in situ, researchers conducting studies need to develop and configure logging software and recruit subjects. If any sense of user search history is needed, the study must be extended in order to collect that data. Reproducibility is also an issue because of differences in user populations and software.

To reduce the severity of these issues, we propose a community-shared platform on which to evaluate new retrieval algorithms and search tools and to explore user behavior. The platform software should be easy to install, have a large user base, provide secure and private logging, and expose an API that allows researchers to conduct experiments in situ. The software should also allow researchers to use it out-of-network for in-lab studies.

We believe that CrowdLogger is a natural base for this community-shared platform. CrowdLogger is an extension for the Firefox and Chrome web browsers that logs search behavior locally and provides mechanisms for aggregating sensitive data across users privately (which is useful if an algorithm depends on, e.g., query rewrites entered by other users). However, there are many additional challenges that we must overcome, including:

- data management across multiple experiments
- an API that gives researchers sufficient control over accessing user data and implementing experiments
- controlling what data is shared with researchers (just feedback data? or queries, etc., too?)
- incentivizing users to download the extension and participate in experiments

In this talk, we will give an overview of the proposed system and some initial ideas about how to address the challenges listed above. We hope to get feedback from the DIRE community to improve the development of the platform.
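
To make the researcher-facing API more concrete, the TypeScript sketch below shows one possible shape it could take. All names here (SearchEvent, ExperimentContext, promptUser, and so on) are hypothetical illustrations, not part of CrowdLogger's existing interface; they are meant only to suggest how an experiment might subscribe to locally logged events and ask for in-situ feedback.

```typescript
// Hypothetical types -- none of these names come from CrowdLogger itself.

/** A single logged search event, as the platform might expose it to an experiment. */
interface SearchEvent {
  timestamp: number;      // ms since epoch
  engine: string;         // e.g., "google", "bing"
  query: string;
  clickedUrls: string[];
}

/** Services the platform could hand to a registered experiment. */
interface ExperimentContext {
  /** Subscribe to search events as they are logged locally. */
  onSearchEvent(handler: (event: SearchEvent) => void): void;
  /** Ask the user an in-situ question; resolves to their answer, or null if dismissed. */
  promptUser(question: string, choices: string[]): Promise<string | null>;
  /** Record experiment output locally; what leaves the browser is governed by the sharing policy. */
  recordResult(result: object): void;
}

/** What a researcher would implement and submit to the platform. */
interface Experiment {
  id: string;
  start(ctx: ExperimentContext): void;
}

// Example: ask why the user switched search engines, at the moment it happens.
const engineSwitchStudy: Experiment = {
  id: "engine-switch-feedback-v1",
  start(ctx) {
    let lastEngine: string | null = null;
    ctx.onSearchEvent(async (event) => {
      if (lastEngine !== null && event.engine !== lastEngine) {
        const answer = await ctx.promptUser(
          "You just switched search engines. Why?",
          ["Poor results", "Habit", "Specific feature", "Other"]
        );
        ctx.recordResult({ from: lastEngine, to: event.engine, reason: answer });
      }
      lastEngine = event.engine;
    });
  },
};
```

Keeping the experiment's logic inside a sandboxed module like this is one way to address the challenges above: the platform, not the experiment code, would decide which fields of SearchEvent an experiment may see and which recorded results are ever shared with the researcher.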
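Similarly, the following sketch illustrates only the threshold idea behind private cross-user aggregation: an artifact (e.g., a query rewrite) is released only once enough distinct users have contributed it. CrowdLogger's actual mechanism is cryptographic, so that below-threshold artifacts remain hidden even from the aggregator; that layer is omitted here, and the class and method names are invented for illustration.

```typescript
// Illustrative only: a k-user support threshold over shared artifacts.
// CrowdLogger additionally encrypts contributions so that artifacts below
// the threshold are invisible even to the server; that is not shown here.

type UserId = string;

/** Collects artifacts (e.g., query rewrites) and releases only those
 *  reported by at least `k` distinct users. */
class ThresholdAggregator {
  private support = new Map<string, Set<UserId>>();

  constructor(private readonly k: number) {}

  contribute(user: UserId, artifact: string): void {
    if (!this.support.has(artifact)) {
      this.support.set(artifact, new Set());
    }
    this.support.get(artifact)!.add(user);
  }

  /** Artifacts whose distinct-user support meets the threshold. */
  releasable(): string[] {
    return Array.from(this.support.entries())
      .filter(([, users]) => users.size >= this.k)
      .map(([artifact]) => artifact);
  }
}

// Usage: a query rewrite is released only after 5 distinct users enter it.
const agg = new ThresholdAggregator(5);
agg.contribute("user-1", "nyc weather -> new york city weather");
console.log(agg.releasable()); // [] until 5 distinct users contribute the same rewrite
```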