CSR 2021: The 1st International Workshop on Causality in Search and Recommendation
Co-located with The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval
The motivation of the workshop is to promote the research and application of causal analysis and causal modeling in Information Retrieval tasks, including but not limited to search, recommendation, QA, and dialog. Causality in IR aims to develop causal models that not only improve ranking performance but also benefit IR systems from a broader range of perspectives, such as explainability, fairness, robustness, and trustworthiness.
In a broader sense, researchers across the AI community have also realized the importance of advancing from correlative learning to causal learning, which promises to address a wide range of problems in machine learning, machine reasoning, computer vision, autonomous systems, and natural language processing. This trend underscores the importance for our IR/RecSys communities of advancing from correlative modeling to causal modeling in search, recommendation, QA, and dialog systems.
We welcome contributions of both technical and perspective papers. Papers should be at least 4 pages and at most 12 pages in the standard ACM double-column template; any length between 4 and 12 pages is welcome, and space for references is unlimited. We welcome papers on a wide range of topics, including but not limited to causal search and recommendation models, incorporating multi-modal information for causal modeling, evaluation of causal search and recommendation models, user studies of causal models, and causal models for explainable, fair, unbiased, and robust IR. More topics are listed in the call for papers. Papers must be submitted to EasyChair by 23:59 AoE (Anywhere on Earth) on April 29 (abstract) and May 6 (full paper), 2021. Notifications will be sent on May 20, 2021.
| Session | Title | EDT (New York, Montreal, Santiago) | CET (Paris, Berlin, Rome) | CST (Beijing, Hong Kong, Singapore) | Speaker |
|---|---|---|---|---|---|
| 1 | Constraint-based Causal Structure Learning and Effect Identification | 7:45am-8:30am (7/15) | 1:45pm-2:30pm (7/15) | 7:45pm-8:30pm (7/15) | Jiji Zhang |
| | How to De-Bias for Industrial Recommender System? A Causal Perspective | 8:30am-9:15am (7/15) | 2:30pm-3:15pm (7/15) | 8:30pm-9:15pm (7/15) | Zhenhua Dong |
| | ExDocS: Evidence based Explainable Document Search | 9:15am-9:35am (7/15) | 3:15pm-3:35pm (7/15) | 9:15pm-9:35pm (7/15) | Sayantan Polley |
| | Towards Truly Useful Recommender Systems | 9:40am-10:35am (7/15) | 3:40pm-4:35pm (7/15) | 9:40pm-10:35pm (7/15) | Tobias Schnabel |
| 2 | Off-policy Evaluation and Learning for Interactive Systems | 7:00pm-8:00pm (7/15) | 1:00am-2:00am (7/16) | 7:00am-8:00am (7/16) | Yi Su |
| | Unbiased Counterfactual Estimation of Ranking Metrics | 8:00pm-8:20pm (7/15) | 2:00am-2:20am (7/16) | 8:00am-8:20am (7/16) | Haining Yu |
| | A Large-scale Dataset for Decision Making Algorithms | 8:20pm-8:40pm (7/15) | 2:20am-2:40am (7/16) | 8:20am-8:40am (7/16) | Yuta Saito |
| | Learning Causal Explanations for Recommendation | 8:40pm-9:00pm (7/15) | 2:40am-3:00am (7/16) | 8:40am-9:00am (7/16) | Shuyuan Xu |
Dr. Jiji Zhang, Professor of Philosophy, Hong Kong Baptist University
Title: Constraint-based Causal Structure Learning and Effect Identification
Abstract: The constraint-based approach to causal discovery seeks to infer certain statistical relations from data and use these relations as constraints to search for causal structures, under suitable assumptions. In this talk I will describe some recent work in this line of research, including attempts to relax a standard assumption known as the faithfulness assumption, use of SAT solvers to execute the search, and extension of the framework to handle heterogeneous or non-stationary data. In addition, since the output of a constraint-based algorithm is usually limited to a Markov equivalence class of causal structures, I will present some new results on the identifiability of intervention effects, when the causal structure is only partially known.
Bio: Dr. Jiji Zhang is a professor of philosophy at Hong Kong Baptist University, and taught previously at Lingnan University and California Institute of Technology. His main research interests are interdisciplinary, spanning a wide range of questions on causation and causal inference, including especially epistemological, logical, and methodological issues. His work has been published in premier venues of artificial intelligence and machine learning, as well as in leading journals of philosophy.
Dr. Tobias Schnabel, Senior Researcher at Microsoft Research
Title: Towards Truly Useful Recommender Systems
Abstract: Recommender systems are almost exclusively used as part of more complex applications and with varying user interfaces. Much of past research has ignored that complexity and focused purely on algorithmically modeling user preferences. In this talk, I argue that this focus has severely limited our ability to make progress in real-world settings. Because of the complexity of real-world systems, I strongly believe that designing truly useful recommender systems requires jointly considering how user data is generated and how recommendations are consumed. In my talk, I will highlight two examples of how this can be done. In the first case, causally modeling biases in the data drastically improved the relevance of related-item recommendations. The second example demonstrates how, without jointly considering the user experience and the feedback data, practitioners can easily arrive at incorrect conclusions about a system’s usefulness. I will end my talk with a roadmap of where future research should fill in knowledge and technology gaps.
Bio: Dr. Tobias Schnabel is a researcher in the Information and Data Sciences group at Microsoft Research in Redmond. He is interested in improving human-facing machine learning systems in an integrated way, considering not only algorithmic but also human factors. To this end, his research draws from causal inference, reinforcement learning, machine learning, HCI, and decision-making under uncertainty. His work has often used recommender systems or information retrieval systems as natural application domains. Before joining Microsoft, he obtained his Ph.D. from the Computer Science Department at Cornell University under Thorsten Joachims. Prior to that, he worked with Hinrich Schütze on NLP-related projects during and after his Master's thesis.
Dr. Zhenhua Dong, Principal Researcher, Huawei Noah's Ark Lab
Title: How to De-Bias for Industrial Recommender System? A Causal Perspective
Abstract: Recommender systems face bias challenges such as item exposure bias, position bias, user attention bias, and feedback bias. Models trained on biased data are themselves biased, and they hurt both the user experience and the platform's revenue by recommending biased items. In this talk, I first summarize the main biases in industrial recommender systems and then propose practical solutions from a causal perspective. The talk consists of three parts. The first part describes the main biases in commercial recommender systems and how these biases affect recommendation modeling. In the second part, I briefly introduce two types of causality-inspired approaches: counterfactual and interventional. I then detail both approaches and related studies: for the counterfactual approach, I introduce practical Inverse Propensity Score (IPS) and doubly robust methods and summarize their advantages and disadvantages; for the interventional approach, I first detail uniform interventions and how to train unbiased models with uniform data, and then briefly describe some non-uniform intervention methods. The last part emphasizes that, despite several existing studies and engineering solutions, biases in recommender systems remain challenging, especially in scenarios lacking prior knowledge about the target distribution or user attention. I will close by discussing some interesting directions, such as counterfactual simulation, offline policy evaluation, and fairness.
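The IPS method mentioned in the abstract corrects for exposure bias by reweighting each logged interaction by the inverse of the probability that the item was shown. As a minimal illustrative sketch (the synthetic relevance and propensity values below are assumptions for demonstration, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

n_items = 5
# True (unknown) probability that a user likes each item; mean is 0.5.
true_relevance = np.array([0.9, 0.7, 0.5, 0.3, 0.1])
# Logging policy: popular items are exposed far more often (exposure bias).
propensity = np.array([0.5, 0.25, 0.15, 0.07, 0.03])

n_logs = 100_000
shown = rng.choice(n_items, size=n_logs, p=propensity)
clicked = rng.random(n_logs) < true_relevance[shown]

# Naive estimate of average item relevance implicitly weights items
# by how often they were exposed, so it is biased toward popular items.
naive = clicked.mean()

# IPS reweights each logged impression by 1 / (n_items * propensity),
# recovering an unbiased estimate of the mean relevance over items,
# as if exposure had been uniform.
weights = 1.0 / (n_items * propensity[shown])
ips = float(np.mean(clicked * weights))

print(f"true mean relevance: {true_relevance.mean():.3f}")
print(f"naive estimate:      {naive:.3f}")  # inflated by exposure bias
print(f"IPS estimate:        {ips:.3f}")    # close to the true mean
```

The trade-off the talk alludes to is visible here: the IPS weight for the rarest item is large (1/0.15), so with small propensities the estimator becomes high-variance, which motivates the doubly robust variants.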
Bio: Zhenhua Dong is a principal researcher at Huawei Noah’s Ark Lab; his current research topics include recommender systems, causality, counterfactual learning, and their applications. He leads Huawei's recommender system research team, whose technologies have delivered significant improvements to industrial recommender systems such as Huawei news feeds, Huawei App Gallery, Huawei music, instant services, and advertising. With more than 30 research articles published in TOIS, SIGIR, WWW, RecSys, WikiSym, AAAI, etc., and more than 20 patents, he is known for his research on recommender systems and machine learning. He serves on the committees of academic organizations such as ACM KDD, ACM RecSys, and ACM SIGAPP, and as a reviewer for TOIS, TKDE, KDD, WWW, ICDM, etc. He received his BEng degree from Tianjin University in 2006 and his Ph.D. from Nankai University in 2012. He was a visiting scholar at the GroupLens lab at the University of Minnesota during 2010-2011.
Yi Su, Ph.D. Candidate at Cornell University
Title: Off-policy Evaluation and Learning for Interactive Systems
Abstract: Many real-world applications, ranging from news recommendation to online advertising and personalized healthcare, are naturally modeled by the contextual-bandit protocol, where a learner repeatedly observes a context, takes an action, and accrues reward. A fundamental question in such settings is: given a new version of the system (i.e. policy), what is the expected reward? Online A/B testing offers a generic way of answering this question through controlled randomized trials. However, such online experimentation is slow, can only be done for a small number of new policies, has high engineering cost, and can impose substantial cost on users when the new policy is of low quality. Overcoming these shortcomings motivates the goal of offline A/B testing, also known as off-policy evaluation (OPE), which does not require new online experiments for every new policy we want to evaluate. Instead, OPE reuses past data we already have. At the core of this methodology lies the design of counterfactual estimators that accurately evaluate the performance of a new policy using only logged data of past behavior. In this talk, I will present some of my recent work on off-policy evaluation. It includes the discovery of a general family of counterfactual estimators, followed by a new optimization-based framework for designing estimators, which obtains a better bias-variance tradeoff than the doubly robust estimator in finite samples. Beyond off-policy evaluation, I will also introduce the hyper-parameter selection problem in OPE and a new data-driven approach for selecting near-optimal parameters. Finally, I will briefly present some work in off-policy learning, especially focusing on how to safely use support-deficient log data to learn an improved policy for deployment in the future.
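The core OPE setup described above, estimating a new policy's value from logged bandit data, can be sketched in a few lines. The sketch below compares the classic IPS estimator with the doubly robust estimator that the abstract uses as a baseline; the policies, rewards, and the deliberately imperfect reward model `q_hat` are illustrative assumptions, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

n, n_actions = 50_000, 3
# True expected reward per action (unknown to the estimators).
true_reward = np.array([0.2, 0.5, 0.8])

# Logging policy mu: mostly picks action 0; we only see its data.
mu = np.array([0.7, 0.2, 0.1])
actions = rng.choice(n_actions, size=n, p=mu)
rewards = (rng.random(n) < true_reward[actions]).astype(float)

# Target policy pi to evaluate offline: mostly picks action 2.
pi = np.array([0.1, 0.2, 0.7])
true_value = float(pi @ true_reward)  # what an online A/B test would measure

# IPS: importance-weight each logged reward by pi(a)/mu(a).
w = pi[actions] / mu[actions]
v_ips = float(np.mean(w * rewards))

# Doubly robust: combine a (deliberately imperfect) reward model q_hat
# with an importance-weighted correction of its residuals. It stays
# unbiased as long as either q_hat or the propensities are correct.
q_hat = np.array([0.3, 0.4, 0.6])  # biased model of true_reward
v_dr = float(np.mean(pi @ q_hat + w * (rewards - q_hat[actions])))

print(f"true value: {true_value:.3f}")
print(f"IPS:        {v_ips:.3f}")
print(f"DR:         {v_dr:.3f}")
```

Both estimators recover the target policy's value without any new online experiment, which is the point of offline A/B testing; the estimators discussed in the talk refine this bias-variance tradeoff further.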
Bio: Yi Su is a PhD student in the Department of Statistics and Data Science at Cornell University, advised by Professor Thorsten Joachims. Her research interests lie in learning from user behavioral data and implicit feedback in search engines, recommender systems, and market platforms. She currently works on off-policy evaluation and learning in contextual bandits and reinforcement learning. She has interned at Microsoft Research and Bloomberg AI. Before joining Cornell, Yi received her BSc (Honors) in Mathematics from Nanyang Technological University in Singapore. She is the recipient of the Lee Kuan Yew Gold Medal (2016), the Bloomberg Data Science Fellowship (2019-2021), and EECS Rising Star 2020.
We welcome contributions of both technical and perspective papers on a wide range of topics, including but not limited to the following topics of interest:
PAPER SUBMISSION GUIDELINES
CSR 2021 paper submissions should be at least 4 pages and at most 12 pages in the standard double-column ACM SIG proceedings format; any length between 4 and 12 pages is welcome, and space for references is unlimited. Each accepted paper will have an oral presentation in a plenary session and will also be allocated a slot in a poster session to encourage discussion and follow-up between authors and attendees.
CSR 2021 submissions are double-blind. All submissions and reviews will be handled electronically. Additional information about formatting and style files is available on the ACM website. Papers must be submitted to EasyChair at https://easychair.org/conferences/?conf=csr20210 by 23:59 AoE (Anywhere on Earth) on April 29 (abstract) and May 6 (full paper), 2021.
For inquiries about the workshop and submissions, please email csr2021-0@easychair.org
All deadlines are at 23:59, AoE (Anywhere on Earth):
Apr 29, 2021: Abstract due
May 6, 2021: Submission due
May 20, 2021: Paper notification
June 20, 2021: Camera ready submission
July 15, 2021: Workshop day
CSR'21 will be co-located with The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval on July 15, 2021.