
Online Causal Inference Seminar

Event Details:

Tuesday, October 25, 2022
8:30am - 9:30am PDT

This event is open to:

General Public

Free and open to the public

  • Tuesday, October 25, 2022 [Link to join] (ID: 996 2837 2037, Password: 386638)

  • Speakers: Rahul Singh (MIT) & Jiaqi Zhang (MIT)
  • Talk 1 Title: Causal Inference with Corrupted Data: Measurement Error, Missing Values, Discretization, and Differential Privacy
  • Talk 1 Abstract: The 2020 US Census will be published with differential privacy, implemented by injecting synthetic noise into the data. Controversy has ensued, with debates that center on the painful trade-off between the privacy of respondents and the precision of economic analysis. Is this trade-off inevitable? To answer this question, we formulate a semiparametric model of causal inference with high dimensional data that may be noisy, missing, discretized, or privatized. We propose a new end-to-end procedure for data cleaning, estimation, and inference with data cleaning-adjusted confidence intervals. We prove consistency, Gaussian approximation, and semiparametric efficiency by finite sample arguments. The rate of Gaussian approximation is n^(-1/2) for semiparametric estimands such as average treatment effect, and it degrades gracefully for nonparametric estimands such as heterogeneous treatment effect. Our key assumption is that the true covariates are approximately low rank, which we interpret as approximate repeated measurements and validate in the Census. In our analysis, we provide nonasymptotic theoretical contributions to matrix completion, statistical learning, and semiparametric statistics. We verify the coverage of the data cleaning-adjusted confidence intervals in simulations. Finally, we conduct a semi-synthetic exercise calibrated to privacy levels mandated for the 2020 US Census.

 

  • Talk 2 Title: Active Learning for Optimal Intervention Design in Causal Models
  • Talk 2 Abstract: Sequential experimental design to discover interventions that achieve a desired outcome is a key problem across disciplines. We formulate a theoretically grounded strategy that uses the samples obtained so far from different interventions to update the belief about the underlying causal model, as well as to identify samples that are most informative about optimal interventions and thus should be acquired in the next batch. The inclusion of causality allows for the identification of optimal interventions with significantly fewer but carefully selected samples. This is particularly critical when the ability to acquire interventional data is limited due to cost or ethical considerations. To demonstrate the computational and sample efficiency, we apply our approach to a perturbational single-cell transcriptomic dataset, where significant improvements over baselines are observed. The complexity of the single-cell dataset showcases the applicability of our method to real-world problems where data can be sparse and highly noisy.
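To fix intuition for Talk 1's pipeline (clean the corrupted covariates by exploiting approximate low-rankness, then estimate a causal effect on the cleaned data), here is a minimal toy sketch. Everything in it — the dimensions, the noise level, the regression-adjustment estimator — is an illustrative assumption, not the procedure from the talk, which uses semiparametric machinery and adjusted confidence intervals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: true covariates are low rank; we only observe a noisy version.
n, p, rank = 500, 20, 3
U = rng.normal(size=(n, rank))
V = rng.normal(size=(rank, p))
X_true = U @ V                                        # approximately low-rank covariates
X_obs = X_true + rng.normal(scale=0.5, size=(n, p))   # measurement error

# Step 1: "clean" the data with a truncated SVD, exploiting the low-rank structure.
u, s, vt = np.linalg.svd(X_obs, full_matrices=False)
X_clean = u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

# Step 2: estimate the average treatment effect by regression adjustment on the
# cleaned covariates (a crude stand-in for the semiparametric estimator).
beta = rng.normal(size=p)
propensity = 1 / (1 + np.exp(-X_true @ beta / np.sqrt(p)))
D = rng.binomial(1, propensity)                       # confounded treatment
tau = 2.0                                             # true ATE
Y = X_true @ beta / np.sqrt(p) + tau * D + rng.normal(size=n)

def fit_predict(X_fit, y_fit, X_eval):
    # Ordinary least squares with an intercept, evaluated on X_eval.
    coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(X_fit)), X_fit], y_fit, rcond=None)
    return np.c_[np.ones(len(X_eval)), X_eval] @ coef

mu1 = fit_predict(X_clean[D == 1], Y[D == 1], X_clean)
mu0 = fit_predict(X_clean[D == 0], Y[D == 0], X_clean)
ate_hat = float(np.mean(mu1 - mu0))
print(f"estimated ATE: {ate_hat:.2f} (truth: {tau})")
```

The point of the toy is the ordering: denoising first recovers covariates good enough for downstream adjustment, which is why the talk's confidence intervals must account for the cleaning step.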
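Talk 2's loop — update a belief over causal models from interventional samples, then acquire the intervention that is most informative about which intervention is optimal — can be sketched with a toy discrete model class. The three candidate models, their predicted outcomes, and the disagreement-based acquisition rule below are all illustrative assumptions, not the talk's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate causal models predict the mean outcome of each intervention.
interventions = ["do(A=1)", "do(B=1)", "do(C=1)"]
model_means = np.array([
    [1.0, 3.0, 0.0],   # model 0
    [1.0, 0.5, 2.5],   # model 1 (the true model in this toy)
    [1.0, 3.0, 2.5],   # model 2
])
true_model, sigma = 1, 0.5
posterior = np.ones(3) / 3          # uniform prior over models

for _ in range(10):
    # Acquisition: posterior-weighted variance of the models' predictions —
    # query the intervention on which the surviving models disagree most.
    pred_var = np.array([
        np.sum(posterior * (model_means[:, j] - posterior @ model_means[:, j]) ** 2)
        for j in range(len(interventions))
    ])
    j = int(np.argmax(pred_var))                        # most informative intervention
    y = rng.normal(model_means[true_model, j], sigma)   # run the experiment
    # Bayes update with a Gaussian likelihood.
    lik = np.exp(-0.5 * ((y - model_means[:, j]) / sigma) ** 2)
    posterior = posterior * lik
    posterior /= posterior.sum()

best_model = int(np.argmax(posterior))
best_intervention = interventions[int(np.argmax(model_means[best_model]))]
print(best_model, best_intervention, posterior.round(3))
```

Because the loop only spends samples on interventions that discriminate between models, it pins down the optimal intervention with far fewer experiments than querying interventions uniformly — the qualitative point the abstract makes about costly interventional data.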
