Machine Learning for Uncertainty with Application to Causal Inference
Effective decision making requires understanding the uncertainty inherent in a problem, a need that spans statistics from deriving an estimator to training a predictive model. In this thesis, I discuss three topics toward developing new uncertainty methods for individual- and population-level inference problems, with applications in causal inference. In the first topic, I introduce Collaborating Networks (CN), a novel approach to capturing predictive distributions in regression. CN defines two neural networks with two distinct loss functions that approximate the conditional cumulative distribution function and its inverse, respectively. Bypassing the need to assume an explicit distribution family, such as the Gaussian, gives CN extra flexibility; empirically, CN generates sharp intervals with reliable coverage. In the second topic, I extend CN to estimate individual treatment effects in observational studies, augmenting it with a new adjustment scheme based on representation learning that is shown to effectively alleviate the imbalance between treatment groups. Moreover, I propose a new evaluation criterion that combines the estimated uncertainty with variation in utility functions (e.g., variability in risk tolerance) for more comprehensive decision making, in contrast to traditional approaches that study only the mean change in an individual's outcome due to a potential treatment. In the last topic, I present PSweight, an analysis pipeline for causal inference with propensity score weighting. Compared to other pipelines with similar purposes, PSweight offers a wider range of functionality, providing a comprehensive design-and-analysis platform that lets users construct different estimators and assess their uncertainty.
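To make the CN idea concrete: one network g(y, x) can target the conditional CDF F(y | x) and a second network f(q, x) its inverse (the conditional quantile function), with g fit via a cross-entropy loss against the indicator 1{y ≤ ŷ} at quantile points ŷ proposed by f, and f fit to satisfy g(f(q, x), x) ≈ q. The exact loss functions are specified in the thesis; the sketch below is a hypothetical instantiation of that two-loss structure (the function names `g_loss`, `f_loss`, and the toy Gaussian setup are illustrative assumptions, not the thesis's implementation).

```python
import numpy as np
from scipy.stats import norm  # assumes SciPy is available

rng = np.random.default_rng(0)

def g_loss(g, f, x, y, n_q=64):
    """Cross-entropy loss for the CDF network g, evaluated at quantile
    locations proposed by f (a hypothetical form of the CN g-loss)."""
    q = rng.uniform(size=(len(y), n_q))
    y_hat = f(q, x[:, None])                      # proposed quantile points
    p = np.clip(g(y_hat, x[:, None]), 1e-6, 1 - 1e-6)
    ind = (y[:, None] <= y_hat).astype(float)     # indicator 1{y <= y_hat}
    return -np.mean(ind * np.log(p) + (1 - ind) * np.log(1 - p))

def f_loss(g, f, x, n_q=64):
    """Squared error pushing f toward the inverse of g: g(f(q,x), x) ~ q."""
    q = rng.uniform(size=(len(x), n_q))
    return np.mean((g(f(q, x[:, None]), x[:, None]) - q) ** 2)

# Toy check with y | x ~ N(x, 1): plugging in the true conditional CDF and
# its inverse should make the f-loss vanish and the g-loss (near-)minimal.
x = rng.normal(size=500)
y = x + rng.normal(size=500)
g_true = lambda y_, x_: norm.cdf(y_ - x_)        # true F(y | x)
f_true = lambda q_, x_: x_ + norm.ppf(q_)        # true inverse CDF
```

In practice both g and f would be trained neural networks, with the two losses alternately (or jointly) minimized; here the closed-form Gaussian pair simply illustrates what each loss measures.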
Mentor: David Carlson, PhD
Zoom Link: Please contact firstname.lastname@example.org for details on how to join.