Wanqian Yang

I am a second-year PhD student in Computer Science at NYU Courant, where I focus on machine learning and am co-advised by Andrew Gordon Wilson and Rajesh Ranganath. My broad research interests are probabilistic ML (Bayesian deep learning, deep generative models), computational cognitive modeling, and, more generally, building safe and robust AI systems. Most recently, I've been interested in causal representation learning and disentanglement, and their application to mitigating shortcut learning.

I graduated from Harvard College in May 2020 with a joint degree in Computer Science and Statistics, where I was fortunate to collaborate with Finale Doshi-Velez, Hima Lakkaraju, and Sam Gershman. In between degrees, I spent a year at Apple.

You can reach me at wanqian at nyu dot edu.


previous experience

Summer 2022 I spent the summer at Berkshire Partners as a data scientist, supporting their private equity efforts.
Sep 2020 - Sep 2021 I was a data engineer on Apple’s Siri Data (Search) team, where I worked on analyzing and improving data quality and health.
Summer 2020 I was a machine learning research intern at Nuro, a self-driving startup focused on autonomous goods delivery. I worked on improving their perception models.
May 2020 I wrote my senior thesis on (i) specifying interpretable priors and (ii) evaluating variational approximations for Bayesian neural networks. I was advised by Finale Doshi-Velez (Computer Science) and Alexander Young (Statistics). My thesis was awarded a Hoopes Prize.
Summer 2019 I was a data science intern at Apple, where I worked on anomaly detection for Siri Search analytics.
Summer 2018 I was a software engineering intern at TCV, a venture capital firm, where I worked on data-driven approaches for automating sourcing efforts.

selected publications

  1. NeurIPS
    Chroma-VAE: Mitigating Shortcut Learning with Generative Classifiers
    Yang, W., Kirichenko, P., Goldblum, M., and Wilson, A. G.
    In Advances in Neural Information Processing Systems, 2022.
  2. NeurIPS
    Incorporating Interpretable Output Constraints in Bayesian Neural Networks
    Yang, W., Lorch, L., Graule, M. A., Lakkaraju, H., and Doshi-Velez, F.
    In Advances in Neural Information Processing Systems, 2020.
    [Accepted as spotlight paper, top ~3% of papers.]
  3. Thesis
    Making Decisions Under High Stakes: Trustworthy and Expressive Bayesian Deep Learning
    Yang, W.
    Senior Thesis, Harvard University, 2020.
    (The PDF contains post hoc corrections for minor errata.)
  4. PLOS CB
    Discovery of Hierarchical Representations for Efficient Planning
    Tomov, M. S., Yagati, S., Kumar, A., Yang, W., and Gershman, S. J.
    PLOS Computational Biology, 2020.
  5. ICML Workshop
    Output-Constrained Bayesian Neural Networks
    Yang, W.*, Lorch, L.*, Graule, M. A.*, Srinivasan, S., Suresh, A., Yao, J., Pradier, M. F., and Doshi-Velez, F.
    In the ICML Workshop on Uncertainty and Robustness in Deep Learning, 2019.