Zhiting (May) Mei

IRoM Lab | maymei@princeton.edu


Hi! I’m a fourth-year PhD student in the IRoM Lab at Princeton.

My research vision is to build capable, reliable, and trustworthy robots that can operate safely in the open world by understanding and managing uncertainty. To realize this vision, my work focuses on establishing both theoretical foundations and practical algorithms for uncertainty-aware embodied intelligence.

I aim to bridge the gap between specialist and generalist robots by understanding and improving the limits of generalization across perception, prediction, and reasoning. I work with generative video world models, large (reasoning) language models, vision foundation models, and vision-language-action models. Across these domains, I derive theoretical bounds on sensor-based and language-instructed autonomy, establish safety assurances via rigorous uncertainty quantification, probe the generalizability of foundation models, and develop novel uncertainty quantification methods, improving both the performance and calibration of embodied AI.

news

Mar 17, 2026 I gave a talk at the Google DeepMind reading group on PlayWorld with Tenny Yin and Ola Shorinwa! ✨
Mar 12, 2026 I gave a talk at the Intent Lab on Trustworthy World Modeling! ✨
Feb 24, 2026 I gave a talk at the Apple reading group on Trustworthy World Modeling! ✨

selected publications

  1. PlayWorld: Learning Robot World Models from Autonomous Play
    Tenny Yin, Zhiting Mei, Zhonghe Zheng, and 8 more authors
    arXiv preprint arXiv:2603.09030, 2026
  2. Video Generation Models in Robotics: Applications, Research Challenges, Future Directions
    Zhiting Mei*, Tenny Yin*, Ola Shorinwa*, and 8 more authors
    arXiv preprint arXiv:2601.07823, 2026
  3. World Models That Know When They Don’t Know: Controllable Video Generation with Calibrated Uncertainty
    Zhiting Mei*, Tenny Yin, Micah Baker, and 2 more authors
    2025
  4. How Confident are Video Models? Empowering Video Models to Express their Uncertainty
    Zhiting Mei*, Ola Shorinwa*, and Anirudha Majumdar
    2025
  5. Geometry Meets Vision: Revisiting Pretrained Semantics in Distilled Fields
    Zhiting Mei*, Ola Shorinwa*, and Anirudha Majumdar
    2025
  6. Reasoning about Uncertainty: Do Reasoning Models Know When They Don’t Know?
    Zhiting Mei*, Christina Zhang, Tenny Yin, and 3 more authors
    In 19th Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2025
  7. VERDI: VLM-Embedded Reasoning for Autonomous Driving
    Bowen Feng*, Zhiting Mei*, Baiang Li, and 4 more authors
    2025
  8. Perceive With Confidence: Statistical Safety Assurances for Navigation with Learning-Based Perception
    Zhiting Mei, Anushri Dixit, Meghan Booker, and 5 more authors
    The International Journal of Robotics Research (IJRR), 2025
  9. WoMAP: World Models for Embodied Open-Vocabulary Object Localization
    Tenny Yin*, Zhiting Mei, Tao Sun, and 6 more authors
    In 9th Annual Conference on Robot Learning (CoRL), 2025
  10. A Survey on Uncertainty Quantification of Large Language Models: Taxonomy, Open Research Challenges, and Future Directions
    Ola Shorinwa, Zhiting Mei, Justin Lidard, and 2 more authors
    ACM Computing Surveys, Sep 2025
  11. Fundamental Limits for Sensor-Based Robot Control
    Anirudha Majumdar, Zhiting Mei, and Vincent Pacelli
    The International Journal of Robotics Research (IJRR), Aug 2023