Rohin Manvi


Hello! I’m a final-year Computer Science student at Stanford University with a broad interest in the development and responsible application of generative foundation models, particularly large language models. Lately, I’ve been exploring ways to enable these models to “think” longer or harder when necessary, leveraging inference-time techniques like search, self-refinement, and new paradigms such as diffusion-based language models.

I’ve had the privilege of working with Prof. Stefano Ermon at Stanford, focusing on efficient inference techniques for LLMs, geospatial applications, and bias analysis. I’ve also applied machine learning in industry at Meta and Lacework, and investigated using LLMs for decision-making in autonomous systems.

If you’re interested in these topics—or just want to say hello—I’d love to hear from you!

selected publications

  1. Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation
    Rohin Manvi, Anikait Singh, and Stefano Ermon
    2024
  2. Large Language Models are Geographically Biased
    Rohin Manvi, Samar Khanna, Marshall Burke, David B. Lobell, and Stefano Ermon
    In Forty-first International Conference on Machine Learning, 2024
  3. GeoLLM: Extracting Geospatial Knowledge from Large Language Models
    Rohin Manvi, Samar Khanna, Gengchen Mai, Marshall Burke, David B. Lobell, and Stefano Ermon
    In The Twelfth International Conference on Learning Representations, 2024