GeoLLM: Extracting Geospatial Knowledge from Large Language Models

Stanford University

GeoLLM effectively extracts geospatial knowledge from LLMs.

Abstract

The application of machine learning (ML) in a range of geospatial tasks is increasingly common but often relies on globally available covariates such as satellite imagery that can be expensive or lack predictive power. Here we explore the question of whether the vast amounts of knowledge found in Internet language corpora, now compressed within large language models (LLMs), can be leveraged for geospatial prediction tasks. We first demonstrate that LLMs embed remarkable spatial information about locations, but naively querying LLMs using geographic coordinates alone is ineffective in predicting key indicators like population density. We then present GeoLLM, a novel method that can effectively extract geospatial knowledge from LLMs with auxiliary map data from OpenStreetMap. We demonstrate the utility of our approach across multiple tasks of central interest to the international community, including the measurement of population density and economic livelihoods. Across these tasks, our method demonstrates a 70% improvement in performance (measured using Pearson's r^2) relative to baselines that use nearest neighbors or information directly from the prompt, and performance equal to or exceeding satellite-based benchmarks in the literature. With GeoLLM, we observe that GPT-3.5 outperforms Llama 2 and RoBERTa by 19% and 51% respectively, suggesting that the performance of our method scales well with the size of the model and its pretraining dataset. Our experiments reveal that LLMs are remarkably sample-efficient, rich in geospatial information, and robust across the globe. Crucially, GeoLLM shows promise in mitigating the limitations of existing geospatial covariates and complementing them well.

Our method

GeoLLM is a simple method that efficiently extracts the vast geospatial knowledge contained in LLMs. It does so by fine-tuning LLMs on prompts constructed with auxiliary map data from OpenStreetMap. We show that a substantial amount of geospatial knowledge in LLMs can be revealed simply by querying them to describe an address. However, extracting this knowledge from LLMs is not trivial. With our prompting strategy, we can pinpoint a location and provide the LLM with sufficient spatial context, enabling it to access and leverage its geospatial knowledge to make predictions.
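To make the prompting strategy concrete, here is a minimal sketch of how such a location prompt might be assembled from coordinates, an address, and nearby places drawn from OpenStreetMap. The exact template, field names, and distance/direction encoding below are illustrative assumptions, not the paper's verbatim specification:

```python
def build_prompt(lat, lon, address, nearby_places, task="Population Density"):
    """Assemble a GeoLLM-style location prompt.

    Illustrative format only: the template and the "<dist>km <bearing> <name>"
    encoding of nearby places are assumptions for this sketch.
    nearby_places: list of (name, distance_km, compass_bearing) tuples,
    e.g. as retrieved from OpenStreetMap.
    """
    places = ", ".join(
        f"{dist:.1f}km {bearing} {name}" for name, dist, bearing in nearby_places
    )
    return (
        f"Coordinates: ({lat:.4f}, {lon:.4f})\n"
        f"Address: {address}\n"
        f"Nearby Places: {places}\n"
        f"{task}:"  # the model is fine-tuned to complete this line
    )

prompt = build_prompt(
    37.4275, -122.1697,
    "Stanford, Santa Clara County, California, United States",
    [("Palo Alto", 3.2, "NE"), ("Menlo Park", 5.0, "N")],
)
print(prompt)
```

The key idea the sketch captures is that the surrounding map context, not the raw coordinates, is what lets the model locate itself and retrieve relevant knowledge.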

Performance

Popular LLMs such as GPT-3.5 and Llama 2 can be fine-tuned to achieve state-of-the-art performance on a variety of large-scale geospatial datasets for tasks including assessing population density, asset wealth, mean income, women's education and more. Performance of a fine-tuned GPT-3.5 equals or exceeds satellite-based benchmarks in the literature. GPT-3.5, Llama 2, and RoBERTa show a 70%, 43%, and 13% improvement in Pearson's r^2 respectively over baselines that use nearest neighbors or information directly from the prompt, suggesting that the models' geospatial knowledge scales with their size and the size of their pretraining dataset. LLMs are also remarkably sample-efficient. Constructing the right prompt is key to extracting geospatial knowledge. Our ablations find that prompts constructed from map data allow the models to access their knowledge more efficiently.
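Fine-tuning a language model for regression requires representing continuous targets as short text completions. One common way to do this, shown in the sketch below, is to min-max normalize each target onto a fixed one-decimal scale so the model only has to generate a compact numeric string; the specific 0.0-9.9 range and scaling here are assumptions for illustration:

```python
def scale_labels(values, lo=None, hi=None):
    """Map continuous targets (e.g. population density) onto a 0.0-9.9
    one-decimal text scale for use as fine-tuning completions.

    Illustrative preprocessing: the exact range and normalization used
    for any given dataset are assumptions of this sketch.
    """
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    return [f"{9.9 * (v - lo) / (hi - lo):.1f}" for v in values]

labels = scale_labels([100, 550, 1000])
print(labels)
```

Each prompt/label pair then becomes one fine-tuning example, and predictions are read back by parsing the generated number and inverting the scaling.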

BibTeX

@inproceedings{manvi2024geollm,
  title={Geo{LLM}: Extracting Geospatial Knowledge from Large Language Models},
  author={Rohin Manvi and Samar Khanna and Gengchen Mai and Marshall Burke and David B. Lobell and Stefano Ermon},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=TqL2xBwXP3}
}