Abstract

Representation learning has been instrumental in the success of machine learning, offering compact and performant data representations for diverse downstream tasks. In the spatial domain, it has been pivotal in extracting latent patterns from various data types, including points, polylines, polygons, and networked structures. However, existing approaches often fall short of explicitly capturing both semantic and spatial information, relying instead on proxies and synthetic features. This paper presents GeoNN, a novel graph neural network-based model designed to learn spatially aware embeddings for geospatial entities. GeoNN leverages edge features generated from geodesic functions, dynamically selecting relevant features based on relative locations. It introduces both transductive (GeoNN-T) and inductive (GeoNN-I) models, ensuring effective encoding of geospatial features and scalability as entities are added or removed. Extensive experiments demonstrate GeoNN's effectiveness on location-sensitive superpixel-based graphs and real-world points of interest, outperforming baselines across various evaluation measures.
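
To make the idea of geodesic edge features concrete, the sketch below is an illustrative assumption rather than the paper's actual architecture: the names `haversine_km`, `geo_message_passing`, and the exponential distance weighting are hypothetical choices. It shows one plausible way to derive an edge feature from the great-circle distance between two geospatial entities and use it to weight neighbor aggregation in a single message-passing step.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle (geodesic) distance between two lon/lat points, in km."""
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def geo_message_passing(node_feats, coords, edges, length_scale_km=1.0):
    """One distance-weighted aggregation step: closer neighbors contribute more.

    node_feats : (N, d) array of entity features
    coords     : (N, 2) array of (lon, lat) per entity
    edges      : iterable of (src, dst) index pairs
    """
    agg = np.zeros_like(node_feats, dtype=float)
    weight_sum = np.zeros(len(node_feats))
    for src, dst in edges:
        d = haversine_km(*coords[src], *coords[dst])
        w = np.exp(-d / length_scale_km)   # geodesic edge feature used as a weight
        agg[dst] += w * node_feats[src]
        weight_sum[dst] += w
    weight_sum[weight_sum == 0] = 1.0      # leave isolated nodes unchanged by division
    return agg / weight_sum[:, None]
```

In a full model, such a distance-derived edge feature would typically be fed through learnable layers rather than a fixed exponential kernel, and stacked over several rounds of message passing; the sketch only illustrates how relative location can enter the aggregation.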
