
MAPG proposes a multi-agent probabilistic grounding system enabling robots to execute metric-semantic navigation commands like 'two meters to the right of the fridge' in 3D scenes. The approach addresses the gap in VLMs' ability to reason about precise metric constraints alongside semantic references.
Meanings and Measurements: Multi-Agent Probabilistic Grounding for Vision-Language Navigation

Robots collaborating with humans must convert natural language goals into actionable, physically grounded decisions. For example, executing a command such as "go two meters to the right of the fridge" requires grounding semantic references, spatial relations, and metric constraints within a 3D scene. While recent vision-language models (VLMs) demonstrate strong semantic grounding capabilities, they are not explicitly designed to reason about metric constraints in physically defined spaces. In this work, we empirically demonstrate that state-of-the-art VLM-based grounding approaches struggle with complex metric-semantic language queries. To address this limitation, we propose MAPG (Multi-Agent Probabilistic Grounding), an agentic framework that decomposes language queries into structured subcomponents and queries a VLM to ground each component. MAPG then probabilistically composes these grounded outputs to produce metrically consistent, actionable decisions in 3D space. We evaluate MAPG on the HM-EQA benchmark and show consistent performance improvements over strong baselines. Furthermore, we introduce a new benchmark, MAPG-Bench, specifically designed to evaluate metric-semantic goal grounding, addressing a gap in existing language grounding evaluations. We also present a real-world robot demonstration showing that MAPG transfers beyond simulation when a structured scene representation is available.
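To make the decompose-then-compose idea concrete, here is a minimal illustrative sketch of how a query like "two meters to the right of the fridge" could be split into a semantic reference, a spatial relation, and a metric constraint, and how weighted object hypotheses might be composed into a goal. All function names, the 2D setup, and the composition rule are assumptions for illustration; the paper does not publish this implementation, and a real system would use VLM-produced detections and full 3D reasoning.

```python
import numpy as np

# Hypothetical MAPG-style composition sketch (illustrative assumptions,
# not the paper's actual implementation).

def ground_query(candidates, relation_offset):
    """Compose semantic candidates with a metric-spatial constraint.

    candidates: list of (xy position, probability) pairs for the referenced
      object (e.g. 'the fridge'), as a semantic grounding agent might return.
    relation_offset: metric offset encoding relation + distance, e.g.
      'two meters to the right' -> np.array([2.0, 0.0]).
    Returns weighted goal hypotheses in 2D.
    """
    goals = []
    for pos, p in candidates:
        mean = np.asarray(pos) + relation_offset  # apply the metric relation
        goals.append((mean, p))                   # keep the grounding belief
    return goals

def most_likely_goal(goals):
    # Select the hypothesis with the highest composed probability.
    return max(goals, key=lambda g: g[1])[0]

# 'Two meters to the right of the fridge', with two candidate detections.
fridge_candidates = [(np.array([1.0, 3.0]), 0.8),
                     (np.array([5.0, 1.0]), 0.2)]
goal = most_likely_goal(ground_query(fridge_candidates, np.array([2.0, 0.0])))
print(goal)  # -> [3. 3.]
```

The point of the sketch is the separation of concerns: the semantic agent only has to localize "the fridge" (with uncertainty), while the metric constraint is applied deterministically afterwards, so the composed goal stays metrically consistent even when the semantic grounding is ambiguous.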
