Robotic Information Gathering
In robotics, many systems must navigate and explore unknown territories, often operating in complex and dynamic environments represented in 2D, 2.5D, or 3D. Operating in these unfamiliar worlds calls for machine learning techniques to build adaptable and robust systems, yet a significant challenge is the lack of sufficient prior data. Robots must therefore actively gather data and information to construct or learn the world models they need for their environments.
One key application of such Robotic Information Gathering is environmental sampling and monitoring. This process involves collecting measurements of a specific environmental attribute (e.g., the concentration of spilled oil in water or the density of leaked gas near a factory) from multiple locations to reconstruct a distribution map. Unlike simple averages, a distribution map provides a continuous representation that predicts variations across the entire field. This mapping mechanism, known as environmental state estimation, is fundamentally a learning process. It requires robots to process incoming streams of sampling data to update the parameters of an underlying environment model. Using the reconstructed map, robots can then plan meaningful paths for subsequent sampling actions, enabling more efficient and targeted exploration.
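To make this concrete, the snippet below is a minimal sketch of how a distribution map can be reconstructed from scattered measurements with Gaussian Process regression, a common choice for environmental state estimation. The kernel form, length-scale, noise level, and sample values here are illustrative assumptions rather than settings from our systems.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared-exponential kernel between two sets of 2D locations."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_train, y_train, X_query, noise_var=1e-2, **kernel_args):
    """Posterior mean and variance of the field at query locations."""
    K = rbf_kernel(X_train, X_train, **kernel_args) + noise_var * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_query, **kernel_args)
    K_ss = rbf_kernel(X_query, X_query, **kernel_args)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                      # reconstructed distribution map
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)  # prediction uncertainty
    return mean, var

# A handful of point measurements of some attribute (e.g., gas concentration)
X_samples = np.array([[0.1, 0.2], [0.5, 0.8], [0.9, 0.4], [0.3, 0.6]])
y_samples = np.array([0.8, 0.3, 0.5, 0.6])

# Dense grid over the unit square -> a continuous "distribution map"
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
X_grid = np.column_stack([gx.ravel(), gy.ravel()])
mean_map, var_map = gp_posterior(X_samples, y_samples, X_grid, length_scale=0.3)
```

The posterior mean gives the reconstructed map, while the posterior variance quantifies prediction uncertainty, which is what later informs where the robot should sample next.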
Historically, robotics research was predominantly conducted in controlled indoor laboratories, where environments were static and predictable. However, the evolving demands of modern applications and technologies now require robots to operate in dynamic outdoor spaces, such as aerial, terrestrial, or underwater domains, and over long durations (e.g., long-term or life-long autonomy). These real-world scenarios introduce a critical challenge: many natural and human-made environments are dynamic, with environmental attributes that vary both spatially and temporally. Effectively navigating and monitoring such environments requires robots to continuously adapt their understanding of these changes through persistent sensing. To address this challenge, our research focuses on developing principles that (1) employ data-driven methods to guide robots in learning spatio-temporal and stochastic environmental models, and (2) leverage the learned models for effective path planning and decision-making, optimizing exploration and monitoring strategies.
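For environments that change over time, one common modeling choice is to define the Gaussian Process jointly over space and time, for instance with a separable spatio-temporal kernel as sketched below. The specific kernel form and length-scales are assumptions for illustration, not the exact model used in our work.

```python
import numpy as np

def spacetime_kernel(A, B, ls_space=0.3, ls_time=5.0, signal_var=1.0):
    """Separable spatio-temporal kernel (illustrative choice).

    Each row of A and B is (x, y, t): a 2D location plus a time stamp.
    The product form lets the model discount stale measurements as the
    field drifts, while still correlating nearby locations in space.
    """
    d2_space = np.sum((A[:, None, :2] - B[None, :, :2])**2, axis=-1)
    d2_time = (A[:, None, 2] - B[None, :, 2])**2
    return (signal_var
            * np.exp(-0.5 * d2_space / ls_space**2)
            * np.exp(-0.5 * d2_time / ls_time**2))
```

Treating each sample as a (location, time) pair lets such a model down-weight old measurements when predicting the current state of the field, which is the behavior persistent sensing of a changing environment requires.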
Left and middle: an autonomous surface vehicle performs sampling; Right: an optimized sampling path generated by following batches of waypoints computed by an information-seeking algorithm.
We have developed informative planning and learning methods that enable autonomous surface vehicles to perform persistent monitoring tasks by continuously learning and refining underlying distribution maps. To achieve real-time decision-making and overcome the computational bottleneck of processing large volumes of accumulated sampling data, we designed a framework that iteratively integrates two key components: (a) A planning component that identifies and collects the most information-rich samples. (b) A sparse Gaussian Process learning component that updates the environmental model and hyperparameters online using only a subset of data that maximizes informational contribution. Additionally, we introduced an uncertainty-aware informative planning method that combines information-theoretic and decision-theoretic frameworks. This approach accounts for both the informativeness of sampling routes and the uncertainties in actions caused by environmental disturbances. As a result, it achieves high efficiency and accuracy in environmental sensing and monitoring tasks.
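As a rough illustration of the information-theoretic side of such planning, the sketch below greedily picks the next waypoints where the Gaussian Process's predictive entropy (equivalently, its variance) is highest. It is a generic uncertainty-driven planner with assumed names and parameters, not our actual algorithm, and it omits both the decision-theoretic treatment of action uncertainty and the sparse-GP subset selection described above.

```python
import numpy as np

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel over 2D locations (unit signal variance)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

def predictive_variance(X_obs, X_cand, noise=1e-2, ls=0.3):
    """GP posterior variance at candidate waypoints given observed locations.

    Variance (hence Gaussian entropy) depends only on *where* we have
    sampled, not on the measured values, so candidate routes can be
    scored before any data are collected along them.
    """
    K = rbf(X_obs, X_obs, ls) + noise * np.eye(len(X_obs))
    K_s = rbf(X_obs, X_cand, ls)
    return 1.0 - np.sum(np.linalg.solve(K, K_s) * K_s, axis=0)

def greedy_informative_waypoints(X_obs, X_cand, budget=5):
    """Greedily pick waypoints with the highest predictive entropy."""
    X_obs = X_obs.copy()
    plan = []
    for _ in range(budget):
        var = predictive_variance(X_obs, X_cand)
        best = int(np.argmax(var))               # max entropy = max variance for a GP
        plan.append(X_cand[best])
        X_obs = np.vstack([X_obs, X_cand[best]])  # treat the waypoint as sampled
    return np.array(plan)

# Candidate waypoints on a coarse grid over the survey area
gx, gy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
candidates = np.column_stack([gx.ravel(), gy.ravel()])
visited = np.array([[0.5, 0.5]])                  # locations already sampled
waypoints = greedy_informative_waypoints(visited, candidates, budget=5)
```

In the full system, each batch of waypoints is executed by the vehicle, the new samples update the sparse Gaussian Process model and its hyperparameters, and planning repeats with the refreshed model.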
Left: planned informative path (yellow dots are saved samples); Middle: reconstructed distribution map; Right: prediction variance.