• Environmental Sampling and Monitoring

    Robots can be used to sample and monitor our environment. Measurement samples are collected at different locations so that a "distribution map" describing a certain environmental attribute (e.g., pH) can be reconstructed.

    Read More Here »
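A minimal sketch of this idea (not the group's actual method): reconstructing a pH "distribution map" on a grid from a handful of scattered samples via inverse-distance weighting, a simple stand-in for the spatial regression models typically used.

```python
import numpy as np

def idw_reconstruct(sample_xy, sample_vals, query_xy, power=2.0, eps=1e-9):
    """Estimate an environmental attribute (e.g., pH) at query points
    from scattered measurement samples via inverse-distance weighting."""
    sample_xy = np.asarray(sample_xy, float)      # (N, 2) sample locations
    sample_vals = np.asarray(sample_vals, float)  # (N,)  measured values
    query_xy = np.asarray(query_xy, float)        # (M, 2) map grid points
    # Pairwise distances between query and sample points: shape (M, N)
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)                  # closer samples weigh more
    return (w * sample_vals).sum(axis=1) / w.sum(axis=1)

# Reconstruct a small pH map on a 5x5 grid from four corner samples
samples = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
values = [6.5, 7.0, 7.2, 8.1]
gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
grid = np.column_stack([gx.ravel(), gy.ravel()])
ph_map = idw_reconstruct(samples, values, grid).reshape(5, 5)
```

At grid points that coincide with a sample, the estimate reproduces the measured value; elsewhere it blends nearby samples smoothly.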

  • Micro Aerial Vehicle Swarm Control

    Micro aerial vehicles (MAVs) are highly agile. High autonomy requires them to explore 3D space and interact with their surroundings, including static obstacles and dynamic objects such as humans.

    Read More Here »

  • Decision-Making and Reinforcement Learning in Complex Environments

    When uncertainty (e.g., a robot's imperfect motion) is considered, stochastic methods are required to cope with the system stochasticity, especially when it is time-varying due to spatiotemporal environmental disturbances. Recent work also includes efficient reinforcement learning with a minimal number of training trials.

    Read More Here »
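To make the stochastic-planning idea concrete, here is a toy sketch (not the group's actual formulation): value iteration on a 1-D grid world where the robot's motion is imperfect, so a commanded step succeeds only with probability 0.8 and the robot stays in place otherwise.

```python
import numpy as np

n_states, goal = 5, 4         # 1-D grid world; goal is the rightmost cell
actions = (-1, +1)            # commanded step: left or right
gamma, p_success = 0.95, 0.8  # discount factor; chance the step succeeds

V = np.zeros(n_states)
for _ in range(200):          # value iteration until effectively converged
    V_new = np.zeros(n_states)
    for s in range(n_states):
        if s == goal:
            continue          # absorbing goal state, value 0
        q_values = []
        for a in actions:
            s_next = min(max(s + a, 0), n_states - 1)
            # Imperfect motion: reach s_next w.p. 0.8, else stay at s
            expected = p_success * V[s_next] + (1 - p_success) * V[s]
            q_values.append(-1.0 + gamma * expected)  # -1 per-step cost
        V_new[s] = max(q_values)
    V = V_new
```

The converged values decrease with distance from the goal, and a greedy policy over them drives the robot rightward despite the motion noise.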

  • Navigation and Exploration with Onboard Perception

    Navigation with onboard sensors is an essential function for robots. We develop exploration and mapping components that allow a vehicle to construct navigable space from perception inputs (e.g., cameras) and also plan feasible motion in unstructured outdoor environments.

    Read More Here »
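A minimal sketch of the planning half of this pipeline, under the assumption that perception has already been fused into a 2-D occupancy grid (1 = obstacle, 0 = free): breadth-first search then finds a shortest feasible path through the navigable space.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                  # reconstruct path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = cell
                frontier.append(nxt)
    return None                           # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))    # must detour around the wall
```

Real deployments replace the toy grid with maps built from camera or LiDAR data and BFS with kinodynamically feasible planners, but the navigable-space abstraction is the same.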

  • Decentralized Multi-Robot Systems

    A common constraint for multi-robot systems is communication: each robot has a limited communication range and can exchange information only with neighbors in its vicinity. We work on designing decentralized coordination methods such as distributed task allocation mechanisms.

    Read More Here »
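A toy sketch of the task-allocation mechanism (a single-process stand-in, not the group's decentralized protocol): each robot bids its travel cost to each open task, and in every round the globally cheapest robot-task pairing wins. In a truly decentralized setting, bids would be exchanged only with communication neighbors.

```python
def greedy_allocate(robots, tasks):
    """robots, tasks: dicts mapping name -> (x, y) position."""
    dist = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])  # Manhattan cost
    assignment = {}
    free_robots, open_tasks = set(robots), set(tasks)
    while free_robots and open_tasks:
        # Collect bids (cost, robot, task) and award the cheapest pairing
        bids = [(dist(robots[r], tasks[t]), r, t)
                for r in free_robots for t in open_tasks]
        _, r, t = min(bids)
        assignment[r] = t
        free_robots.remove(r)
        open_tasks.remove(t)
    return assignment

robots = {"r1": (0, 0), "r2": (5, 5)}
tasks = {"t1": (1, 0), "t2": (4, 5)}
assignment = greedy_allocate(robots, tasks)
```

Each round removes one robot and one task, so the loop terminates after min(#robots, #tasks) rounds; auction-style methods distribute exactly this bid-and-award step across the network.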