Orchard Reconstruction

From RSN

Semantic Mapping of an Orchard

[Figure: Camera and LIDAR system (Lidar.png)]


Abstract

We present a method to construct a semantic map of an apple orchard using a LIDAR and a camera rigidly attached to each other. The rig operates as a standalone, lightweight sensor that can be mounted on a variety of platforms. At the geometric level, we present a new method to associate image features captured by the camera with 3D points captured by the LIDAR. We then use these associations to register 3D point clouds onto a common frame, and show that our association method yields superior registration performance compared to common methods designed for indoor or urban settings. At the semantic level, the apples are identified as distinct objects, and their locations and diameters are extracted as relevant attributes. As an example, a semantic map of an orchard row is constructed.
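Associating image features with LIDAR points requires projecting the 3D points into the camera image through the rigid LIDAR-to-camera extrinsics and the camera intrinsics. The sketch below illustrates this projection step under assumed calibration values; the function name, the identity extrinsics, and the intrinsic matrix are all illustrative, not the authors' implementation.

```python
import numpy as np

def project_lidar_points(points_lidar, R, t, K):
    """Project Nx3 LIDAR points into pixel coordinates.

    R, t: assumed rigid LIDAR-to-camera extrinsics.
    K:    assumed 3x3 pinhole intrinsic matrix.
    Returns (pixels, in_front): Nx2 pixel coordinates and a boolean
    mask of points lying in front of the camera (z > 0).
    """
    pts_cam = points_lidar @ R.T + t        # rigid transform into camera frame
    in_front = pts_cam[:, 2] > 0            # only points in front can project
    pix_h = pts_cam @ K.T                   # homogeneous pixel coordinates
    pixels = pix_h[:, :2] / pix_h[:, 2:3]   # perspective division
    return pixels, in_front

# Example with identity extrinsics and an illustrative intrinsic matrix.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0],    # straight ahead -> image centre
                [0.5, 0.0, 2.0]])   # offset to the right
pixels, mask = project_lidar_points(pts, R, t, K)
```

Once each 3D point has a pixel coordinate, it can be paired with whatever image feature (or superpixel, as below) covers that pixel.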

Superpixel comparison for Point-cloud Association

[Figure: Superpixel comparison (Superpixel comparison.png)]
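The idea behind superpixel comparison is to describe each superpixel by a compact appearance statistic and match superpixels across views by descriptor similarity. A minimal sketch, assuming the segmentation (e.g. from SLIC) is already given and using mean colour as the descriptor; all names are illustrative and do not reflect the authors' implementation.

```python
import numpy as np

def superpixel_descriptors(image, labels):
    """Mean colour per superpixel (labels are 0..K-1 over the image)."""
    k = labels.max() + 1
    desc = np.zeros((k, image.shape[-1]))
    for i in range(k):
        desc[i] = image[labels == i].mean(axis=0)
    return desc

def match_superpixels(desc_a, desc_b):
    """For each superpixel in A, the index of the nearest superpixel in B."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Toy example: two 2x2 "images", each split into two superpixels.
img_a = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]],    # red row
                  [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])   # blue row
lab_a = np.array([[0, 0], [1, 1]])
img_b = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],    # blue row
                  [[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])   # red row
lab_b = np.array([[0, 0], [1, 1]])

desc_a = superpixel_descriptors(img_a, lab_a)
desc_b = superpixel_descriptors(img_b, lab_b)
matches = match_superpixels(desc_a, desc_b)
```

In practice a richer descriptor (colour histograms, texture) would replace the mean colour, but the matching structure stays the same.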