Thursday, 2011-01-27
 
Department of Robotic Systems

3D Mapping with ToF Cameras

3D mapping is a subset of the well-known simultaneous localisation and mapping (SLAM) problem. Mapping has been declared one of the "core competencies of truly autonomous robots" [Thrun, 2005].
The goal of SLAM is to provide the basis for path planning methods by localising the robot, targets, and obstacles. Map generation, however, is a so-called "chicken-and-egg" problem: if the robot's location is known, mapping is a simple task; conversely, localisation is easy if a perfect map is available. Combining these two tasks turns out to be a challenging problem, referred to as the SLAM or concurrent mapping and localisation problem.

Generally, map representations are adapted to the particular SLAM algorithm and the underlying sensors: there are topological and metric maps, location-based, feature-based, volumetric, 2D and 3D maps. Currently, we are examining 3D point cloud mapping with ToF cameras, where the sensor is simultaneously localised relative to this point cloud without any external positioning system.

So far, investigations into 3D mapping with ToF cameras are rare, and the results of different research groups are often difficult to compare. For this reason, we want to establish a benchmark for ego-motion estimation with ToF cameras and provide the underlying data of our experiments.

Benchmark Data in Tiff3D

The ToF camera is attached to an industrial robot (KUKA KR 16) and moved along several paths. Given the hand-eye calibration and the robot encoders, the absolute pose of the ToF camera can be computed for every frame. The estimated ego-motion can later be compared with this ground truth. The depth and amplitude data of the ToF camera are encapsulated in the tiff3D format. Every frame comprises three files (a minimal loading sketch follows the list):

  1. Range File *.range.tif
  2. Amplitude File *.intens.tif
  3. Configuration File *.t3c
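
As a complement to the Matlab reader linked below, the following minimal Python sketch loads the three files of one frame. It assumes the file naming from the list above and that the 16-bit TIFF images can be read with the tifffile package (an assumption, not part of the official tooling); the *.t3c file is only read as raw text here, since its parsing is left to the provided Matlab code.

    import numpy as np
    import tifffile  # assumption: any 16-bit capable TIFF reader would do

    def load_frame(prefix):
        """Return (range image, amplitude image, raw *.t3c text) for one frame."""
        range_img = np.asarray(tifffile.imread(prefix + ".range.tif"))    # quantized uint16 depths
        intens_img = np.asarray(tifffile.imread(prefix + ".intens.tif"))  # amplitude values
        with open(prefix + ".t3c") as f:   # configuration file (not parsed here)
            t3c_text = f.read()
        return range_img, intens_img, t3c_text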

The captured range data is quantized to 16-bit unsigned integer values in [0, 2^16). Given a pixel value v_ij (i = column index, j = row index) of the tiff3D image, the following equation

d_ij = v_ij * (q_scale / 2^16) + q_offset

converts it into a depth value d_ij. The required quantisation parameters q_scale and q_offset are given in the *.t3c configuration file. The configuration file also includes the parameters required for a Cartesian perspective projection. A 3D point in the camera coordinate system, camera_p = [p_0, p_1, p_2, 1]', is computed by

p_0 = (start_u + i * delta_u) * d_ij

p_1 = (start_v + j * delta_v) * d_ij

p_2 = d_ij .

Furthermore, the *.t3c configuration file provides the world_T_tcp (local2world) and tcp_T_camera (sensor2local) transformations, stored in row-major order. A given 3D point camera_p in the camera coordinate system is transformed into the world coordinate system by:

world_T_tcp * tcp_T_camera * camera_p .
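
The following Python sketch puts the above equations together: it dequantizes the uint16 range image, projects every pixel to a 3D point in camera coordinates, and transforms the points into the world coordinate system. The parameter names mirror the text (q_scale, q_offset, start_u, delta_u, start_v, delta_v, world_T_tcp, tcp_T_camera); how these values are laid out inside the *.t3c file is not shown here.

    import numpy as np

    def range_to_world_points(range_img, q_scale, q_offset,
                              start_u, delta_u, start_v, delta_v,
                              world_T_tcp, tcp_T_camera):
        """range_img: (rows, cols) uint16 array; the two transforms: 4x4 matrices."""
        rows, cols = range_img.shape
        # d_ij = v_ij * (q_scale / 2^16) + q_offset
        d = range_img.astype(np.float64) * (q_scale / 2.0 ** 16) + q_offset
        # pixel index grids: i runs over columns, j over rows (as in the text)
        i, j = np.meshgrid(np.arange(cols), np.arange(rows))
        p0 = (start_u + i * delta_u) * d
        p1 = (start_v + j * delta_v) * d
        p2 = d
        # homogeneous camera points, shape (4, rows*cols)
        cam_pts = np.stack([p0.ravel(), p1.ravel(), p2.ravel(),
                            np.ones(rows * cols)])
        # world_p = world_T_tcp * tcp_T_camera * camera_p
        world_pts = world_T_tcp @ tcp_T_camera @ cam_pts
        return world_pts[:3].T  # (rows*cols, 3) points in world coordinates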

[Matlab-Code to read Tiff3D images]

Circular Path

A laboratory scene is captured along a circular path with a diameter of about 180 mm. The scene is composed of several Styrofoam objects, e.g. cuboids and pyramids. A total of 180 frames is captured. The ToF camera is calibrated; hence, two image sets are provided: one with the manufacturer's calibration and one with our own improved calibration.

Laboratory scene

Measured depth values and circular path

 

 

Download Data (Improved Calibration)

Download Data (Manufacturer's Calibration)

Pyramid

The ToF camera observes a single object, i.e. a pyramid, in order to largely avoid unsystematic errors such as light scattering. The camera is moved around the object so as to keep the pyramid in the image centre. This experiment is performed with two ToF camera devices: a Swissranger SR 3000 and an IFM O3D100.

Swissranger SR 3000 Data

 

ToF camera path and overlaid estimated ego-motion

Absolute translational and rotational error of the ego-motion estimation. The ego-motion is estimated for every second frame. As a result, an accuracy of 28 mm and 2° is achieved.
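
The exact error metric behind these plots is not spelled out on this page; purely as an illustration, one common way to compute an absolute translational and rotational error between an estimated and a ground-truth camera pose (both as 4x4 homogeneous matrices) is sketched below.

    import numpy as np

    def pose_error(T_est, T_gt):
        """Return (translational error, rotational error in degrees) between two poses."""
        delta = np.linalg.inv(T_gt) @ T_est                 # relative pose error
        t_err = np.linalg.norm(delta[:3, 3])                # Euclidean distance of the translations
        cos_angle = (np.trace(delta[:3, :3]) - 1.0) / 2.0   # angle of the relative rotation
        r_err = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        return t_err, r_err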

Download Data (Improved Calibration)

Download Data (Manufacturer's Calibration)

 

IFM O3D100 Data

 

ToF camera path and overlaid estimated ego-motion

Absolute translational and rotational error of the ego-motion estimation. The ego-motion is estimated for every 30th frame. As a result, an accuracy of 65 mm and 10° is achieved.

Download Data (Improved Calibration)


Bibliography

Thrun, S., Burgard, W., and Fox, D. (2005). Probabilistic Robotics (Intelligent Robotics and Autonomous Agents). The MIT Press.

 

 

 


Copyright © 2011 German Aerospace Center (DLR). All rights reserved.