Limitations

Typical limitations common to all 3D sensors

Shades

A 3D sensor captures the scene from its own perspective, and objects in the scene occlude whatever lies behind them. When the captured data is viewed from a perspective different from the sensor's, some areas may appear shaded (they were not visible to the sensor and therefore contain no data).
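As an illustration, the minimal NumPy sketch below projects a sensor's point cloud into an arbitrary virtual camera and marks the pixels that receive no data; those pixels are the areas that appear shaded from that viewpoint. The function name, the pinhole model, and all parameters are illustrative assumptions, not part of any sensor API.

```python
import numpy as np

def render_shade_mask(points, K, R, t, width, height):
    """Mark pixels of a virtual camera that receive no sensor data.

    points : (N, 3) array of 3D points in the sensor frame
    K      : (3, 3) intrinsic matrix of the virtual camera (assumed pinhole)
    R, t   : rotation and translation from the sensor to the virtual camera
    """
    # Transform points into the virtual-camera frame and keep those
    # in front of the camera.
    cam = points @ R.T + t
    cam = cam[cam[:, 2] > 0]

    # Pinhole projection to integer pixel coordinates.
    uv = (K @ cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)

    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    covered = np.zeros((height, width), dtype=bool)
    covered[v[valid], u[valid]] = True

    # Pixels that no point projects into appear shaded from this viewpoint.
    # (A real renderer would splat or mesh the points; this naive version
    # also leaves speckle holes when the cloud is sparse.)
    return ~covered
```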

More Shades Examples

Top view of a scene captured by a structured light scanner. Both the projector and the 3D sensor cast their own shades: the sensor must observe the projected pattern to generate data, so the shade of either sub-sensor contributes to the shade of the whole system. When fusing with colour data we might even get a third shade, but the colour sub-sensor is typically mounted close to one of the other sub-sensors and casts a similar shade.
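As a toy illustration of how sub-sensor shades combine, assume per-pixel visibility masks for the projector and the camera (for instance rendered with the sketch above); the random placeholder masks and the resolution below are arbitrary:

```python
import numpy as np

# Placeholder per-pixel visibility masks for each sub-sensor (random
# data here; in practice these would be rendered from calibration).
rng = np.random.default_rng(0)
camera_sees = rng.random((480, 640)) > 0.1
projector_sees = rng.random((480, 640)) > 0.1

# A structured light scanner produces data only where the projector
# illuminates the surface AND the camera observes that illumination,
# so the shade of either sub-sensor propagates to the whole system.
system_shade = ~(camera_sees & projector_sees)
```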

Transparency

Most 3D sensors have trouble sensing transparent objects (e.g. glass or plastic bottles).

Occlusions

In 3D devices with multiple sub-sensors, what each sub-sensor sees of the scene may differ. Depending on the 3D sensing type and the internal data handling, this can lead to missing texture, missing 3D data, or even wrong texture/3D data. Detecting such problems, whether in the sensor itself or in other software, is not always possible.
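One common case is texturing 3D data with a separate colour camera: a 3D point hidden from the colour camera by a closer surface would be assigned the colour of that closer surface. A z-buffer test along the lines of the sketch below can flag such points; the function name, pinhole model, and tolerance are illustrative assumptions, not a specific sensor's method.

```python
import numpy as np

def occluded_from_colour(points, K, R, t, width, height, tol=0.005):
    """Flag 3D points that are hidden from the colour sub-sensor.

    points : (N, 3) array of points in the 3D sensor frame
    K, R, t: assumed colour-camera intrinsics and sensor-to-colour extrinsics
    tol    : depth tolerance (metres) absorbing noise and rounding
    """
    cam = points @ R.T + t                 # points in colour-camera frame
    z = cam[:, 2]
    front = z > 0

    # Project only points in front of the camera to pixel coordinates.
    u = np.full(len(points), -1)
    v = np.full(len(points), -1)
    uv = (K @ cam[front].T).T
    u[front] = np.round(uv[:, 0] / z[front]).astype(int)
    v[front] = np.round(uv[:, 1] / z[front]).astype(int)
    in_view = front & (u >= 0) & (u < width) & (v >= 0) & (v < height)

    # Z-buffer: nearest depth observed at each colour pixel.
    zbuf = np.full((height, width), np.inf)
    np.minimum.at(zbuf, (v[in_view], u[in_view]), z[in_view])

    # A point noticeably behind the nearest surface at its pixel is
    # occluded and would receive wrong texture if fused naively.
    occluded = np.zeros(len(points), dtype=bool)
    occluded[in_view] = z[in_view] > zbuf[v[in_view], u[in_view]] + tol
    return occluded
```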

More Occlusion Examples

Occlusions between the colour and 3D sub-sensors result in wrong texture when fusing the data.

If detected, occluded points may be removed (shown in pink). Note that this removal further contributes to missing data, in a similar way to shades.
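Continuing the sketch above, the removal itself is then just a filter on the point cloud; the calibration names and resolution below are placeholders:

```python
# Hypothetical usage: K_c, R_c, t_c are the colour camera's calibration.
mask = occluded_from_colour(points, K_c, R_c, t_c, width=1920, height=1200)
clean_points = points[~mask]   # no wrong texture, but more missing data
```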