Visual Comparison

Comparison of 3D data from sensors in various conditions

3D sensors supply 3D data (a depth image or distance measurements) and often 2D data as well (texture, standard colour images). Both the 2D and 3D data are used to reconstruct the view in VR.
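As a minimal sketch of how a depth image becomes 3D data, the pinhole back-projection below converts each pixel to a metric point. The intrinsics (`fx`, `fy`, `cx`, `cy`) and the 4x4 image are made-up toy values, not parameters of any sensor listed below.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into an organized (H, W, 3) point map.

    Invalid pixels (depth == 0) become NaN, a common convention for missing data.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    z[z == 0] = np.nan                # sensors commonly mark missing depth as 0
    x = (u - cx) * z / fx             # pinhole model: x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Toy example: a flat wall 2 m away, seen by a hypothetical 4x4 depth camera.
depth = np.full((4, 4), 2.0)
pts = depth_to_points(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(pts.shape)  # (4, 4, 3)
```

The result stays *organized* (one point per pixel), which is what grid-based surface reconstruction relies on; the matching RGB image supplies a colour per point.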

The perspective from which the VR user sees the reconstructed view typically differs from that of the sensor.

In practice, all 3D sensors provide data with noise and artifacts, and every sensor is subject to its own limitations.

Test Conditions

Sensors:

  • Realsense D435/D455: depth 848x480, RGB 1280x720

  • Zed 2i: neural depth 1920x1080 downsampled to 1152x648, RGB 1920x1080

  • Kinect4A: NFOV depth 640x576, RGB 1280x720

  • MotionCam3D M+: depth 1120x800, RGB 1932x1096

Surface reconstruction:

  • algorithm similar in spirit to PCL Organized Fast Mesh

  • triangulation constrained by edge length

  • exposes problems in the original 3D data
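The triangulation constraint above can be sketched as follows. This is not the actual reconstruction code, only an illustration in the spirit of PCL's OrganizedFastMesh: each 2x2 pixel quad yields up to two triangles, and a triangle is kept only if all its edges are shorter than a threshold. The `max_edge` value is an arbitrary example.

```python
import numpy as np

def triangulate_organized(points, max_edge=0.05):
    """Triangulate an organized (H, W, 3) point map into triangle indices.

    A triangle is kept only if all vertices are valid (non-NaN) and all
    edges are shorter than max_edge (metres). Long edges typically span
    depth discontinuities, so dropping them avoids "rubber sheet" faces,
    but it also means holes and noise in the 3D data remain visible.
    """
    h, w, _ = points.shape
    flat_pts = points.reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            a, b = idx[r, c], idx[r, c + 1]
            d, e = idx[r + 1, c], idx[r + 1, c + 1]
            for tri in ((a, d, b), (b, d, e)):       # two triangles per quad
                p = flat_pts[list(tri)]
                if np.isnan(p).any():
                    continue
                edges = np.linalg.norm(p - np.roll(p, 1, axis=0), axis=1)
                if (edges < max_edge).all():
                    tris.append(tri)
    return np.array(tris, dtype=np.int64)

# A flat 2x2 patch with 1 cm spacing triangulates into two triangles.
flat = np.zeros((2, 2, 3))
flat[..., 0], flat[..., 1] = np.meshgrid([0.0, 0.01], [0.0, 0.01])
print(len(triangulate_organized(flat)))
```

Because no smoothing or hole filling happens, the resulting mesh faithfully exposes whatever imperfections the sensor produced.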

Aligned View

When looking at the reconstructed 3D view from the same or a similar perspective as the 3D sensor, most 3D data imperfections are not visible (with the exception of missing 3D data).

The quality of 3D sensor data should not be assessed from the aligned view, as it hides imperfections in the 3D data.

Side View

When looking at the 3D reconstruction from the side, i.e. from a perspective that differs from the original sensor perspective, the inaccuracy and noise of the 3D data become visible.
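The geometric reason can be shown with a small projection experiment. Depth noise displaces a point along its viewing ray, so reprojecting it into the *same* camera lands on the identical pixel, while a camera viewing from the side sees the displacement. All intrinsics and the 30-degree side pose below are made-up values for illustration only.

```python
import numpy as np

FX = FY = 500.0          # hypothetical focal lengths (pixels)
CX, CY = 320.0, 240.0    # hypothetical principal point

def backproject(u, v, z):
    """Pixel (u, v) at depth z -> 3D point in sensor camera coordinates."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def project(point, R=np.eye(3), t=np.zeros(3)):
    """Project a 3D point into a camera with pose (R, t)."""
    p = R @ point + t
    return np.array([FX * p[0] / p[2] + CX, FY * p[1] / p[2] + CY])

# The same pixel measured at 2.00 m, and again with +2 cm depth noise:
clean = backproject(100.0, 200.0, 2.00)
noisy = backproject(100.0, 200.0, 2.02)

# Aligned view: both land on the identical pixel -- the noise is invisible.
print(project(clean), project(noisy))

# Side view: a camera 1 m to the side, rotated 30 degrees toward the scene,
# sees the two points at different pixels -- the noise becomes visible.
a = np.deg2rad(30)
R = np.array([[np.cos(a), 0, -np.sin(a)],
              [0,         1,  0        ],
              [np.sin(a), 0,  np.cos(a)]])
t = R @ np.array([-1.0, 0.0, 0.0])
print(project(clean, R, t), project(noisy, R, t))
```

This is why the side view, not the aligned view, is the meaningful test condition for comparing sensors.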
