Sensors

Application Programming Interfaces for SenseKit

Firmware

Firmware is packaged as:

  • x86-64 case - a .run installer with binary deb packages and OS tweaks, communicating with 3rd-party ROS nodes through the ROS1 protocol

  • arm64 Jetson case - a Docker image with binaries, communicating with 3rd-party ROS nodes through the ROS1 protocol

Available Data

A Sensor consumes ROS data from the camera, including depth, RGB, calibration, and geometric relationships. The Sensor does not publish any dedicated data for 3rd parties, but the same data it consumes is also available to 3rd parties through the ROS interfaces.
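As a sketch, the ROS interfaces a 3rd party might consume can be enumerated as below. The topic names are illustrative only, following the common image_pipeline convention; actual names depend on the camera driver.

```python
# Typical ROS1 topic layout exposed by a depth camera driver.
# These names are assumptions for illustration, not guaranteed SenseKit names.
CAMERA_TOPICS = {
    "color_image": "/camera/color/image_raw",
    "color_info": "/camera/color/camera_info",
    "depth_image": "/camera/depth/image_raw",
    "depth_info": "/camera/depth/camera_info",
}

def paired_topics(topics):
    """Return the (image, camera_info) pairs a 3rd-party consumer
    would subscribe to."""
    return [
        (topics["color_image"], topics["color_info"]),
        (topics["depth_image"], topics["depth_info"]),
    ]
```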

Sensor settings may be configured using AMAS (sensor-dependent).

Interfacing New Sensors

For unsupported sensors, the process may require custom work.

Requirements for the sensor's ROS driver

The minimum requirement for integrating a new sensor is a ROS driver that provides:

  • color topic (RGB) and camera_info

    • typically image_raw and camera_info topics

  • depth topic (2D range image) and camera_info

    • typically image_raw and camera_info topics

    • depth in 16-bit unsigned millimetre format (also called the OpenNI format)

  • tf2 geometric relationship data between depth/color subsensors (extrinsics)

    • typically the camera has depth and color subsensors
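The depth format required above can be illustrated with a small sketch: each pixel is a 16-bit unsigned integer in millimetres, with 0 conventionally meaning "no measurement". The conversion below assumes the depth image has already been unpacked into plain integer values.

```python
def depth_mm_to_meters(depth_mm):
    """Convert 16-bit unsigned depth values in millimetres
    (the OpenNI-style format) to floats in metres.
    A value of 0 is treated as 'no measurement' and mapped to None."""
    out = []
    for d in depth_mm:
        if not 0 <= d <= 0xFFFF:
            raise ValueError("not a 16-bit unsigned value: %r" % d)
        out.append(None if d == 0 else d / 1000.0)
    return out

# 1500 mm -> 1.5 m; 0 -> invalid; 65535 is the largest representable value
print(depth_mm_to_meters([1500, 0, 65535]))  # → [1.5, None, 65.535]
```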

Integration process

Typically, integrating a new sensor requires (assuming ROS driver exists):

  • writing sensor-specific ROS launch files

    • for streaming services

    • for calibration services

  • ensuring optical distortion models used by the sensor are supported

  • ensuring reasonable data workflow

    • decoupling ROS data from compression may be necessary

      • image republish node(lets) may be used

  • ensuring compatibility with custom compression (image transports)

    • support for sensor image pixel format may be added as needed

  • ensuring depth/RGB synchronization is within tolerance

    • typically via driver-specific settings
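The launch-file and republish steps above can be sketched as a minimal ROS1 launch file. The driver package and node names (`acme_camera`) are hypothetical placeholders; the `image_transport` `republish` node is the standard way to decouple downstream consumers from a compressed transport.

```xml
<launch>
  <!-- Hypothetical sensor driver node; replace pkg/type with the
       actual driver's package and executable. -->
  <node name="camera_driver" pkg="acme_camera" type="acme_camera_node"
        output="screen"/>

  <!-- Decouple consumers from compression: republish the compressed
       color stream as raw images. Topic names are illustrative. -->
  <node name="republish_color" pkg="image_transport" type="republish"
        args="compressed raw">
    <remap from="in" to="/camera/color/image_raw"/>
    <remap from="out" to="/camera/color/image_raw_decoded"/>
  </node>
</launch>
```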
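The synchronization check in the last step can be sketched as a timestamp comparison, as done for example by `message_filters`' approximate-time policy in ROS1. The 25 ms default below is an illustrative figure, not a SenseKit requirement.

```python
def within_sync_tolerance(depth_stamp, color_stamp, tolerance_s=0.025):
    """Check whether a depth/color frame pair counts as synchronized.

    Stamps are (seconds, nanoseconds) tuples, as in ROS1 message headers.
    The 25 ms default tolerance is an assumed example value.
    """
    to_s = lambda stamp: stamp[0] + stamp[1] * 1e-9
    return abs(to_s(depth_stamp) - to_s(color_stamp)) <= tolerance_s

# Frames 10 ms apart pass a 25 ms tolerance
print(within_sync_tolerance((100, 0), (100, 10_000_000)))  # → True
```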
