Visual Haptic

How 6 DoF force and torque sensor data is represented visually to intuitively support user perception in AMAS VR

Background

An important capability required to remotely operate a robot (robotic teleoperation) is transmitting haptic information (physical force, torque, pressure, and vibration) measured on the robot to the remote operator. To let the operator perceive this haptic information as intuitively as possible, the user interface for robotic teleoperation generally requires complex, mechanically actuated hardware that aims to recreate the physical haptic sensation at the user's hand and/or arm.

However, such solutions have the following limitations:

- Costly, unscalable operator user-interface hardware

- Physical haptic feedback is generally sensitive to latency

- Physical haptic sensation gives only imprecise judgement of magnitude

- Force and torque scaling is nonobvious

In contrast, graphical representation of force and torque is well established in the scientific community: vector-based representations are routinely used to communicate algorithms and mathematics in publications. Vector-based force-torque representation has the following advantages:

- No mechanical user-interface hardware required

- A visual cue, insensitive to latency

- Precise representation of force

- Clear scaling of the measure

However, such a representation requires in-depth understanding of its format and conventions, and generally demands careful thought to interpret fully, leading to the following limitations:

- Requires training and education for the operator

- Slow to interpret, unsuitable for real-time, immediate feedback

- Increased cognitive workload when parsing complex vectors

With the aid of computer graphics and extended reality (VR/AR/MR) visualisation technologies, we present a novel, intuitive, uniform method to represent force and torque using graphical rendering alone, avoiding the fatigue of interpreting mathematical vectors.

Description of the rendering scheme

This method utilises displacement between a reference model and an indicative live model. Specifically, the indicative live model overlaps (coincides) with the reference model when no external force or torque is applied to the robotic system. When an external force and/or torque is applied at the end effector, the indicative live model translates and/or rotates out of the reference model, depending on the value of the external force and torque along each coordinate axis.

In AMAS, the force and torque are rendered as shown below:

Math Formulation

The scaled external force is assigned to the translation of the indicative live model. The translational vector d is computed as

d = b · f

where d is the translational displacement of the indicative live model along each axis, f is the external force computed from the sensor measurements, and b is a scaling factor, which can be tuned depending on user preference and the nominal forces seen in general operations.
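This per-axis scaling can be sketched as follows; the function name and default scaling factor are hypothetical illustrations, not part of AMAS:

```python
# Minimal sketch (names and values hypothetical): map the measured
# external force to the translational offset of the indicative live
# model, d = b * f, applied independently on each axis.
def force_to_translation(f, b=0.001):
    """f: external force (N) per axis [fx, fy, fz];
    b: scaling factor (m/N), tunable to user preference.
    Returns the live model's translational offset d (m) per axis."""
    return [b * fi for fi in f]

# A 20 N push along x displaces the live model about 2 cm along x;
# with no external force, the offset is zero and the models coincide.
d = force_to_translation([20.0, 0.0, 0.0])
```

A larger b exaggerates the displacement for delicate tasks; a smaller b keeps the live model close to the reference during high-force operations.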

The scaled external torque about each axis is assigned, analogously, to the rotation of the indicative live model about that axis, with its own tunable scaling factor.
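The rotational assignment can be sketched analogously to the translational one; the symbols theta (rotation angle), tau (external torque), and c (scaling factor), along with the default value, are hypothetical illustrations:

```python
# Minimal sketch (names and values hypothetical): map the measured
# external torque to the rotational offset of the indicative live model
# about each axis, theta = c * tau, analogous to the translational case.
def torque_to_rotation(tau, c=0.05):
    """tau: external torque (N*m) per axis [tx, ty, tz];
    c: scaling factor (rad/(N*m)), tunable like b.
    Returns the live model's rotation (rad) about each axis."""
    return [c * ti for ti in tau]

# A 2 N*m twist about y rotates the live model about 0.1 rad in pitch.
theta = torque_to_rotation([0.0, 2.0, 0.0])
```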

Gravity Compensation

The external force and torque at the end effector are obtained by compensating for the gravity of the body parts of the robotic system, depending on the location of the sensor, based on the inverse dynamics and/or kinematics of the robotic system.
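For the simple case of a sensor mounted between the arm and the end effector, the compensation reduces to subtracting the end effector's known weight, expressed in the sensor frame. A minimal sketch, assuming the end effector mass and the current orientation are known (all names hypothetical):

```python
# Minimal sketch of gravity compensation for the force channel, assuming
# the sensor sits between the arm and the end effector: subtract the
# end effector's weight, rotated into the sensor frame.
G = 9.81  # gravitational acceleration, m/s^2

def compensate_gravity(f_meas, R_world_to_sensor, mass):
    """f_meas: raw force reading (N) in the sensor frame;
    R_world_to_sensor: 3x3 rotation mapping world vectors into the sensor frame;
    mass: end effector mass (kg).
    Returns the external force with the end effector's weight removed."""
    g_world = [0.0, 0.0, -G * mass]  # weight vector in the world frame
    g_sensor = [sum(R_world_to_sensor[i][j] * g_world[j] for j in range(3))
                for i in range(3)]   # weight as seen by the sensor
    return [f_meas[i] - g_sensor[i] for i in range(3)]

# With the sensor aligned to the world frame, a 1 kg end effector at rest
# should read zero external force after compensation.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
f_ext = compensate_gravity([0.0, 0.0, -9.81], I3, 1.0)
```

A full implementation would also compensate the torque channel (the weight acting through the end effector's centre of mass) and, for sensors mounted further up the arm, the weights of the intervening links via the robot's kinematics.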

In the images below, the end effector is attached to the force and torque sensor, located at the end of the arm. After the Force Torque Calibration process, the gravity of the end effector is removed from the sensor measurement, regardless of its orientation.