Visual Haptic
How 6 DoF force and torque sensor data is represented visually to intuitively support user perception in AMAS VR
Background
A vital capability for operating a robot remotely (robotic teleoperation) is transmitting haptic information (physical force, torque, pressure, and vibration) from the robot to the remote operator. To allow the operator to perceive this haptic information as intuitively as possible, the user interface for robotic teleoperation generally requires complex mechanically actuated hardware that aims to recreate the physical haptic sensation in the user's hand and/or arm.
However, such solutions have the following limitations:
- Costly and unscalable operator user interface hardware
- Physical haptic feedback is generally sensitive to latency
- The magnitude of physical haptic feedback is imprecise to judge
- Non-obvious force and torque scaling
In contrast, the graphical representation of force and torque is well established in the scientific community: vector-based representations are used in publications to communicate algorithms and mathematics. A vector-based force-torque representation has the following advantages:
- No mechanical user interface hardware is required
- Visual cue, not sensitive to latency
- Precise representation of force
- Clear scaling of the measure
However, such a representation requires an in-depth understanding of the format and its conventions, and generally demands deliberate thought to interpret fully, leading to the following limitations:
- Requires training and education for the operator
- Slow to interpret, so unsuitable for real-time immediate feedback
- Increased cognitive workload when interpreting complex vectors
With the aid of computer graphics and extended reality (VR/AR/MR) visualisation technologies, we present a novel, intuitive, uniform method to represent force and torque using only graphical rendering, avoiding the fatigue of interpreting mathematical vectors.
Description of the rendering scheme
This method utilises the displacement between a reference model and an indicative live model. Specifically, the indicative live model overlaps (coincides) with the reference model when no external force or torque acts on the robotic system. When an external force and/or torque is applied at the end effector, the indicative live model translates and/or rotates (displaces) away from the reference model, in proportion to the external force and torque applied along each coordinate axis.
In AMAS, the force and torque are rendered as follows.
Math Formulation
Assign the scaled external force to the translation of the indicative live model. The translational vector d can be computed as

d = b · f

where d is the translational distance of the indicative live model along each axis, f is the external force computed from the sensor measurements, and b is a scaling factor that can be tuned according to user preference and the nominal forces encountered in general operation.
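As a minimal sketch of this per-axis scaling (the function name and the default value of b are illustrative assumptions, not values specified by AMAS):

```python
# Map a measured external force (N) to a translation offset (m)
# for the indicative live model. The scaling factor b is a tunable
# user preference.
def force_to_translation(force_xyz, b=0.01):
    """Scale each axis of the external force into a displacement."""
    return [b * f for f in force_xyz]

# Example: with b = 0.01, a 5 N push along x displaces the live
# model 5 cm along x.
offsets = force_to_translation([5.0, 0.0, -2.0], b=0.01)
```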
Assign the scaled external axis torque to the axis rotation of the indicative live model. Analogously, the rotation vector θ can be computed as

θ = c · τ

where θ is the rotation angle of the indicative live model about each axis, τ is the external torque computed from the sensor measurements, and c is a scaling factor tuned in the same way as b.
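A parallel sketch for the torque channel (the function name, the default value of c, and the ±90° clamp, which keeps the live model from flipping past the reference model, are all illustrative assumptions):

```python
import math

# Map a measured external torque (N·m) to a rotation offset (rad)
# about each axis of the indicative live model. The scaling factor c
# is a tunable user preference.
def torque_to_rotation(torque_xyz, c=0.1):
    """Scale each axis torque into a rotation angle, clamped to
    +/- pi/2 so large torques remain readable in the rendering."""
    half_pi = math.pi / 2
    return [max(-half_pi, min(half_pi, c * t)) for t in torque_xyz]
```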
Gravity Compensation
The measurements of external force and torque at the end effector are obtained by compensating for the gravity of the body parts of the robotic system, depending on the sensor's location, based on the inverse dynamics and/or kinematics of the robotic system.
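One common form of this compensation subtracts the gravity wrench of the mass mounted beyond the sensor (e.g. the gripper). The sketch below assumes the sensor-frame orientation, the mass, and its centre of mass are known from the robot model; the function and parameter names are illustrative:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def external_wrench(measured_force, measured_torque, R_sensor_to_world,
                    mass, com_in_sensor):
    """Subtract the gravity wrench of the body mounted beyond the
    sensor from the raw 6-DoF measurement.

    R_sensor_to_world: 3x3 rotation of the sensor frame in the world.
    mass, com_in_sensor: mass (kg) and centre of mass (m) of the body
    beyond the sensor, assumed known from the robot model.
    """
    # Gravity force, expressed in the sensor frame.
    g_world = np.array([0.0, 0.0, -mass * G])
    g_sensor = R_sensor_to_world.T @ g_world
    # Gravity also produces a torque about the sensor origin via the
    # lever arm to the centre of mass.
    ext_force = np.asarray(measured_force, dtype=float) - g_sensor
    ext_torque = (np.asarray(measured_torque, dtype=float)
                  - np.cross(com_in_sensor, g_sensor))
    return ext_force, ext_torque
```

With the sensor level (identity rotation) and only gravity acting, the computed external wrench is zero, as expected.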