GenForce

Training Tactile Sensors to
Learn Force Sensing from Each Other

Zhuo Chen1*, Ni Ou1, Xuyang Zhang1, Zhiyuan Wu1, Yongqiang Zhao1, Yupeng Wang1,
Emmanouil Spyrakos Papastavridis1, Nathan Lepora2, Lorenzo Jamone3, Jiankang Deng4*, Shan Luo1*

  • 1 King’s College London, London, United Kingdom
  • 2 University of Bristol, Bristol, United Kingdom
  • 3 University College London, London, United Kingdom
  • 4 Imperial College London, London, United Kingdom
  • * Corresponding Authors

Highlight


Humans achieve stable and dexterous object manipulation by coordinating grasp forces across multiple fingers and palms, facilitated by a unified tactile memory system in the somatosensory cortex. This system encodes and stores tactile experiences across skin regions, enabling the flexible reuse and transfer of touch information. Inspired by this biological capability, we present GenForce, the first framework that enables transferable force sensing across tactile sensors in robotic hands. GenForce unifies tactile signals into shared marker representations, analogous to cortical sensory encoding, allowing force prediction models trained on one sensor to be transferred to others without exhaustive force data collection. We demonstrate that GenForce generalizes across 132 transfer groups on simulated data and 74 groups on real-world data, covering both homogeneous sensors with varying configurations and heterogeneous sensors with distinct sensing modalities and material properties. This transferable force sensing also achieves high performance in robot force control, including daily object grasping, slip detection, and slip avoidance. Our results highlight a scalable paradigm for robotic tactile learning, offering new pathways toward adaptable, tactile memory–driven manipulation in unstructured environments.

Background

Robots that grasp objects with tactile sensors and force control mimic human manipulation with sensory receptors. However, these bio-inspired tactile sensors cannot share force data with one another because of differences in sensing principles, structural designs, and material properties. Current practice therefore trains a separate force prediction model for each sensor through a repetitive and costly collection of force labels.

Human Tactile Memory

In humans, the tactile memory system enables the storage and retrieval of experienced tactile information, such as haptic stimuli, across skin regions on the hands. Mechanoreceptors in the skin detect deformation, which is translated into a unified sensory encoding and transmitted to the somatosensory cortex via peripheral nerves for storage and processing. This human ability to adapt, unify, and transfer tactile sensation offers valuable inspiration for developing transferable tactile sensing in robots.

Bioinspiration

Overview of the GenForce model. Tactile sensors produce diverse tactile signals under the same deformation because of differences in sensing principles, structural designs, and material properties. GenForce unifies these tactile signals into a shared marker representation, enables marker-to-marker translation across sensors, and achieves high-accuracy force prediction on uncalibrated sensors using data transferred from calibrated sensors.

Architecture

Architecture of GenForce
Marker-to-marker translation (M2M) model. The M2M model takes deformed images from calibrated sensors as input and reference images from uncalibrated sensors as conditions, generating deformed images that mimic the deformation as it would appear on the uncalibrated sensors. The spatiotemporal force prediction model then takes sequential contact images as input and outputs three-axis forces, with a spatiotemporal module enhancing prediction accuracy.
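
To make this two-stage design concrete, the following minimal PyTorch sketch outlines the data flow under our own assumptions: the module choices (a plain convolutional encoder/decoder for translation, a GRU as the spatiotemporal module), tensor shapes, and class names are illustrative placeholders rather than the released GenForce implementation.

  # Minimal sketch of the two-stage pipeline described above. Architectural
  # details are assumptions for illustration, not the authors' released code.
  import torch
  import torch.nn as nn

  class MarkerToMarkerTranslator(nn.Module):
      """Translate a source-sensor marker image, conditioned on a target-sensor
      reference image, into a synthetic target-sensor marker image."""
      def __init__(self, ch: int = 32):
          super().__init__()
          self.encoder = nn.Sequential(
              nn.Conv2d(2, ch, 4, stride=2, padding=1), nn.ReLU(),
              nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1), nn.ReLU(),
          )
          self.decoder = nn.Sequential(
              nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(),
              nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1), nn.Sigmoid(),
          )

      def forward(self, src_deformed, tgt_reference):
          # Stack the deformed source image and the target reference image.
          x = torch.cat([src_deformed, tgt_reference], dim=1)   # (B, 2, H, W)
          return self.decoder(self.encoder(x))                  # (B, 1, H, W)

  class SpatiotemporalForcePredictor(nn.Module):
      """Regress three-axis forces (Fx, Fy, Fz) from sequential contact images."""
      def __init__(self, ch: int = 32, hidden: int = 128):
          super().__init__()
          self.cnn = nn.Sequential(
              nn.Conv2d(1, ch, 5, stride=2, padding=2), nn.ReLU(),
              nn.Conv2d(ch, ch, 5, stride=2, padding=2), nn.ReLU(),
              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
          )
          self.temporal = nn.GRU(ch, hidden, batch_first=True)  # spatiotemporal module
          self.head = nn.Linear(hidden, 3)

      def forward(self, frames):                    # frames: (B, T, 1, H, W)
          b, t = frames.shape[:2]
          feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
          out, _ = self.temporal(feats)
          return self.head(out[:, -1])              # (B, 3) forces

  if __name__ == "__main__":
      m2m = MarkerToMarkerTranslator()
      predictor = SpatiotemporalForcePredictor()
      src = torch.rand(4, 1, 128, 128)    # deformed images from the calibrated sensor
      ref = torch.rand(4, 1, 128, 128)    # reference image of the uncalibrated sensor
      fake_tgt = m2m(src, ref)            # synthetic target-sensor images
      forces = predictor(fake_tgt.unsqueeze(1).repeat(1, 8, 1, 1, 1))
      print(forces.shape)                 # torch.Size([4, 3])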

Marker-to-marker Translation Performance

Marker-to-marker translation on simulated data, visualized with t-SNE. We first propose a simple simulation pipeline to acquire extensive deformed marker images. Twelve marker patterns with different densities, referring to the GelSight, uSkin, TacTip, and GelTip sensors, are simulated. Eighteen 3D-printed indenters with diverse geometrical properties (vertices, edges, and curvatures) are used for indentation [33]. Each marker pattern can serve as both the source sensor and the target sensor, resulting in a total of 132 sensor combinations. The generated images and target images align closely in feature space and are visually indistinguishable. See Supplementary Video 1.
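
As a small illustration of how the 132 transfer groups arise, the snippet below enumerates the ordered source-to-target pairs, assuming twelve simulated marker patterns (a count inferred from 12 × 11 = 132); the pattern names are hypothetical placeholders.

  # Every ordered pair of distinct marker patterns forms one transfer group.
  # The pattern names are placeholders for the simulated marker layouts.
  from itertools import permutations

  patterns = [f"pattern_{i:02d}" for i in range(1, 13)]
  transfer_groups = list(permutations(patterns, 2))   # ordered (source, target) pairs
  print(len(transfer_groups))                         # 132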
Marker-to-marker translation across heterogeneous sensors, shown with images. We unify three distinct types of tactile signals into a shared marker representation. For electronic sensor arrays, we develop a signal-to-marker pipeline that converts multichannel raw signals into marker displacement and diameter change. Although some tactile arrays (e.g., capacitive or resistive) can only measure pressure at each taxel, unlike the magnetic sensor with three-axis measurement, our model still transfers from vision-based tactile sensors to these arrays. We verify this by using only the z component of uSkin, referred to as "uSkin (z-axis)" in Supplementary Video 3. We showcase the generated images and source images in the figure above and in Supplementary Video 1.
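
The signal-to-marker conversion for taxel arrays can be pictured with the minimal sketch below. The grid layout, gains, and rendering constants are illustrative assumptions of ours: each taxel is drawn as a marker whose centre is displaced by the shear (x, y) readings and whose diameter grows with the normal (z) reading, so a pressure-only array simply keeps the marker centres fixed.

  # Sketch of a signal-to-marker rendering for a taxel array. Grid size,
  # spacing, and gains are assumed values for illustration only.
  import numpy as np

  def taxels_to_marker_image(readings, grid=(4, 4), img_size=160, spacing=32,
                             base_radius=5.0, shear_gain=6.0, normal_gain=4.0):
      """readings: (rows*cols, 3) array of per-taxel (x, y, z) signals in [-1, 1]."""
      rows, cols = grid
      img = np.zeros((img_size, img_size), dtype=np.float32)
      yy, xx = np.mgrid[0:img_size, 0:img_size]
      readings = readings.reshape(rows, cols, 3)
      for r in range(rows):
          for c in range(cols):
              sx, sy, nz = readings[r, c]
              # Nominal grid position, shifted by the shear components.
              cx = (c + 1) * spacing + shear_gain * sx
              cy = (r + 1) * spacing + shear_gain * sy
              # Marker radius grows with the normal (pressure) component.
              radius = base_radius + normal_gain * max(nz, 0.0)
              img[(xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2] = 1.0
      return img

  if __name__ == "__main__":
      rng = np.random.default_rng(0)
      fake_readings = rng.uniform(-1, 1, size=(16, 3))   # placeholder taxel signals
      marker_img = taxels_to_marker_image(fake_readings)
      print(marker_img.shape)                            # (160, 160)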

Force Prediction Performance

Homogeneous transfer performance compared with an ATI Nano17 F/T sensor before and after using GenForce. The source-only method exhibits large errors (Supplementary Video 2): the maximum error in normal force exceeds 4.8 N, and shear force errors average above 0.28 N. In addition, most sensor combinations have negative R² values and high variance, demonstrating poor performance in real-time force prediction. After using the GenForce model, all force errors are significantly reduced, and R² values improve across all combinations.
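
For reference, the per-axis error metrics quoted here (MAE and the coefficient of determination R², which can be negative for poor predictions) can be computed as in the sketch below; the arrays are placeholders for predicted forces and the ATI F/T references.

  # Per-axis MAE and R^2 between predicted forces and F/T sensor references.
  # Rows are time steps, columns are (Fx, Fy, Fz); the data here is synthetic.
  import numpy as np

  def force_metrics(pred, ref):
      pred, ref = np.asarray(pred, float), np.asarray(ref, float)
      mae = np.mean(np.abs(pred - ref), axis=0)
      ss_res = np.sum((ref - pred) ** 2, axis=0)
      ss_tot = np.sum((ref - ref.mean(axis=0)) ** 2, axis=0)
      r2 = 1.0 - ss_res / ss_tot        # negative when worse than predicting the mean
      return mae, r2

  if __name__ == "__main__":
      rng = np.random.default_rng(1)
      ref = rng.normal(size=(500, 3))                  # placeholder F/T readings
      pred = ref + 0.1 * rng.normal(size=(500, 3))     # placeholder predictions
      mae, r2 = force_metrics(pred, ref)
      print(mae.round(3), r2.round(3))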
Heterogeneous transfer performance compared with an ATI Nano17 F/T sensor before and after using GenForce. The average MAE over all six combinations decreases to below 0.92 N for normal force, while the Fx and Fy errors drop below 0.22 N and 0.3 N, respectively. Notably, the uSkin_TacTip group shows a 93% improvement in MAE (from 7.76 N to 0.52 N) and a 66% improvement in Fy (from 0.59 N to 0.2 N). The force errors for all combinations are centered around zero, within -4 N to 0 N in the normal direction and -3 N to 3 N in the shear direction, demonstrating both the accuracy and reliability of our model.

Applications

Force prediction under dynamic contact events compared with an ATI Nano17 F/T sensor. We evaluated our model in real time across six transfer groups under more dynamic conditions (see Supplementary Video 3). Tactile sensors were mounted on an ATI Nano17 F/T sensor, and forces were applied using four daily objects with different shapes and materials: a screwdriver, a glue stick, a plastic pizza, and a LEGO block. A human operator performed five common dynamic contact events, including press, rub, roll, push, and pull, as well as continuous combinations of these on the sensor surface. All test groups exhibited fast, accurate responses comparable to the commercial F/T sensor.
Daily object grasping with transferable force sensing and control using GelSight (A-II) and uSkin (three-axis) sensors. We transfer the force prediction model to these two sensors from a third, flat-surface vision-based tactile sensor (GelSight, marker pattern A-II) using our GenForce model. The task requires the robot to grasp nine daily objects of different sizes, shapes, and materials without damaging them. These objects include a potato chip, a grape, a strawberry, an orange, a plum, a wood block, a glue stick, a meat box, and a tea box, none of which appear in the training dataset. During grasping, the arm is driven by a proportional controller to grasp the objects with fixed target normal forces ranging from 0.6 N to 1.2 N, and both sensors share the same force controller. As shown in the figure above and Supplementary Videos 4 and 5, both sensors equip the robot arm with accurate force sensing, allowing it to grasp all objects at the target forces without damage. Even for challenging objects such as chips and fresh fruits, the robot achieves delicate grasping by combining the transferred force prediction model with force control.
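
The grasp controller is a simple proportional loop on the predicted normal force. The sketch below illustrates that idea; the gains, tolerances, and the gripper and force-prediction interfaces are hypothetical placeholders, not the actual robot driver used here.

  # Proportional force control on the predicted normal force. The `gripper`
  # and `predict_force` interfaces and all constants are assumed for
  # illustration; they do not reflect the real robot setup.
  import time

  def proportional_grasp(gripper, predict_force, target_fz=0.8,
                         kp=0.002, tol=0.05, rate_hz=30.0, timeout_s=10.0):
      """Close the gripper until the predicted normal force reaches target_fz (N)."""
      deadline = time.time() + timeout_s
      while time.time() < deadline:
          fx, fy, fz = predict_force()        # three-axis force from the transferred model
          error = target_fz - fz
          if abs(error) < tol:
              return True                     # target normal force reached
          # Positive error (too little force) -> close further; negative -> open.
          gripper.move_relative(-kp * error)  # width change in metres (assumed API)
          time.sleep(1.0 / rate_hz)
      return False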
Transferable force sensing for robot slip detection and avoidance. A curved-surface, vision-based TacTip sensor (palm shape) is mounted on the left finger, and a three-axis magnetic uSkin sensor is mounted on the right. The TacTip's force prediction model is transferred from GelSight (D-I), and the uSkin model is subsequently transferred from the TacTip. The task proceeds through several stages: moving down, proportional-control grasping, lifting, slip detection and avoidance at the top position, release, and return to home, with the force controller active only during the grasp and slip-detection phases. We evaluate four objects: a banana, a plum, a meat box, and a glue stick. Beyond completing the grasp, external forces are applied by a human at the top position to induce slip. The robot detects slip via changes in shear force and responds by narrowing the gripper width. As shown in the figure above and Supplementary Videos 6 and 7, the system completes all stages successfully, demonstrating the practical applicability of our model in real robotic tasks.
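
Slip is detected from changes in the predicted shear force. The minimal sketch below flags slip when the shear-force magnitude changes by more than a threshold within a short window and narrows the gripper in response; the window length, threshold, and gripper call are illustrative assumptions rather than the exact parameters used in this work.

  # Shear-based slip detection with a simple windowed-change threshold.
  # Parameters and the `gripper.narrow` call are assumptions for illustration.
  from collections import deque
  import math

  class SlipDetector:
      def __init__(self, window=5, threshold=0.15):
          self.history = deque(maxlen=window)   # recent shear magnitudes (N)
          self.threshold = threshold            # allowed change within the window (N)

      def update(self, fx, fy):
          self.history.append(math.hypot(fx, fy))
          if len(self.history) < self.history.maxlen:
              return False
          return (max(self.history) - min(self.history)) > self.threshold

  def react_to_slip(detector, gripper, fx, fy, step=0.001):
      """Narrow the gripper by `step` metres whenever slip is detected."""
      if detector.update(fx, fy):
          gripper.narrow(step)                  # assumed gripper API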

Supplementary Videos

Marker-to-marker translation
Force Prediction Performance
Dynamic Force Test
Daily object grasping (1)
Daily object grasping (2)
Slip Detection & Avoidance (1)
Slip Detection & Avoidance (2)

Citation

If you find our work helpful, feel free to cite the following papers:


  @article{chen2025general,
  title={General Force Sensation for Tactile Robot},
  author={Chen, Zhuo and Ou, Ni and Zhang, Xuyang and Wu, Zhiyuan
          and Zhao, Yongqiang and Wang, Yupeng and Lepora, Nathan
          and Jamone, Lorenzo and Deng, Jiankang and Luo, Shan},
  journal={arXiv preprint arXiv:2503.01058},
  year={2025}
  }

  @inproceedings{chen2025transforce,
  title={Transforce: Transferable force prediction for vision-based tactile sensors with sequential image translation},
  author={Chen, Zhuo and Ou, Ni and Zhang, Xuyang and Luo, Shan},
  booktitle={2025 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={237--243},
  year={2025},
  organization={IEEE}
  }

  @inproceedings{chen2024deep,
  title={Deep domain adaptation regression for force calibration of optical tactile sensors},
  author={Chen, Zhuo and Ou, Ni and Jiang, Jiaqi and Luo, Shan},
  booktitle={2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={13561--13568},
  year={2024},
  organization={IEEE}
  }

  @article{ou2024marker,
  title={Marker or markerless? mode-switchable optical tactile sensing for diverse robot tasks},
  author={Ou, Ni and Chen, Zhuo and Luo, Shan},
  journal={IEEE Robotics and Automation Letters},
  year={2024},
  publisher={IEEE}
  }
