TY - JOUR
T1 - Robust Multi-User In-Hand Object Recognition in Human-Robot Collaboration Using a Wearable Force-Myography Device
AU - Bamani, Eran
AU - Kahanowich, Nadav D.
AU - Ben-David, Inbar
AU - Sintov, Avishai
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2022/1/1
Y1 - 2022/1/1
AB - Applicable human-robot collaboration requires intuitive recognition of human intention during shared work. A grasped object, such as a tool held by the human, provides vital information about the upcoming task. In this letter, we explore the use of a wearable device to non-visually recognize objects within the human hand in various possible grasps. The device is based on Force-Myography (FMG), in which simple and affordable force sensors measure perturbations of the forearm muscles. We propose a novel deep neural network architecture, termed Flip-U-Net, inspired by the familiar U-Net architecture used for image segmentation. The Flip-U-Net is trained on data collected from several human participants with multiple objects of each class. The data is collected while the objects are manipulated between different grasps and arm postures; it is then pre-processed with data augmentation and used to train a Variational Autoencoder for dimensionality-reduction mapping. While prior work did not provide a transferable FMG-based model, we show that the proposed network can classify objects grasped by multiple new users without additional training effort. Experiments with 12 test participants show a classification accuracy of approximately 95% over multiple grasps and objects. Correlations between accuracy and various anthropometric measures are also presented. Furthermore, we show that the model can be fine-tuned to a particular user based on an anthropometric measure.
KW - Human-robot collaboration
KW - intention recognition
UR - http://www.scopus.com/inward/record.url?scp=85118217002&partnerID=8YFLogxK
U2 - 10.1109/LRA.2021.3118087
DO - 10.1109/LRA.2021.3118087
M3 - Article
AN - SCOPUS:85118217002
SN - 2377-3766
VL - 7
SP - 104
EP - 111
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 1
ER -