TY - JOUR
T1 - Robust classification of grasped objects in intuitive human-robot collaboration using a wearable force-myography device
AU - Kahanowich, Nadav D.
AU - Sintov, Avishai
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/4
Y1 - 2021/4
N2 - Feasible human-robot collaboration requires an intuitive and fluent understanding of human motion in shared tasks. The object in hand provides the most valuable information about the intended task of a human. In this letter, we propose a simple and affordable approach in which a wearable force-myography device is used to classify objects grasped by a human. The device, worn on the forearm, incorporates 15 force sensors whose readings indicate the configuration of the hand and fingers during grasping. Hence, a classifier is trained to identify various objects using data recorded while holding them. To augment the classifier, we propose an iterative approach in which additional signals are acquired in real time to increase certainty about the predicted object. We show that the approach provides robust classification: the device can be taken off and placed back on while maintaining high accuracy. The approach also improves the performance of trained classifiers that initially produced low accuracy due to insufficient data or non-optimal hyper-parameters. A classification success rate of more than 97% is reached within a short period of time. Furthermore, we analyze the key locations of sensors on the forearm that provide the most accurate and robust classification.
AB - Feasible human-robot collaboration requires an intuitive and fluent understanding of human motion in shared tasks. The object in hand provides the most valuable information about the intended task of a human. In this letter, we propose a simple and affordable approach in which a wearable force-myography device is used to classify objects grasped by a human. The device, worn on the forearm, incorporates 15 force sensors whose readings indicate the configuration of the hand and fingers during grasping. Hence, a classifier is trained to identify various objects using data recorded while holding them. To augment the classifier, we propose an iterative approach in which additional signals are acquired in real time to increase certainty about the predicted object. We show that the approach provides robust classification: the device can be taken off and placed back on while maintaining high accuracy. The approach also improves the performance of trained classifiers that initially produced low accuracy due to insufficient data or non-optimal hyper-parameters. A classification success rate of more than 97% is reached within a short period of time. Furthermore, we analyze the key locations of sensors on the forearm that provide the most accurate and robust classification.
KW - Human-robot collaboration
KW - intention recognition
UR - http://www.scopus.com/inward/record.url?scp=85101148059&partnerID=8YFLogxK
U2 - 10.1109/LRA.2021.3057794
DO - 10.1109/LRA.2021.3057794
M3 - Article
AN - SCOPUS:85101148059
SN - 2377-3766
VL - 6
SP - 1192
EP - 1199
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
M1 - 9350160
ER -