TY - JOUR
T1 - Learning Human-Arm Reaching Motion Using a Wearable Device in Human-Robot Collaboration
AU - Kahanowich, Nadav D.
AU - Sintov, Avishai
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2024
Y1 - 2024
AB - Many tasks performed by two humans require mutual interaction between arms, such as handing over tools and objects. In order for a robotic arm to interact with a human in the same way, it must reason about the location of the human arm in real-time. Furthermore, to achieve timely interaction, the robot must be able to predict the final target of the human in order to plan and initiate motion beforehand. In this paper, we explore the use of a low-cost wearable device equipped with two inertial measurement units (IMUs) for learning reaching motion in real-time applications of Human-Robot Collaboration (HRC). A wearable device can replace or complement visual perception in cases of poor lighting or occlusions in a cluttered environment. We first train a neural network model to estimate the current location of the arm. Then, we propose a novel model based on a recurrent neural network to predict the future target of the human arm during motion in real-time. Early prediction of the target grants the robot sufficient time to plan and initiate motion while the human is still moving. The accuracy of the models is analyzed with respect to the features included in the motion representation. Through experiments and real demonstrations with a robotic arm, we show that sufficient accuracy is achieved for feasible HRC without any visual perception. Once trained, the system can be deployed in various spaces with no additional effort. The models exhibit high accuracy for various initial poses of the human arm. Moreover, the trained models are shown to provide high success rates with additional human participants not included in the training of the model.
KW - Human-robot collaboration
KW - intention recognition
KW - wearable technology
UR - http://www.scopus.com/inward/record.url?scp=85186017104&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2024.3365661
DO - 10.1109/ACCESS.2024.3365661
M3 - Article
AN - SCOPUS:85186017104
SN - 2169-3536
VL - 12
SP - 24855
EP - 24865
JO - IEEE Access
JF - IEEE Access
ER -