Feasible human-robot collaboration requires intuitive and fluent understanding of human motion in shared tasks. The object held in the hand provides the most valuable information about a human's intended task. In this letter, we propose a simple and affordable approach in which a wearable force-myography (FMG) device is used to classify objects grasped by a human. The device, worn on the forearm, incorporates 15 force sensors whose readings reflect the configuration of the hand and fingers during grasping. Hence, a classifier is trained to identify various objects from data recorded while holding them. To augment the classifier, we propose an iterative approach in which additional signals are acquired in real time to increase certainty about the predicted object. We show that the approach provides robust classification: the device can be taken off and put back on while maintaining high accuracy. The approach also improves the performance of trained classifiers that initially produced low accuracy due to insufficient data or non-optimal hyper-parameters. A classification success rate of more than 97% is reached within a short period of time. Furthermore, we analyze the key sensor locations on the forearm that provide the most accurate and robust classification.
- Human-robot collaboration
- Intention recognition
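The iterative scheme described above can be sketched as a simple posterior-accumulation loop: each new sensor reading yields a class-probability vector, per-reading posteriors are fused until one object class is sufficiently certain, and only then is a prediction emitted. The sketch below is illustrative, not the letter's implementation; the class count, confidence threshold, naive-Bayes-style fusion rule, and the toy stand-in classifier are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 5       # hypothetical number of graspable objects
NUM_SENSORS = 15      # force sensors on the forearm device (from the letter)
CONF_THRESHOLD = 0.9  # hypothetical stopping threshold on posterior mass

def classify_sample(x):
    """Toy stand-in for the trained classifier: returns a probability
    vector over object classes for one 15-sensor FMG reading.
    Class 2 is artificially favored so the demo converges."""
    logits = rng.normal(size=NUM_CLASSES)
    logits[2] += 2.0
    p = np.exp(logits - logits.max())  # softmax, numerically stable
    return p / p.sum()

def iterative_classify(stream, max_steps=50):
    """Fuse per-sample posteriors multiplicatively (naive-Bayes style,
    in log space) until one class exceeds the confidence threshold."""
    log_post = np.zeros(NUM_CLASSES)  # uniform prior in log space
    for step, x in enumerate(stream, start=1):
        log_post += np.log(classify_sample(x) + 1e-12)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() >= CONF_THRESHOLD or step >= max_steps:
            return int(post.argmax()), float(post.max()), step

# Simulated stream of raw sensor readings
samples = (rng.normal(size=NUM_SENSORS) for _ in range(50))
label, conf, steps = iterative_classify(samples)
print(label, round(conf, 3), steps)
```

Fusing in log space avoids numerical underflow as evidence accumulates, and the early-exit condition is what lets the system trade a few extra real-time readings for higher certainty about the grasped object.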