TY - CONF
T1 - Geometric Adversarial Attacks and Defenses on 3D Point Clouds
AU - Lang, Itai
AU - Kotlicki, Uriel
AU - Avidan, Shai
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Deep neural networks are prone to adversarial examples that maliciously alter the network's outcome. Due to the increasing popularity of 3D sensors in safety-critical systems and the vast deployment of deep learning models for 3D point sets, there is a growing interest in adversarial attacks and defenses for such models. So far, the research has focused on the semantic level, namely, deep point cloud classifiers. However, point clouds are also widely used in a geometric-related form that includes encoding and reconstructing the geometry. In this work, we are the first to consider the problem of adversarial examples at a geometric level. In this setting, the question is how to craft a small change to a clean source point cloud that leads, after passing through an autoencoder model, to the reconstruction of a different target shape. Our attack is in sharp contrast to existing semantic attacks on 3D point clouds. While such works aim to modify the label predicted by a classifier, we alter the entire reconstructed geometry. Additionally, we demonstrate the robustness of our attack in the case of defense, where we show that remnant characteristics of the target shape are still present at the output after applying the defense to the adversarial input. Our code is publicly available.
AB - Deep neural networks are prone to adversarial examples that maliciously alter the network's outcome. Due to the increasing popularity of 3D sensors in safety-critical systems and the vast deployment of deep learning models for 3D point sets, there is a growing interest in adversarial attacks and defenses for such models. So far, the research has focused on the semantic level, namely, deep point cloud classifiers. However, point clouds are also widely used in a geometric-related form that includes encoding and reconstructing the geometry. In this work, we are the first to consider the problem of adversarial examples at a geometric level. In this setting, the question is how to craft a small change to a clean source point cloud that leads, after passing through an autoencoder model, to the reconstruction of a different target shape. Our attack is in sharp contrast to existing semantic attacks on 3D point clouds. While such works aim to modify the label predicted by a classifier, we alter the entire reconstructed geometry. Additionally, we demonstrate the robustness of our attack in the case of defense, where we show that remnant characteristics of the target shape are still present at the output after applying the defense to the adversarial input. Our code is publicly available.
KW - 3D Point Clouds
KW - Adversarial Attacks
KW - Deep Learning
KW - Defense Methods
KW - Geometry Processing
UR - http://www.scopus.com/inward/record.url?scp=85125007456&partnerID=8YFLogxK
U2 - 10.1109/3DV53792.2021.00127
DO - 10.1109/3DV53792.2021.00127
M3 - Conference contribution
AN - SCOPUS:85125007456
T3 - Proceedings - 2021 International Conference on 3D Vision, 3DV 2021
SP - 1196
EP - 1205
BT - Proceedings - 2021 International Conference on 3D Vision, 3DV 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 9th International Conference on 3D Vision, 3DV 2021
Y2 - 1 December 2021 through 3 December 2021
ER -