Deep neural networks are prone to adversarial examples that maliciously alter the network's outcome. Due to the increasing popularity of 3D sensors in safety-critical systems and the vast deployment of deep learning models for 3D point sets, there is growing interest in adversarial attacks and defenses for such models. So far, research has focused on the semantic level, namely, deep point cloud classifiers. However, point clouds are also widely used in a geometric-related form that involves encoding and reconstructing the geometry. In this work, we are the first to consider the problem of adversarial examples at a geometric level. In this setting, the question is how to craft a small change to a clean source point cloud so that, after passing through an autoencoder model, it is reconstructed as a different target shape. Our attack stands in sharp contrast to existing semantic attacks on 3D point clouds: while those works aim to change the label predicted by a classifier, we alter the entire reconstructed geometry. Additionally, we demonstrate the robustness of our attack under defense, showing that remnant characteristics of the target shape are still present at the output after the defense is applied to the adversarial input. Our code is publicly available.
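The attack objective described above can be sketched in miniature. The following is a hedged illustration only, not the paper's actual method: the "autoencoder" is a toy fixed linear map standing in for a trained deep model, the squared error stands in for a geometric reconstruction loss (a real attack on point clouds would typically use a set-based distance such as Chamfer distance), and the weight `lam` balancing target matching against perturbation size is a hypothetical choice. The sketch finds a small perturbation `delta` of a source cloud so that the autoencoder's output moves toward the reconstruction of a different target shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a point cloud autoencoder: a fixed low-rank linear map.
# (The real setting assumes a trained deep autoencoder.)
n_points, dim, latent = 64, 3, 8
W_enc = rng.standard_normal((n_points * dim, latent)) * 0.1
W_dec = rng.standard_normal((latent, n_points * dim)) * 0.1

def autoencode(x_flat):
    """Encode then decode a flattened point cloud."""
    return (x_flat @ W_enc) @ W_dec

source = rng.standard_normal(n_points * dim)                      # clean source cloud
target_recon = autoencode(rng.standard_normal(n_points * dim))    # reconstruction of a target shape

# Geometric attack objective (sketch):
#   minimize ||AE(source + delta) - target_recon||^2 + lam * ||delta||^2
lam = 0.1    # hypothetical trade-off weight: small perturbation vs. target match
lr = 0.02
delta = np.zeros_like(source)
for _ in range(500):
    err = autoencode(source + delta) - target_recon
    # Analytic gradient of the objective w.r.t. delta (linear AE case)
    grad = 2 * (err @ W_dec.T) @ W_enc.T + 2 * lam * delta
    delta -= lr * grad

init_err = np.linalg.norm(autoencode(source) - target_recon)
final_err = np.linalg.norm(autoencode(source + delta) - target_recon)
print(final_err / init_err)  # ratio well below 1: output driven toward the target
```

With a deep autoencoder the same loop would use automatic differentiation instead of the closed-form gradient, but the structure of the objective, reconstruction error toward the target plus a penalty keeping the perturbation small, is the same.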