TY - GEN
T1 - Membership Inference Attack Using Self Influence Functions
AU - Cohen, Gilad
AU - Giryes, Raja
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/1/3
Y1 - 2024/1/3
AB - Membership inference (MI) attacks aim to determine whether a specific data sample was used to train a machine learning model. Thus, MI is a major privacy threat to models trained on private sensitive data, such as medical records. In MI attacks one may consider the black-box setting, where the model's parameters and activations are hidden from the adversary, or the white-box setting, where they are available to the attacker. In this work, we focus on the latter and present a novel MI attack for it that employs influence functions, or more specifically the samples' self-influence scores, to perform the MI prediction. The proposed method is evaluated on the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets using various architectures such as AlexNet, ResNet, and DenseNet. Our attack achieves new state-of-the-art (SOTA) results for MI even with limited adversarial knowledge, and is effective against MI defense methods such as data augmentation and differential privacy. Our code is available at https://github.com/giladcohen/sif-mi-attack.
KW - Algorithms: Explainable, fair, accountable, privacy-preserving, ethical computer vision
UR - http://www.scopus.com/inward/record.url?scp=85192009079&partnerID=8YFLogxK
U2 - 10.1109/WACV57701.2024.00482
DO - 10.1109/WACV57701.2024.00482
M3 - Conference contribution
AN - SCOPUS:85192009079
T3 - Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
SP - 4880
EP - 4889
BT - Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
Y2 - 4 January 2024 through 8 January 2024
ER -
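
Note on the abstract above: the paper scores each sample's self-influence and bases the membership decision on that score. The following is a minimal illustrative sketch, not the authors' implementation (see the linked repository for that). It assumes PyTorch, approximates the inverse Hessian in the influence-function quantity grad L(z)^T H^{-1} grad L(z) by the identity (so self-influence reduces to the squared loss-gradient norm), and treats the model, loss function, threshold tau, and the below-threshold-means-member rule as placeholder assumptions.

import torch

# Illustrative sketch only. True self-influence of a sample z is
# grad L(z)^T H^{-1} grad L(z); here the Hessian H is approximated by the
# identity, so the score reduces to the squared gradient norm.
# `model`, `loss_fn`, and `tau` are hypothetical inputs.

def self_influence(model, loss_fn, x, y):
    # Loss of the single sample (x, y), batched to size 1.
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    # Gradient of the loss w.r.t. all trainable parameters.
    grads = torch.autograd.grad(
        loss, [p for p in model.parameters() if p.requires_grad]
    )
    # Identity-Hessian approximation: self-influence ~ ||grad||^2.
    return sum(g.pow(2).sum() for g in grads).item()

def mi_predict(model, loss_fn, x, y, tau):
    # Toy membership rule: training members tend to be well fit, so a low
    # self-influence score is taken here to indicate membership. Both the
    # direction of the rule and the threshold tau are assumptions.
    return self_influence(model, loss_fn, x, y) < tau

A practical attack would calibrate tau on self-influence scores of known members and non-members (or via shadow models), and would estimate the inverse-Hessian-vector product directly (e.g. with an iterative approximation) rather than using the identity shortcut above.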